Monday, June 17, 2019
Command to set an account to never expire.
chage -I -1 -m 0 -M 99999 -E -1 gnadig
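As a quick reference, a minimal sketch (using a hypothetical /etc/shadow entry) of which shadow fields those chage flags control: -m sets field 4 (min), -M field 5 (max), -I field 7 (inactive), -E field 8 (expire); -1 or an empty field means "never".

```shell
# Hypothetical /etc/shadow line; fields are name:pass:lastchg:min:max:warn:inactive:expire:flag
line='gnadig:$6$hash:18000:0:99999:7:::'
echo "$line" | awk -F: '{print "min="$4, "max="$5, "inactive="$7, "expire="$8}'
# prints: min=0 max=99999 inactive= expire=
```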
How to find the OS version/release on multiple hosts in Linux.
Note:
1) Set up passwordless SSH authentication from one of the nodes,
2) and collect the node list before issuing the command.
3) for i in $(cat hostlist); do ssh -o PasswordAuthentication=no $i "cat /etc/*release | grep 'release 7'" >/dev/null 2>&1; [ $? -eq 0 ] && echo $i; done
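The loop's shape can be exercised locally by replacing the ssh call with a stub; check_host below is a hypothetical stand-in for the ssh ... grep 'release 7' test:

```shell
# check_host stands in for: ssh -o PasswordAuthentication=no $i "cat /etc/*release | grep 'release 7'"
check_host() { [ "$1" = "node2" ]; }   # pretend only node2 is on release 7
printf 'node1\nnode2\nnode3\n' > /tmp/hostlist
for i in $(cat /tmp/hostlist); do
  if check_host "$i"; then echo "$i"; fi   # print only the matching hosts
done
# prints: node2
```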
Thursday, May 2, 2019
Command to convert an OpenSSH .pub key to PEM format (the original filename was lost; <keyfile> is a placeholder).
ssh-keygen -f <keyfile>.pub -e -m pem
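For example, generating a throwaway key pair and exporting its public half in PEM format (the paths under mktemp are just for the demo):

```shell
tmp=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N '' -f "$tmp/demo" -q      # throwaway key pair
ssh-keygen -f "$tmp/demo.pub" -e -m pem > "$tmp/demo.pem"
head -1 "$tmp/demo.pem"     # PEM header, e.g. -----BEGIN RSA PUBLIC KEY-----
rm -rf "$tmp"
```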
Wednesday, March 13, 2019
How to clear FMA faults from ILOM on a Sun/Oracle x86 system.
Log in to ILOM over SSH and open the fault management shell:
start /SP/faultmgmt/shell
Are you sure you want to start /SP/faultmgmt/shell (y/n)? y
faultmgmtsp> fmadm faulty
This lists the active faults.
Use the following command to repair, by fault UUID:
fmadm repair eef41566-1f7b-c2a6-cbcd-c29b74a71b34
fmadm repair afbed753-d28c-cd23-a770-8902833305ef
OR by component path:
fmadm repair /SYS
fmadm repair /SYS/MB/P0
Note: once cleared, issue the following command again to verify nothing is listed:
faultmgmtsp> fmadm faulty
Regards
Gurudatta N.R
Monday, March 11, 2019
HTTP status codes, from https://httpstatuses.com/
1×× Informational
100 Continue
101 Switching Protocols
102 Processing
2×× Success
200 OK
201 Created
202 Accepted
203 Non-authoritative Information
204 No Content
205 Reset Content
206 Partial Content
207 Multi-Status
208 Already Reported
226 IM Used
3×× Redirection
300 Multiple Choices
301 Moved Permanently
302 Found
303 See Other
304 Not Modified
305 Use Proxy
307 Temporary Redirect
308 Permanent Redirect
4×× Client Error
400 Bad Request
401 Unauthorized
402 Payment Required
403 Forbidden
404 Not Found
405 Method Not Allowed
406 Not Acceptable
407 Proxy Authentication Required
408 Request Timeout
409 Conflict
410 Gone
411 Length Required
412 Precondition Failed
413 Payload Too Large
414 Request-URI Too Long
415 Unsupported Media Type
416 Requested Range Not Satisfiable
417 Expectation Failed
418 I'm a teapot
421 Misdirected Request
422 Unprocessable Entity
423 Locked
424 Failed Dependency
426 Upgrade Required
428 Precondition Required
429 Too Many Requests
431 Request Header Fields Too Large
444 Connection Closed Without Response
451 Unavailable For Legal Reasons
499 Client Closed Request
5×× Server Error
500 Internal Server Error
501 Not Implemented
502 Bad Gateway
503 Service Unavailable
504 Gateway Timeout
505 HTTP Version Not Supported
506 Variant Also Negotiates
507 Insufficient Storage
508 Loop Detected
510 Not Extended
511 Network Authentication Required
599 Network Connect Timeout Error
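A quick way to see which of these codes a server actually returns (example.com is just a placeholder host; curl prints 000 when the host is unreachable):

```shell
# -s silence progress, -o discard the body, -w print only the status code
curl -s -o /dev/null -w '%{http_code}\n' https://example.com/
```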
Friday, March 8, 2019
How to copy an SSH key to multiple nodes using a for loop.
ssh-keygen
# for host in sukhoi.test.com \
sukhoi1.test.com \
sukhoi2.test.com; \
do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; done
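The same loop shape can be dry-run locally, with echo standing in for ssh-copy-id:

```shell
for host in sukhoi.test.com \
            sukhoi1.test.com \
            sukhoi2.test.com; do
  echo "would run: ssh-copy-id -i ~/.ssh/id_rsa.pub $host"
done
# prints one "would run: ..." line per host
```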
Wednesday, February 20, 2019
How to serve a simple web page using Python.
Log in to the Linux box:
1) Create a folder, e.g. ODA
2) Copy the required content into it
3) Run the following command from that folder: python -m SimpleHTTPServer 81 &
4) Browse to http://<ipaddress>:81 and verify
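On Python 3 the module is http.server rather than SimpleHTTPServer; a minimal sketch of the same steps (port 8081 and the /tmp/ODA path are assumptions for the demo):

```shell
mkdir -p /tmp/ODA && cd /tmp/ODA
echo "hello from ODA" > index.html
python3 -m http.server 8081 >/dev/null 2>&1 &   # serve the current folder
srv=$!
sleep 1
curl -s http://127.0.0.1:8081/index.html         # fetch the page back
kill "$srv"
```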
Regards
Gurudatta N.R
Sunday, February 17, 2019
How to check the mirror status on the Exadata cell nodes.
for x in 1 2 5 6 7 8 11; do mdadm --detail /dev/md$x; done
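A read-only way to see the same state without root is /proc/mdstat, which the md driver exposes on any host with software RAID:

```shell
if [ -r /proc/mdstat ]; then
  cat /proc/mdstat                       # one stanza per md device, with [UU]/[_U] sync state
else
  echo "no md driver loaded on this host"
fi
```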
Friday, January 11, 2019
How to replace the USB boot drive in X5-2L/6L in High Capacity configuration in Exadata.
Image version 12.1.2.3.3.161208, High Capacity configuration
1. Turn on the locate indicator light for easier identification of the server being repaired. Log in to CellCLI:
CellCli> alter cell led on
2. Verify the DISK_REPAIR_TIME attribute value:
SQL> select dg.name,a.value from v$asm_attribute a, v$asm_diskgroup dg where a.name = 'disk_repair_time' and a.group_number = dg.group_number;
3. Check if ASM will be OK if the grid disks go OFFLINE:
# cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome
If one or more disks return asmdeactivationoutcome='No', wait and re-check; once all disks return asmdeactivationoutcome='Yes', proceed to the next step.
4. Run the CellCLI command to inactivate all grid disks on the cell that needs to be powered down for maintenance (this can take 10 minutes or longer):
# cellcli
...sample...
CellCLI> ALTER GRIDDISK ALL INACTIVE
GridDisk DATA_CD_00_hostname successfully altered
GridDisk RECO_CD_02_dmorlx8cel01 successfully altered
...repeated for all grid disks...
5. Run the command below; once the disks are offline and inactive in ASM, the output should show asmmodestatus='UNUSED' or 'OFFLINE' and asmdeactivationoutcome=Yes for all grid disks:
CellCLI> list griddisk attributes name,status,asmmodestatus,asmdeactivationoutcome
DATA_CD_00_hostname inactive OFFLINE Yes
RECO_CD_02_hostname inactive OFFLINE Yes
...repeated for all grid disks...
6. Once all disks are offline and inactive, shut down the cell using the following command:
# shutdown -hP now
============Field Engineer to replace USB drive=======
1. Remove and replace the USB thumb drive from the internal USB port. Make a note of which slot the USB drive is inserted into; there are two USB slots.
On an Exadata Storage Server based on the Oracle Server X5-2L, the internal USB ports are located near the
handle on the Rear I/O daughter board located between PCIe slots 3 and 4.
2. Replace the server’s top cover and re-attach the AC power cords. ILOM will take up to 2 minutes to boot.
3. Slide the server back into the rack.
4. After ILOM has booted, power on the server by pressing the power button, and then connect to the server’s console.
==========================================================
From the ILOM CLI:
→ start /SP/console
5. From the console, monitor the system booting. The server should boot from the primary hard disk; this is indicated on the Exadata splash screen.
6. After the storage server has booted, log in as the 'root' user.
7. Run the following to copy the recovery image and configuration data to the new USB stick:
# cd /opt/oracle.SupportTools
# ./make_cellboot_usb -verbose -force
8. Set the next boot to forcibly stop at the BIOS setup menu:
# ipmitool chassis bootdev bios
9. Reboot the server with the following command:
# shutdown -r now
10. Monitor the system booting again. The system should go automatically into the BIOS Setup screen.
11. Once the BIOS Setup screen is displayed on the console, use the arrow keys to navigate to the Boot screen. Check the "Legacy Boot Option Priority" list. Set "USB:USBIN0:ORACLE SSM PMAP" as the first boot device, followed by "PCI RAID Adapter", followed by the onboard network PXE devices. Press "Esc" to exit the "Boot Order Device Priority" screen.
12. Navigate to the Exit screen and select "Save Changes and Exit".
13. The server will boot. This time it should load the Exadata splash screen (grub) from the USB stick and indicate as such.
Sunday, December 30, 2018
How to Install a GUI in CentOS 7
1) yum groupinstall "GNOME Desktop" -y
2) systemctl get-default
3) systemctl set-default graphical.target
4) systemctl get-default
5) systemctl isolate graphical.target
Regards
Gurudatta N.R
Tuesday, December 25, 2018
How to install drupal on docker with swarm
1) Log in to Docker (on the manager node) and create an overlay network for mydrupal
docker network create --driver overlay mydrupal (Drupal for web content management)
Note: the manager uses Raft (a consensus algorithm; its database holds the full cluster state)
API =======================> Accepts commands from clients and creates the service objects
Orchestrator =======================> Reconciliation loop for the service objects; creates the tasks
Allocator =======================> Allocates IP addresses to the tasks
Scheduler =======================> Assigns nodes to tasks
Dispatcher =======================> Checks in on the workers
Worker node
Worker ========================> Connects to the dispatcher to check on its assigned tasks
Executor ========================> Executes the tasks assigned to the worker node.
[root@sukhoi /]# docker network ls
NETWORK ID NAME DRIVER SCOPE
fb31d601ef35 bridge bridge local
07501da8723e docker_gwbridge bridge local
a36af19fa85b host host local
tf6dmepnxekf ingress overlay swarm
wk6guya6sljb mydrupal overlay swarm ==============================> the network just created.
37ca581e5d57 none null local
2) Create a service for PostgreSQL (psql); it will take a couple of minutes to pull the image from the repository.
docker service create --name psql --network mydrupal -e POSTGRES_PASSWORD=JimCarry postgres
3) Once the image is pulled, verify the service:
[root@sukhoi /]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
8q1njxxbsea7 psql replicated 1/1 postgres:latest
[root@sukhoi/]# docker service ps psql
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
co8wisdvpdr1 psql.1 postgres:latest sukhoi Running Running 2 minutes ago
4) Create a drupal service on the mydrupal network (it may take a couple of minutes):
docker service create --name drupal --network mydrupal -p 80:80 drupal
5) Issue the command to check the status of the services
docker service ps drupal
watch docker service ls
docker service inspect drupal =====================> Inspect
Once it is installed, you can try the GUI:
open a browser at http://x.x.x.x:80 (your server IP) and key in the setup information.
How to set up a 3-node swarm cluster in OEL 7.6 with Kernel 3.10.0-957.el7.x86_64
| 1.0) Install OEL 7.2 |
| 2.0) Ensure you have a valid repo configured in /etc/yum.repos.d/ |
| 2.1) yum update -y; once the latest kernel is installed, reboot the node. |
| 2.2) yum install docker -y ========================> Install docker on all 3 nodes |
| 3.0) If you are behind a proxy, configure it under /etc/systemd/system/docker.service.d/ |
| 3.1) Create a systemd drop-in directory for the docker service: |
| $ mkdir -p /etc/systemd/system/docker.service.d |
| Create a file called /etc/systemd/system/docker.service.d/http-proxy.conf that adds the HTTP_PROXY and HTTPS_PROXY environment variables: |
| [Service] |
| Environment="HTTP_PROXY=http://x.x.x.x:80/" "HTTPS_PROXY=http://x.x.x.x:80/" |
| 4.0) systemctl enable docker |
| 5.0) systemctl start docker |
| 6.0) Flush changes. |
| 7.0) systemctl daemon-reload |
| 7.1) systemctl show --property Environment docker |
| Environment=HTTP_PROXY=http://X.X.X.X:8080/ HTTPS_PROXY=http://Y.Y.Y.Y:8080/ |
| 7.2) systemctl status docker |
| docker.service - Docker Application Container Engine |
| Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled) |
| Drop-In: /etc/systemd/system/docker.service.d |
| +-docker-sysconfig.conf, http-proxy.conf |
| Active: active (running) since Mon 2018-11-12 00:26:56 EST; 1h 20min ago |
| Docs: https://docs.docker.com |
| Main PID: 20249 (dockerd) |
| Memory: 73.5M |
| CGroup: /system.slice/docker.service |
| +-20249 /usr/bin/dockerd --selinux-enabled --storage-driver devicemapper --storage-opt dm.basesize=25G |
| +-20260 docker-containerd --config /var/run/docker/containerd/containerd.toml |
| Nov 12 00:59:08 sukhoi dockerd[20249]: time="2018-11-12T00:59:08.003626658-05:00" level=info msg="NetworkDB stats sukhoi (f512c7cd0519...tMsg/s:0" |
| Nov 12 01:04:08 sukhoi dockerd[20249]: time="2018-11-12T01:04:08.203598640-05:00" level=info msg="NetworkDB stats sukhoi (f512c7cd0519...tMsg/s:0" |
| Nov 12 01:09:08 sukhoi dockerd[20249]: time="2018-11-12T01:09:08.403655398-05:00" level=info msg="NetworkDB stats sukhoi (f512c7cd0519...tMsg/s:0" |
| Nov 12 01:14:08 sukhoi dockerd[20249]: time="2018-11-12T01:14:08.603535523-05:00" level=info msg="NetworkDB stats sukhoi (f512c7cd0519...tMsg/s:0" |
| Nov 12 01:19:08 sukhoi dockerd[20249]: time="2018-11-12T01:19:08.603644944-05:00" level=info msg="NetworkDB stats sukhoi (f512c7cd0519...tMsg/s:0" |
| Nov 12 01:24:08 sukhoi dockerd[20249]: time="2018-11-12T01:24:08.803625057-05:00" level=info msg="NetworkDB stats sukhoi (f512c7cd0519...tMsg/s:0" |
| Nov 12 01:29:09 sukhoi dockerd[20249]: time="2018-11-12T01:29:09.003692469-05:00" level=info msg="NetworkDB stats sukhoi (f512c7cd0519...tMsg/s:0" |
| Nov 12 01:34:09 sukhoi dockerd[20249]: time="2018-11-12T01:34:09.203640712-05:00" level=info msg="NetworkDB stats sukhoi (f512c7cd0519...tMsg/s:0" |
| Nov 12 01:39:09 sukhoi dockerd[20249]: time="2018-11-12T01:39:09.403656758-05:00" level=info msg="NetworkDB stats sukhoi (f512c7cd0519...tMsg/s:0" |
| Nov 12 01:44:09 sukhoi dockerd[20249]: time="2018-11-12T01:44:09.603580926-05:00" level=info msg="NetworkDB stats sukhoi (f512c7cd0519...tMsg/s:0" |
| Hint: Some lines were ellipsized, use -l to show in full. |
| 7.4) Once docker is installed, run docker info |
| Containers: 0 |
| Running: 0 |
| Paused: 0 |
| Stopped: 0 |
| Images: 0 |
| Server Version: 18.03.1-ol |
| Storage Driver: devicemapper |
| Pool Name: docker-8:2-4831-pool |
| Pool Blocksize: 65.54kB |
| Base Device Size: 26.84GB |
| Backing Filesystem: xfs |
| Udev Sync Supported: true |
| Data file: /dev/loop0 |
| Metadata file: /dev/loop1 |
| Data loop file: /var/lib/docker/devicemapper/devicemapper/data |
| Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata |
| Data Space Used: 14.42MB |
| Data Space Total: 107.4GB |
| Data Space Available: 107.4GB |
| Metadata Space Used: 581.6kB |
| Metadata Space Total: 2.147GB |
| Metadata Space Available: 2.147GB |
| Thin Pool Minimum Free Space: 10.74GB |
| Deferred Removal Enabled: true |
| Deferred Deletion Enabled: true |
| Deferred Deleted Device Count: 0 |
| Library Version: 1.02.149-RHEL7 (2018-07-20) |
| Logging Driver: json-file |
| Cgroup Driver: cgroupfs |
| Plugins: |
| Volume: local |
| Network: bridge host macvlan null overlay |
| Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog |
| Swarm: inactive |
| NodeID: zcbc4vv5757m20385cz7jqxpu |
| Is Manager: true |
| ClusterID: 1s6r7l37c7bzlvkh6ovthf4tr |
| Managers: 1 |
| Nodes: 1 |
| Orchestration: |
| Task History Retention Limit: 5 |
| Raft: |
| Snapshot Interval: 10000 |
| Number of Old Snapshots to Retain: 0 |
| Heartbeat Tick: 1 |
| Election Tick: 10 |
| Dispatcher: |
| Heartbeat Period: 5 seconds |
| CA Configuration: |
| Expiry Duration: 3 months |
| Force Rotate: 0 |
| Autolock Managers: false |
| Root Rotation In Progress: false |
| Node Address: x.x.x.x |
| Runtimes: runc |
| Default Runtime: runc |
| Init Binary: docker-init |
| containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88 |
| runc version: 4fc53a81fb7c994640722ac585fa9ca548971871 |
| init version: 949e6fa |
| Security Options: |
| seccomp |
| Profile: default |
| Kernel Version: 3.10.0-327.el7.x86_64 |
| Operating System: Oracle Linux Server 7.2 |
| OSType: linux |
| Architecture: x86_64 |
| CPUs: 24 |
| Total Memory: 125.7GiB |
| Name: sukhoi |
| ID: ZOOZ:24F2:FWUS:OBQ4:DBRU:5YCU:G364:EMQQ:EUZ7:2VGH:4L2W:DPJB |
| Docker Root Dir: /var/lib/docker |
| Debug Mode (client): false |
| Debug Mode (server): false |
| HTTP Proxy: http://x.x.x.x:80/ |
| HTTPS Proxy: http://x.x.x.x:80/" |
| Registry: https://index.docker.io/v1/ |
| Labels: |
| Experimental: false |
| Insecure Registries: |
| 127.0.0.0/8 |
| Live Restore Enabled: false |
| 8) Once we install the docker ( Default networks) |
| [root@x4270akash]# docker network ls |
| NETWORK ID NAME DRIVER SCOPE |
| 29d3ba90ff4c bridge bridge local |
| 8e955dd25905 host host local |
| 63ac0e5cf0e7 none null local |
| 9) Once the docker swarm init command is issued, it creates the overlay/ingress and docker_gwbridge networks (these are used for inter-node communication) |
| NETWORK ID NAME DRIVER SCOPE |
| fb31d601ef35 bridge bridge local |
| 07501da8723e docker_gwbridge bridge local =======================> |
| a36af19fa85b host host local |
| tf6dmepnxekf ingress overlay swarm =======================> |
| 37ca581e5d57 none null local |
| Swarm initialized: current node (dfmvr0p41u2xaroxju772ea8f) is now a manager. |
| 8) From the client, to add a worker to this swarm, run the following command: |
| docker swarm join --token SWMTKN-1-3pgpsq9agfxbsy2asvjew2p30afm8rdfw5szlh9zcb3c5u8aps-20dz136qydn4n05rkpfe592o0 x.x.x.x:2377 |
| 9) To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions. |
| [root@sukhoi /]# docker node ls |
| ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION |
| v0c5jzt313g58jn5hhl5bxfl2 akash Ready Active 18.03.1-ol |
| dfmvr0p41u2xaroxju772ea8f * sukhoi Ready Active Leader 18.03.1-ol |
| sit20yl9qgftlc6r8vie62nax lca Ready Active 18.03.1-ol |
| Note: Docker swarm commands only work from the manager. |
| [root@sukhoi /]# docker info (from the manager node) |
| Containers: 0 |
| Running: 0 |
| Paused: 0 |
| Stopped: 0 |
| Images: 0 |
| Server Version: 18.03.1-ol |
| Storage Driver: devicemapper |
| Pool Name: docker-8:2-2148748183-pool |
| Pool Blocksize: 65.54kB |
| Base Device Size: 26.84GB |
| Backing Filesystem: xfs |
| Udev Sync Supported: true |
| Data file: /dev/loop0 |
| Metadata file: /dev/loop1 |
| Data loop file: /var/lib/docker/devicemapper/devicemapper/data |
| Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata |
| Data Space Used: 14.35MB |
| Data Space Total: 107.4GB |
| Data Space Available: 107.4GB |
| Metadata Space Used: 17.36MB |
| Metadata Space Total: 2.147GB |
| Metadata Space Available: 2.13GB |
| Thin Pool Minimum Free Space: 10.74GB |
| Deferred Removal Enabled: true |
| Deferred Deletion Enabled: true |
| Deferred Deleted Device Count: 0 |
| Library Version: 1.02.149-RHEL7 (2018-07-20) |
| Logging Driver: json-file |
| Cgroup Driver: cgroupfs |
| Plugins: |
| Volume: local |
| Network: bridge host macvlan null overlay |
| Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog |
| Swarm: active =============================================> Swarm |
| NodeID: dfmvr0p41u2xaroxju772ea8f |
| Is Manager: true |
| ClusterID: q2lp94b3oe1a0q6ee3y8y8pv6 |
| Managers: 1 ===============================================> |
| Nodes: 4 ===============================================> |
| Orchestration: |
| Task History Retention Limit: 5 |
| Raft: |
| Snapshot Interval: 10000 |
| Number of Old Snapshots to Retain: 0 |
| Heartbeat Tick: 1 |
| Election Tick: 10 |
| Dispatcher: |
| Heartbeat Period: 5 seconds |
| CA Configuration: |
| Expiry Duration: 3 months |
| Force Rotate: 0 |
| Autolock Managers: false |
| Root Rotation In Progress: false |
| Node Address: x.x.x.x |
| Manager Addresses: |
| x.s.x.x:2377 |
| Runtimes: runc |
| Default Runtime: runc |
| Init Binary: docker-init |
| containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88 |
| runc version: 4fc53a81fb7c994640722ac585fa9ca548971871 |
| init version: 949e6fa |
| Security Options: |
| seccomp |
| Profile: default |
| Kernel Version: 3.10.0-957.el7.x86_64 |
| Operating System: Oracle Linux Server 7.6 |
| OSType: linux |
| Architecture: x86_64 |
| CPUs: 32 |
| Total Memory: 251.7GiB |
| Name: sukhoi |
| ID: DFC4:HDQE:RT3W:LZ7K:FZUT:Z7FD:I36D:IV7F:5U6C:QALR:ISZP:L2V2 |
| Docker Root Dir: /var/lib/docker |
| Debug Mode (client): false |
| Debug Mode (server): false |
| HTTP Proxy: http://x.x.x.x:80/ |
| HTTPS Proxy: http://x.x.x.x:80/" |
| Registry: https://index.docker.io/v1/ |
| Labels: |
| Experimental: false |
| Insecure Registries: |
| 127.0.0.0/8 |
| Regards |
Tuesday, November 6, 2018
Tuesday, October 30, 2018
How to Find the Exadata Rack Model (Eighth/Quarter/Half ...)
From the DB nodes:
cd /opt/oracle.SupportTools/onecommand/
grep -i MACHINETYPES databasemachine.xml
[root@sukhoi/]# grep -i MACHINETYPES databasemachine.xml
X4-2 Eighth Rack HP 1.2TB
From the cell nodes:
[root@Sukhoi ~]# cellcli -e list cell attributes eighthrack
TRUE
Regards
Gurudatta N.R
Tuesday, October 16, 2018
How to delete a file that throws "operation not permitted" even from the root account.
chattr -i -a filename
chmod ugo+w filename
rm filename
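The sequence can be rehearsed safely on a throwaway file (chattr needs root and an ext* filesystem, so its failure is ignored here):

```shell
f=$(mktemp)
chattr -i -a "$f" 2>/dev/null || true   # clear immutable/append-only flags if supported
chmod ugo+w "$f"
rm -f "$f"
[ ! -e "$f" ] && echo "removed"
# prints: removed
```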
Regards
Gurudatta N.R
Thursday, October 4, 2018
How to import or clear a foreign configuration from the HBA configuration utility in Aspen
When prompted during the boot process, press Ctrl+R to run the LSI configuration utility.
Use Ctrl+N to navigate to the VD MGMT screen, highlight the controller, press F2, then Foreign Config > Import/Clear.
Note: in some cases the latest HBA firmware will not allow us to clear or import the foreign config; in those cases the disk has to be pulled from the storage.
Regards
Gurudatta N.R
Sunday, September 2, 2018
How to Install and Configure VNC on CentOS 6
In order to access the GUI of our Linux servers:
1) yum install tigervnc-server xterm
# vncpasswd
# vi /etc/sysconfig/vncservers
VNCSERVERS="2:root"
VNCSERVERARGS[2]="-geometry 1024x768"
# service vncserver start
# chkconfig vncserver on
Regards
Gurudatta N.R
Friday, July 6, 2018
How to list the users in Linux.
awk -F":" '{print "Login:" $1 "\tName:" $5 "\tHome:" $6}' /etc/passwd
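A variant that keeps only regular accounts, assuming the common convention that human users start at UID 1000 (500 on older distros):

```shell
# Field 3 of /etc/passwd is the UID; skip "nobody", which often has a large UID
awk -F: '$3 >= 1000 && $1 != "nobody" {print "Login:" $1 "\tHome:" $6}' /etc/passwd
```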
Regards
Gurudatta N.R
Wednesday, July 4, 2018
How to install GRUB in Linux.
Boot the server using the Linux CD and type "linux rescue" at the boot prompt.
Once you get the shell prompt, follow these steps:
# chroot /mnt/sysimage
Now issue the "grub-install" command with the target disk.
For example:
# grub-install /dev/sda
GRUB will now be reinstalled on the primary hard disk.
Regards
Gurudatta N.R
How to Install a Boot Block in Solaris 10 and Solaris 11
After mounting the dataset, install the boot block using installboot or installgrub:
SPARC
# installboot -F zfs /mnt/usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
x86 systems with Solaris 10 or Solaris 11.0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
x86 systems with Solaris 11.1 and above
Use bootadm on x86 systems with Solaris 11.1 and above. The install-bootloader subcommand installs the system boot loader; it supersedes installgrub on x86 and also supports installing the GRUB2 boot loader.
# bootadm install-bootloader -P rpool
Regards
Gurudatta N.R