Thursday, 24 June 2021

HBase errors, issues, and solutions

1) ERROR: KeeperErrorCode = NoNode for /hbase/master

Check by listing tables from the HBase shell:

hbase(main):001:0> list

TABLE

ERROR: KeeperErrorCode = NoNode for /hbase/master

For usage try 'help "list"'

Took 8.2629 seconds

hbase(main):002:0> list

TABLE

ERROR: Call id=15, waitTime=60008, rpcTimeout=60000

For usage try 'help "list"'

Took 488.5749 seconds

2) Dead region servers

hbase(main):002:0> status

1 active master, 2 backup masters, x servers, x dead, 0.0000 average load

Took 0.0711 seconds

HMASTER UI SHOWING DEAD REGION SERVERS


hbase:meta,,1 is not online on 

Solution

In progress

How to delete a directory from a Hadoop cluster when its name contains commas?

# hdfs dfs -rm -r /hbase/WALs/wrker-02.xyz.com,16020,1623662453275-splitting

Picked up _JAVA_OPTIONS: -Djava.io.tmpdir=/hadoop/tmp

{"type":"log","host":"host_name","category":"YARN-yarn-GATEWAY-BASE","level":"WARN","system":"n05dlkcluster","time": "21/06/23 06:13:57","logger":"util.NativeCodeLoader","timezone":"UTC","log":{"message":"Unable to load native-hadoop library for your platform... using builtin-java classes where applicable"}}

Deleted /hbase/WALs/wrker-02.xyz.com,16020,1623662453275-splitting
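Note that commas are not shell metacharacters, so the path can be passed as-is; still, quoting the path is a safe habit whenever a name contains unusual characters:

hdfs dfs -rm -r '/hbase/WALs/wrker-02.xyz.com,16020,1623662453275-splitting'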

How to clear Dead Region Servers in HBase UI?
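One common approach is sketched below, assuming an HBase 2.x shell where the tools-group commands list_deadservers and clear_deadservers are available (the server name is illustrative and follows the hostname,port,startcode format):

hbase shell
> list_deadservers
> clear_deadservers 'wrker-02.xyz.com,16020,1623662453275'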

Microsoft’s Windows 11 feature: download Android apps via the Amazon Appstore

Microsoft’s Windows 11 has launched, and Android apps are coming to Windows 11 as well.

The next version of Windows is here with a complete design overhaul.

Windows 11, called the “next generation” of Windows, brings a fresh interface and a centrally placed Start menu, with a massive redesign over its predecessor: an all-new boot screen and startup sound, upgraded widgets, and Teams integration.

Windows 11 also removes elements including the annoying “Hi Cortana” welcome screen and Live Tiles.

Windows 11 is a major release of the Windows NT operating system, announced on June 24, 2021, and developed by Microsoft.

Developer: Microsoft
Written in: C, C++, C#, assembly language
OS family: Microsoft Windows
Source model: Closed-source; source-available (through the Shared Source Initiative); some components open source[1][2][3][4]
Marketing target: Personal computing
Available in: 110 languages[5][6]
Update method: Windows Update, Microsoft Store, Windows Server Update Services (WSUS)
Platforms: x86-64, ARM64
Kernel type: Hybrid (Windows NT kernel)
Userland: Windows API, .NET Framework, Universal Windows Platform, Windows Subsystem for Linux, Android
Default user interface: Windows shell (graphical)
Preceded by: Windows 10 (2015)
Official website: windows.com

Wednesday, 23 June 2021

hbase commands cheat sheet

hbase commands cheat sheet based on groups

 COMMAND GROUPS

  Group name: general

  Commands: processlist, status, table_help, version, whoami

  Group name: ddl

  Commands: alter, alter_async, alter_status, clone_table_schema, create, describe, disable, disable_all, drop, drop_all, enable, enable_all, exists, get_table, is_disabled, is_enabled, list, list_regions, locate_region, show_filters

  Group name: namespace

  Commands: alter_namespace, create_namespace, describe_namespace, drop_namespace, list_namespace, list_namespace_tables

  Group name: dml

  Commands: append, count, delete, deleteall, get, get_counter, get_splits, incr, put, scan, truncate, truncate_preserve

  Group name: tools

  Commands: assign, balance_switch, balancer, balancer_enabled, catalogjanitor_enabled, catalogjanitor_run, catalogjanitor_switch, cleaner_chore_enabled, cleaner_chore_run, cleaner_chore_switch, clear_block_cache, clear_compaction_queues, clear_deadservers, close_region, compact, compact_rs, compaction_state, flush, is_in_maintenance_mode, list_deadservers, major_compact, merge_region, move, normalize, normalizer_enabled, normalizer_switch, split, splitormerge_enabled, splitormerge_switch, stop_master, stop_regionserver, trace, unassign, wal_roll, zk_dump

  Group name: replication

  Commands: add_peer, append_peer_exclude_namespaces, append_peer_exclude_tableCFs, append_peer_namespaces, append_peer_tableCFs, disable_peer, disable_table_replication, enable_peer, enable_table_replication, get_peer_config, list_peer_configs, list_peers, list_replicated_tables, remove_peer, remove_peer_exclude_namespaces, remove_peer_exclude_tableCFs, remove_peer_namespaces, remove_peer_tableCFs, set_peer_bandwidth, set_peer_exclude_namespaces, set_peer_exclude_tableCFs, set_peer_namespaces, set_peer_replicate_all, set_peer_serial, set_peer_tableCFs, show_peer_tableCFs, update_peer_config

  Group name: snapshots

  Commands: clone_snapshot, delete_all_snapshot, delete_snapshot, delete_table_snapshots, list_snapshots, list_table_snapshots, restore_snapshot, snapshot

  Group name: configuration

  Commands: update_all_config, update_config

  Group name: quotas

  Commands: list_quota_snapshots, list_quota_table_sizes, list_quotas, list_snapshot_sizes, set_quota

  Group name: security

  Commands: grant, list_security_capabilities, revoke, user_permission

  Group name: procedures

  Commands: list_locks, list_procedures

  Group name: visibility labels

  Commands: add_labels, clear_auths, get_auths, list_labels, set_auths, set_visibility

  Group name: rsgroup

  Commands: add_rsgroup, balance_rsgroup, get_rsgroup, get_server_rsgroup, get_table_rsgroup, list_rsgroups, move_namespaces_rsgroup, move_servers_namespaces_rsgroup, move_servers_rsgroup, move_servers_tables_rsgroup, move_tables_rsgroup, remove_rsgroup, remove_servers_rsgroup
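To see a few of these groups in action, here is a minimal shell session; the 'employee' table and 'cf' column family are purely illustrative:

hbase shell
> status
> create 'employee', 'cf'
> put 'employee', 'row1', 'cf:name', 'alice'
> scan 'employee'
> disable 'employee'
> drop 'employee'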


Saturday, 19 June 2021

HBase: quickly count the number of rows

There are two ways to get the row count of an HBase table quickly.

Scenario #1

If the HBase table is small, log in to the HBase shell with a valid user and execute:

hbase shell
>count '<tablename>'

Example

>count 'employee'

6 row(s) in 0.1110 seconds
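The shell count command also accepts CACHE and INTERVAL options, which can speed up counting considerably; a sketch with illustrative values:

> count 'employee', INTERVAL => 100000, CACHE => 10000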
Use RowCounter in HBase: RowCounter is a built-in MapReduce job that counts all the rows of a table. It is a good utility to use as a sanity check to ensure that HBase can read all the blocks of a table if there are any concerns about metadata inconsistency. By default it runs the MapReduce job in a single process, but it will run faster if you have a MapReduce cluster in place for it to exploit. It is very helpful when the HBase table stores huge amounts of data.

Scenario #2

If the HBase table is large, execute the built-in RowCounter MapReduce job. Log in to a Hadoop machine with a valid user and execute:

$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter '<tablename>'

Example:

$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter 'employee'

     ....
     ....
     ....
     Virtual memory (bytes) snapshot=22594633728
                Total committed heap usage (bytes)=5093457920
        org.apache.hadoop.hbase.mapreduce.RowCounter$RowCounterMapper$Counters
                ROWS=6
        File Input Format Counters
                Bytes Read=0
        File Output Format Counters
                Bytes Written=0
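RowCounter also accepts optional arguments to narrow the count, for example a time range or specific columns; a sketch with illustrative values (check the RowCounter usage message on your version):

$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter 'employee' --starttime=1623600000000 --endtime=1623700000000 cf:name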

Saturday, 12 June 2021

hadoop commands cheat sheet

Hadoop commands cheat sheet | HDFS commands cheat sheet

There are many more commands in "$HADOOP_HOME/bin/hadoop fs" than are demonstrated here; use hadoop fs or hdfs dfs to run the commands.

hadoop fs -ls <path> list files in the path of the file system

hadoop fs -chmod <arg> <file-or-dir> alters the permissions of a file, where <arg> is the octal permission mode, e.g. 777

hadoop fs -chown <owner>:<group> <file-or-dir> change the owner of a file

hadoop fs -mkdir <path> make a directory on the file system

hadoop fs -put <local-origin> <destination> copy a file from local storage onto the file system

hadoop fs -get <origin> <local-destination> copy a file to the local storage from the file system

hadoop fs -copyFromLocal <local-origin> <destination> similar to the put command but the source is restricted to a local file reference

hadoop fs -copyToLocal <origin> <local-destination> similar to the get command but the destination is restricted to a local file reference

hadoop fs -touchz <path> create an empty file on the file system

hadoop fs -cat <file> copy files to stdout
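Putting a few of these together, a short end-to-end session might look like this (paths and file names are illustrative):

hadoop fs -mkdir /user/demo
hadoop fs -put data.txt /user/demo/
hadoop fs -ls /user/demo
hadoop fs -cat /user/demo/data.txt
hadoop fs -get /user/demo/data.txt ./data_copy.txt
hadoop fs -rm /user/demo/data.txt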

-------------------------------------------------------------------------

"<path>" means any file or directory name. 

"<path>..." means one or more file or directory names. 

"<file>" means any filename. 

----------------------------------------------------------------


-ls <path>

Lists the contents of the directory specified by path, showing the names, permissions, owner, size and modification date for each entry.

-lsr <path>

Behaves like -ls, but recursively displays entries in all subdirectories of path.

-du <path>

Shows disk usage, in bytes, for all the files which match path; filenames are reported with the full HDFS protocol prefix.

-dus <path>

Like -du, but prints a summary of disk usage of all files/directories in the path.

-mv <src> <dest>

Moves the file or directory indicated by src to dest, within HDFS.

-cp <src> <dest>

Copies the file or directory identified by src to dest, within HDFS.

-rm <path>

Removes the file or empty directory identified by path.

-rmr <path>

Removes the file or directory identified by path. Recursively deletes any child entries (i.e., files or subdirectories of path).

-put <localSrc> <dest>

Copies the file or directory from the local file system identified by localSrc to dest within the DFS.

-copyFromLocal <localSrc> <dest>

Identical to -put: copies the file or directory from the local file system identified by localSrc to dest within HDFS.

-moveFromLocal <localSrc> <dest>

Copies the file or directory from the local file system identified by localSrc to dest within HDFS, and then deletes the local copy on success.

-cat <filename>

Displays the contents of filename on stdout.

-mkdir <path>

Creates a directory named path in HDFS.

Creates any parent directories in path that are missing (e.g., mkdir -p in Linux).

-setrep [-R] [-w] rep <path>

Sets the target replication factor for files identified by path to rep. (The actual replication factor will move toward the target over time)
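For example, to set a target replication factor of 3 and wait (-w) until the change completes (the path is illustrative):

hadoop fs -setrep -w 3 /user/demo/data.txt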

-touchz <path>

Creates a zero-length file at path, stamped with the current time. Fails if a file already exists at path, unless that file already has size 0.

-test -[ezd] <path>

Tests the path: -e checks that it exists, -z that it has zero length, and -d that it is a directory. Exits with status 0 if the test passes, nonzero otherwise.
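Because the exit status follows the usual shell convention, -test is handy in scripts; a minimal sketch:

if hadoop fs -test -e /user/demo/data.txt; then
  echo "path exists"
fi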

-stat [format] <path>

Prints information about path. Format is a string which accepts the file size in bytes (%b), filename (%n), block size (%o), replication (%r), and modification date (%y, %Y).
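For example, to print the filename, size, replication, and modification date (the path is illustrative):

hadoop fs -stat "%n %b %r %y" /user/demo/data.txt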

-chmod [-R] mode,mode,... <path>...

Changes the file permissions associated with one or more objects identified by path.... Performs changes recursively with -R. mode is a 3-digit octal mode, or {augo}+/-{rwxX}. Assumes a (all) if no scope is specified and does not apply an umask.

-chown [-R] [owner][:[group]] <path>...

Sets the owning user and/or group for files or directories identified by path.... Sets owner recursively if -R is specified.

-chgrp [-R] group <path>...

Sets the owning group for files or directories identified by path.... Sets group recursively if -R is specified.


LIST FILES

hdfs dfs -ls / ==> List all the files/directories for the given hdfs destination path.

hdfs dfs -ls -d /hadoop ==> Directories are listed as plain files. In this case, this command will list the details of the hadoop folder.

hdfs dfs -ls -h /data ==> Format file sizes in a human-readable fashion (e.g. 64.0m instead of 67108864).

hdfs dfs -ls -R /hadoop ==> Recursively list all files in the hadoop directory and all its subdirectories.

hdfs dfs -ls /hadoop/dat* ==> List all the files matching the pattern. In this case, it will list all the files inside the hadoop directory which start with 'dat'.

OWNERSHIP

hdfs dfs -checksum /hadoop/file1 ==> Dump checksum information for files that match the file pattern <src> to stdout.

hdfs dfs -chmod 755 /hadoop/file1 ==> Changes permissions of the file.

hdfs dfs -chmod -R 755 /hadoop ==> Changes permissions of the files recursively.

hdfs dfs -chown myuser:mygroup /hadoop ==> Changes owner of the file; the first part (myuser) is the owner and the second (mygroup) is the group.

hdfs dfs -chown -R hadoop:hadoop /hadoop ==> Changes owner of the files recursively.

hdfs dfs -chgrp ubuntu /hadoop ==> Changes group association of the file.

hdfs dfs -chgrp -R ubuntu /hadoop ==> Changes group association of the files recursively.
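To verify that the ownership and group changes took effect, list the path and check the owner and group columns:

hdfs dfs -ls /hadoop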