

Hadoop upgrade steps (2.10.1 to 2.10.2 as an example)

2023-01-18 02:45:21
    1. Reference: https://hadoop.apache.org/docs/r2.10.2/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html
    2. Software preparation
      1. # Prepare the new version
        wget https://archive.apache.org/dist/hadoop/common/hadoop-2.10.2/hadoop-2.10.2.tar.gz
        tar -zxvf hadoop-2.10.2.tar.gz -C ../program/
        # Carry over the existing site configs and slaves file
        cp ${HADOOP_HOME}/etc/hadoop/*-site.xml ${DIR}/hadoop-2.10.2/etc/hadoop/
        cp ${HADOOP_HOME}/etc/hadoop/slaves ${DIR}/hadoop-2.10.2/etc/hadoop/
        # Copy the prepared directory to the other machines (see the loop sketch below)
        scp -r ${DIR}/hadoop-2.10.2 ${ip}:${DIR}/
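
    If there are several worker hosts, the copy can be driven off the slaves file instead of repeating the scp by hand. A minimal sketch, assuming bash, passwordless ssh, and that ${DIR} is the same install prefix on every host:

    # Push the unpacked 2.10.2 directory to every host listed in the old
    # installation's slaves file, skipping the local machine.
    while read -r host; do
      [ -z "$host" ] && continue                        # skip blank lines
      [ "$host" = "$(hostname)" ] && continue           # skip the local host
      scp -r "${DIR}/hadoop-2.10.2" "${host}:${DIR}/"   # copy the new release
    done < "${HADOOP_HOME}/etc/hadoop/slaves"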
    3. Prepare the rolling upgrade
      1. Run "hdfs dfsadmin -rollingUpgrade prepare" to create an fsimage for rollback.

    Run "hdfs dfsadmin -rollingUpgrade query" to check the status of the rollback image. Wait and re-run the command until it shows the "Proceed with rolling upgrade" message (a polling sketch follows below).
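    The wait-and-retry step can be scripted. A minimal sketch, assuming the command is run as the HDFS admin user and that polling every 10 seconds is acceptable:

    # Poll until the rollback fsimage is ready; the string checked for is the
    # message documented for "hdfs dfsadmin -rollingUpgrade query".
    until hdfs dfsadmin -rollingUpgrade query | grep -q 'Proceed with rolling upgrade'; do
      echo "Rollback image not ready yet, retrying in 10s..."
      sleep 10
    done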

    4. Upgrade the active and standby NameNodes
      1. Shut down and upgrade NN2. (If a DataNode runs on the same machine as the NameNode, shut it down as well, because the environment variables will be changed.)
        ${HADOOP_HOME}/sbin/hadoop-daemon.sh stop namenode
      2. Start NN2 as standby with the "-rollingUpgrade started" option.
        $HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode -rollingUpgrade started
      3. Fail over from NN1 to NN2 so that NN2 becomes active and NN1 becomes standby.
        hdfs haadmin -failover nn1 nn2
      4. Shut down and upgrade NN1.
        ${HADOOP_HOME}/sbin/hadoop-daemon.sh stop namenode
        Start NN1 as standby with the "-rollingUpgrade started" option (a combined per-host sketch follows below).
        $HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode -rollingUpgrade started
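
    The same stop / switch HADOOP_HOME / restart sequence runs on each NameNode host in turn. A minimal per-host sketch, assuming the 2.10.2 directory is already unpacked as above and that nn2 is the serviceId of the node being upgraded (adjust to your cluster); the inline exports stand in for the ~/.bash_profile change described in the notes:

    # Run on the NameNode being upgraded (NN2 first, then NN1 after the failover).
    ${HADOOP_HOME}/sbin/hadoop-daemon.sh stop namenode        # stop the old 2.10.1 NameNode

    # Point the environment at the new release before restarting.
    export HADOOP_HOME=${DIR}/hadoop-2.10.2
    export PATH=${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH

    # Restart as standby in rolling-upgrade mode, then confirm its state.
    $HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode -rollingUpgrade started
    hdfs haadmin -getServiceState nn2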
    5. Notes
      1. Switch HADOOP_HOME to the new release before starting the daemons.
      2. vim ~/.bash_profile
        export HADOOP_HOME=${DIR}/hadoop-2.10.1
        change it to
        export HADOOP_HOME=${DIR}/hadoop-2.10.2
        source ~/.bash_profile
      3. If you hit "no namenode to stop"
        1. Change the HADOOP_PID_DIR setting
          1. mkdir -p ~/hadoop-data/pids
            vim ${HADOOP_HOME}/etc/hadoop/hadoop-env.sh and change the value of HADOOP_PID_DIR:
            export HADOOP_PID_DIR=~/hadoop-data/pids
            vim ${HADOOP_HOME}/etc/hadoop/hdfs-site.xml
            change dfs.ha.automatic-failover.enabled to dfs.ha.automatic-failover.enabled.<nameservice id>
          2. Use jps to list the existing Hadoop processes and kill them:

    jps | grep -E 'NameNode|NodeManager|DataNode|JobHistoryServer|JournalNode' | awk '{print $1}' | xargs kill

    jps | grep -E 'NodeManager|JournalNode' | awk '{print $1}' | xargs kill

          3. Restart the original Hadoop daemons

    #Start the HDFS NameNode with the following command on the designated node as hdfs:
    $HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode -rollingUpgrade started

    #Start a HDFS DataNode with the following command on each designated node as hdfs:
    $HADOOP_HOME/sbin/hadoop-daemons.sh --config $HADOOP_CONF_DIR --script hdfs start datanode

    #If etc/hadoop/slaves and ssh trusted access is configured (see Single Node Setup), all of the HDFS processes can be started with a utility script. As hdfs:
    # $HADOOP_HOME/sbin/start-dfs.sh

    #Start the YARN with the following command, run on the designated ResourceManager as yarn:
    $HADOOP_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager

    #Run a script to start a NodeManager on each designated host as yarn:
    $HADOOP_HOME/sbin/yarn-daemons.sh --config $HADOOP_CONF_DIR start nodemanager

    #Start a standalone WebAppProxy server. Run on the WebAppProxy server as yarn. If multiple servers are used with load balancing it should be run on each of them:
    $HADOOP_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start proxyserver

    #If etc/hadoop/slaves and ssh trusted access is configured (see Single Node Setup), all of the YARN processes can be started with a utility script. As yarn:
    # $HADOOP_HOME/sbin/start-yarn.sh

    #Start the MapReduce JobHistory Server with the following command, run on the designated server as mapred:
    $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh --config $HADOOP_CONF_DIR start historyserver

    # Check the cluster HA state
    hdfs haadmin -getAllServiceState

      4. If both NameNodes come back as standby, force one of them to become active (a check sketch follows below).
        1. hdfs haadmin -transitionToActive --forcemanual nn1
          #${HADOOP_HOME}/bin/hdfs zkfc -formatZK
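
    A minimal sketch of that check, assuming the serviceIds are nn1 and nn2 as above:

    # Force a manual transition only when neither NameNode reports "active".
    state1=$(hdfs haadmin -getServiceState nn1)
    state2=$(hdfs haadmin -getServiceState nn2)
    if [ "$state1" != "active" ] && [ "$state2" != "active" ]; then
      # --forcemanual bypasses the automatic-failover safeguards; use with care.
      hdfs haadmin -transitionToActive --forcemanual nn1
    fi
    hdfs haadmin -getAllServiceState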
    6. Upgrade the DataNodes
      1. Pick a small subset of DataNodes (for example all DataNodes in one rack).
        1. Run "hdfs dfsadmin -shutdownDatanode <DATANODE_HOST:IPC_PORT> upgrade" to shut down one of the chosen DataNodes.
          Run "hdfs dfsadmin -getDatanodeInfo <DATANODE_HOST:IPC_PORT>" to check and wait until the DataNode has shut down.
          Upgrade and restart the DataNode on that host:
          $HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start datanode
          Perform the steps above in parallel for all chosen DataNodes in the subset (a per-node sketch follows below).
      2. Repeat until every DataNode in the cluster has been upgraded.
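
    A per-node sketch of the shutdown / wait / restart cycle, run from an admin node. The helper name, the example hostnames dn01..dn03, and the default IPC port 50020 are assumptions; it also assumes passwordless ssh and that HADOOP_HOME and HADOOP_CONF_DIR are set for non-interactive shells on the DataNode hosts:

    # Upgrade one DataNode: ask it to stop for upgrade, wait until it is down,
    # then restart it on its own host against the new release.
    upgrade_datanode() {
      local dn="$1"
      hdfs dfsadmin -shutdownDatanode "${dn}:50020" upgrade
      while hdfs dfsadmin -getDatanodeInfo "${dn}:50020" >/dev/null 2>&1; do
        sleep 5   # still up; keep waiting
      done
      ssh "${dn}" '$HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start datanode'
    }

    # Example: upgrade the DataNodes of one rack in parallel.
    for dn in dn01 dn02 dn03; do
      upgrade_datanode "${dn}" &
    done
    wait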
    7. Finalize the rolling upgrade
      1. Run "hdfs dfsadmin -rollingUpgrade finalize" to finalize the rolling upgrade (a quick verification sketch follows below).
    8. Start the Hive metastore: $HIVE_HOME/bin/hive --service metastore &
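
    After finalizing, a quick sanity check is to confirm the daemons report the new version and the HA state looks right. A minimal sketch (commands only; interpret the output by hand):

    hadoop version                        # should report 2.10.2
    hdfs haadmin -getAllServiceState      # one active and one standby NameNode
    hdfs dfsadmin -report | head -n 20    # live DataNodes and capacity summary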

     
