HBase Installation and Programming Practice



1. HBase Installation

1.1 About the HBase Installation

  • Hadoop and HDFS have already been installed in the virtual machine; HBase is set up on top of this existing Hadoop environment.
  • The Hadoop version is Hadoop 3.3.5 and the HBase version is HBase 2.6.1; the two versions must be compatible with each other.
  • JDK 8 is used; HBase builds on the Hadoop installation.

1.2 Installation

  • Download link
  • Environment variables: append them to the ~/.bashrc file (view it with cat ~/.bashrc); the configuration shown is for reference only.
  • Run source ~/.bashrc to make the configuration take effect.
  • Run hbase version to check the installed version; if the expected version is printed, the installation succeeded.
  • Node configuration for pseudo-distributed mode goes in hbase-site.xml. Use the virtual machine's IP address 192.168.31.101 rather than localhost, otherwise the error shown in the figure below occurs.
<configuration>
        <property>
                <name>hbase.rootdir</name>
                <value>hdfs://192.168.31.101:9000/hbase</value>
        </property>
        <property>
                <name>hbase.cluster.distributed</name>
                <value>true</value>
        </property>
        <property>
                <name>hbase.unsafe.stream.capability.enforce</name>
                <value>false</value>
        </property>
        <property>
                <name>hbase.zookeeper.quorum</name>
                <value>192.168.31.101:2181</value>
        </property>
        <property>
                <name>hbase.master.ipc.address</name>
                <value>0.0.0.0</value>
        </property>
        <property>
                <name>hbase.regionserver.ipc.address</name>
                <value>0.0.0.0</value>
        </property>
</configuration>
  • ./bin/start-hbase.sh failed to start because HBase was using its own embedded ZooKeeper; pointing hbase.zookeeper.quorum at 192.168.31.101:2181 (as in the configuration above) resolved it.
  • ./bin/start-hbase.sh reported a service error because the modified configuration file had not yet taken effect; restarting both HBase and HDFS fixed it. A quick client-side connectivity check is sketched below.
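
As a quick sanity check that a client can actually reach the cluster, the following is a minimal sketch using the HBase Java client (the same hbase-client dependency used in section 3.1). The class name HBaseConnectivityCheck is hypothetical; the ZooKeeper quorum and client port are the values from the hbase-site.xml above.

import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBaseConnectivityCheck {

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Values taken from the hbase-site.xml shown above
        conf.set("hbase.zookeeper.quorum", "192.168.31.101");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        // try-with-resources closes the admin client and the connection automatically
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            // listTableNames() issues an RPC to the master, so it fails fast
            // when the cluster is not actually reachable
            System.out.println("Tables: " + Arrays.toString(admin.listTableNames()));
        }
    }
}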

2. HBase Commands

2.1 The UI after a successful installation

2.2 Processes and ports after a successful startup

  • jps lists the running Java processes.
  • netstat -nltp shows the listening ports (by default the HBase Master uses 16000 for RPC and 16010 for its web UI, and RegionServers use 16020 and 16030).

2.3 HBase Shell Commands

  • Run ./bin/hbase shell to enter the HBase shell.
  • Common commands (a Java counterpart of the scan command is sketched after this list):
create 'test', 'cf'  // create table 'test' with a column family 'cf'
list 'test' // list tables matching 'test'
describe 'test' // show details of table 'test'
put 'test', 'row1', 'cf:a', 'value1'  // insert a value
scan 'test' // scan all rows of 'test'
get 'test', 'row1' // read a single row

disable 'test'   // disable table 'test'
enable 'test'    // enable table 'test'
drop 'test'   // drop table 'test'; a table must be disabled before it can be dropped
  • See the official documentation for the full command reference.
  • The screenshot shows three tables in total. HBase is a non-relational database: there is no need to create a database first, tables are created directly.
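
The shell's scan command has a direct counterpart in the Java client API used in section 3. Below is a minimal sketch that scans the test table and prints every cell, assuming a Connection has already been created as in the init() method shown later in section 3.2; the class and method names are only illustrative.

import java.io.IOException;

import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanExample {

    /** Prints every cell of the given table, equivalent to `scan 'test'` in the shell. */
    public static void scanTable(Connection connection, String tableName) throws IOException {
        try (Table table = connection.getTable(TableName.valueOf(tableName));
             ResultScanner scanner = table.getScanner(new Scan())) {
            for (Result result : scanner) {
                result.listCells().forEach(cell -> System.out.println(
                        Bytes.toString(CellUtil.cloneRow(cell)) + " "
                                + Bytes.toString(CellUtil.cloneFamily(cell)) + ":"
                                + Bytes.toString(CellUtil.cloneQualifier(cell)) + " = "
                                + Bytes.toString(CellUtil.cloneValue(cell))));
            }
        }
    }
}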

3. HBase Programming Practice

3.1 Spring Boot pom

<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>3.2.0</version>
  <relativePath/>
</parent>


<properties>
  <maven.compiler.source>17</maven.compiler.source>
  <maven.compiler.target>17</maven.compiler.target>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>


<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>

  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-aop</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
  </dependency>

  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>2.6.1</version>
  </dependency>
  <dependency>
    <groupId>com.spring4all</groupId>
    <artifactId>spring-boot-starter-hbase</artifactId>
    <version>1.0.0.RELEASE</version>
  </dependency>

</dependencies>
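
The pom only declares dependencies; a Spring Boot entry point is still needed to start the application. A minimal sketch is shown below; the class name and its package are assumptions and may differ from the actual project.

package com.coderpwh;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Hypothetical entry point; the real project may name or place this class differently.
@SpringBootApplication
public class HbaseApplication {

    public static void main(String[] args) {
        SpringApplication.run(HbaseApplication.class, args);
    }
}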

3.2 Service-Layer Implementation Code

  • The complete code is available on GitHub for reference.
package com.coderpwh.service.impl;

import com.coderpwh.service.HbaseService;
import jakarta.annotation.Resource;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;
import org.springframework.stereotype.Service;


/**
 * @author coderpwh
 */
@Service
public class HbaseServiceImpl implements HbaseService {


    private Configuration configuration;


    private Connection connection;


    private Admin admin;


    /**
     * Creates the demo table, inserts sample data, and queries it.
     * @return "success" when the flow completes
     */
    @Override
    public String createBase() {
        init();
        createTable("student", new String[]{"score"});
        insertData("student", "zhangsan", "score", "English", "69");
        insertData("student", "zhangsan", "score", "Math", "86");
        insertData("student", "zhangsan", "score", "Computer", "77");
        getData("student", "zhangsan", "score", "English");
        close();
        return "success";
    }


    /**
     * Initializes the HBase configuration, connection, and admin client.
     */
    public void init() {
        configuration = HBaseConfiguration.create();
        configuration.set("hbase.rootdir", "hdfs://192.168.31.101:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "192.168.31.101");
        configuration.set("hbase.zookeeper.property.clientPort", "2181");
        try {
            connection = ConnectionFactory.createConnection(configuration);
            admin = connection.getAdmin();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }


    /**
     * Closes the admin client and the connection.
     */
    public void close() {
        try {
            if (admin != null) {
                admin.close();
            }
            if (connection != null) {
                connection.close();
            }
        } catch (Exception e) {
            e.printStackTrace();

        }
    }


    /**
     * Creates a table.
     * @param myTableName table name
     * @param colFamily   column family names
     */
    public void createTable(String myTableName, String[] colFamily) {
        try {
            TableName tableName = TableName.valueOf(myTableName);

            if (admin.tableExists(tableName)) {
                System.out.println("表已经存在");
            } else {
                TableDescriptorBuilder builder = TableDescriptorBuilder.newBuilder(tableName);
                for (String cf : colFamily) {
                    ColumnFamilyDescriptor family =
                    ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes(cf)).build();
                    builder.setColumnFamily(family);
                }
                admin.createTable(builder.build());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }


    /**
     * Inserts a single cell.
     * @param tableName table name
     * @param rowKey    row key
     * @param colFamily column family
     * @param col       column qualifier
     * @param val       value
     */
    public void insertData(String tableName, String rowKey, String colFamily, String col, String val) {
        try {
            Table table = connection.getTable(TableName.valueOf(tableName));
            Put put = new Put(rowKey.getBytes());
            put.addColumn(colFamily.getBytes(), col.getBytes(), val.getBytes());
            table.put(put);
            table.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }


    /**
     * Reads a single cell and prints its value.
     * @param tableName table name
     * @param rowKey    row key
     * @param colFamily column family
     * @param col       column qualifier
     */
    public void getData(String tableName, String rowKey, String colFamily, String col) {
        try {
            Table table = connection.getTable(TableName.valueOf(tableName));
            Get get = new Get(rowKey.getBytes());
            get.addColumn(colFamily.getBytes(), col.getBytes());
            Result result = table.get(get);
            System.out.println(Bytes.toString(result.getValue(colFamily.getBytes(), col.getBytes())));
            table.close();
        } catch (Exception e) {
            e.printStackTrace();
        }

    }


}
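
Since the pom pulls in spring-boot-starter-web, the service can be exposed over HTTP. The following is an illustrative controller that calls HbaseService.createBase(); the class name, package, and request path are assumptions and may differ from the code on GitHub.

package com.coderpwh.controller;

import com.coderpwh.service.HbaseService;
import jakarta.annotation.Resource;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller that exposes the service layer shown above.
@RestController
public class HbaseController {

    @Resource
    private HbaseService hbaseService;

    // GET /hbase/create runs the demo flow: create the table, insert rows, query one cell.
    @GetMapping("/hbase/create")
    public String create() {
        return hbaseService.createBase();
    }
}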

3.3 Results

  • The console output shows HBase being initialized, data being inserted, data being queried, and the HBase connection being closed.
  • Open ./bin/hbase shell and inspect the student table to confirm the data.
  • Inserting data into and querying data from HBase through the Java client has been completed successfully.

Author: coderpwh
Copyright notice: Unless otherwise stated, all articles on this blog are licensed under CC BY 4.0. Please credit coderpwh when reposting!