Pseudo-Distributed Cluster Setup (Hadoop + Spark + Scala)
I. Install Hadoop
1. Set up the Java environment
Step 1: Download the JDK package
Official site: https://www.oracle.com/java/technologies/javase-jdk8-downloads.html
Step 2: Remove OpenJDK
First check which Java packages are installed and which version is active:
Commands: rpm -qa | grep java
java -version
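The two commands above only report what is installed. To actually remove OpenJDK, pass each package the query printed to rpm -e. A sketch assuming typical CentOS package names — substitute the exact names from your own rpm -qa output:
rpm -e --nodeps java-1.8.0-openjdk java-1.8.0-openjdk-headless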

Step 3: Install the JDK
Commands: tar -zxvf jdk-8u152-linux-x64.tar.gz -C /usr/local/src/
ls /usr/local/src/
Step 4: Set the Java environment variables
Command: vi /etc/profile
Append the following two lines at the end of the file:
export JAVA_HOME=/usr/local/src/jdk1.8.0_152
export PATH=$PATH:$JAVA_HOME/bin
Then reload the profile and verify:
source /etc/profile
java -version

2. Set up passwordless SSH login
Step 1: Generate an SSH key pair
Command: ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
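Note: OpenSSH 7.0 and later rejects DSA public keys by default, so if the login test in step 2 still prompts for a password, generate an RSA key instead — the same flow with different file names:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub master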

Step 2: Register the master public key for authentication, enabling passwordless login to this machine
Commands: ssh-copy-id -i /root/.ssh/id_dsa.pub master
ssh master

exit
3. Install and configure Hadoop
Step 1: Extract /root/hadoop-2.7.1.tar.gz into /opt and rename the extracted directory to hadoop
Commands: tar -zxvf /root/hadoop-2.7.1.tar.gz -C /opt
cd /opt
mv hadoop-2.7.1 hadoop
Step 2: Update the environment variables
Command: vim /etc/profile
Append:
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile

Step 3: Edit /opt/hadoop/etc/hadoop/hadoop-env.sh
Command: vi /opt/hadoop/etc/hadoop/hadoop-env.sh
Set JAVA_HOME explicitly, since Hadoop daemons do not inherit it from the login shell:
export JAVA_HOME=/usr/local/src/jdk1.8.0_152

Step 4: Edit /opt/hadoop/etc/hadoop/core-site.xml
Command: vi /opt/hadoop/etc/hadoop/core-site.xml
Set the configuration to:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/tmp</value>
  </property>
</configuration>
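With no port in fs.defaultFS, clients fall back to the default NameNode RPC port 8020. Since HADOOP_HOME/bin is already on the PATH (step 2), an optional sanity check reads the values back exactly as a client would resolve them:
hdfs getconf -confKey fs.defaultFS
hdfs getconf -confKey hadoop.tmp.dir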

Step 5: Edit /opt/hadoop/etc/hadoop/hdfs-site.xml
Command: vi /opt/hadoop/etc/hadoop/hdfs-site.xml
A pseudo-distributed cluster has only one DataNode, so set the replication factor to 1 (the default of 3 would leave every block under-replicated):
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

Step 6: Copy /opt/hadoop/etc/hadoop/mapred-site.xml.template to mapred-site.xml
Command: cp /opt/hadoop/etc/hadoop/mapred-site.xml.template /opt/hadoop/etc/hadoop/mapred-site.xml
Then edit /opt/hadoop/etc/hadoop/mapred-site.xml
Command: vi /opt/hadoop/etc/hadoop/mapred-site.xml
Set the configuration to:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

Step 7: Edit /opt/hadoop/etc/hadoop/yarn-site.xml
Command: vi /opt/hadoop/etc/hadoop/yarn-site.xml
Set the configuration to (mapreduce_shuffle enables the auxiliary shuffle service that MapReduce jobs on YARN require):
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

Step 8: Edit /opt/hadoop/etc/hadoop/slaves
Command: vim /opt/hadoop/etc/hadoop/slaves
Replace the default localhost line so the file contains a single line: master
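The same edit can be done non-interactively, overwriting the file's contents from the shell:
echo master > /opt/hadoop/etc/hadoop/slaves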

Step 9: Format HDFS
Command: hdfs namenode -format
(Format only once: each format generates a new cluster ID, and DataNodes initialized under an old ID will refuse to start until their data directories are cleared.)
Step 10: Start the cluster, check with jps, and open the web UI
Commands: start-all.sh
jps

jps should list NameNode, SecondaryNameNode, DataNode, ResourceManager, and NodeManager. Open http://master:50070 in a browser to view the NameNode status page; the YARN ResourceManager UI is at http://master:8088.
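As a quick smoke test, assuming all daemons came up: write a file into HDFS, list it back, and run the pi example job that ships with Hadoop 2.7.1:
hdfs dfs -mkdir -p /user/root
hdfs dfs -put /etc/hosts /user/root/
hdfs dfs -ls /user/root
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi 2 10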

II. Install Spark
1. Extract the Spark package
Command: tar -zxf spark-3.2.1-bin-hadoop2.7.tgz -C /usr/local/
2. Copy the template to spark-env.sh, then open spark-env.sh and add the settings below
Commands: cd /usr/local/spark-3.2.1-bin-hadoop2.7/conf/
cp spark-env.sh.template spark-env.sh
vi spark-env.sh
export JAVA_HOME=/usr/local/src/jdk1.8.0_152
export HADOOP_HOME=/opt/hadoop
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
export SPARK_MASTER_HOST=master
export SPARK_LOCAL_IP=master
(SPARK_MASTER_HOST replaced the older SPARK_MASTER_IP name as of Spark 2.0, so the newer name is used here.)

3. Start the Spark cluster from the sbin directory of the Spark installation, then check with jps
Commands: cd /usr/local/spark-3.2.1-bin-hadoop2.7/sbin/
./start-all.sh
jps

Run it as ./start-all.sh so the shell does not pick up Hadoop's script of the same name from the PATH. jps should now additionally show the Master and Worker processes.

4. Start spark-shell
Commands: cd /usr/local/spark-3.2.1-bin-hadoop2.7/
./bin/spark-shell
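At the scala> prompt, a one-line job confirms the shell's predefined SparkContext works — the sum of 1 through 100 should come back as 5050.0:
scala> sc.parallelize(1 to 100).sum()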
5. Check the web UI
Open http://master:8080 to view the standalone master page.

III. Install Scala
1. Extract the Scala package
Command: tar -zxf scala-2.11.8.tgz -C /usr/local
(Note: Spark 3.2.1 is prebuilt against Scala 2.12, so this standalone 2.11.8 REPL is for local use only; Spark applications should target 2.12.)
2. Set the Scala environment variables, reload the profile, and run scala
Commands: vim /etc/profile
export SCALA_HOME=/usr/local/scala-2.11.8
export PATH=$PATH:$SCALA_HOME/bin

source /etc/profile
scala
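The REPL banner already shows the installed version; as an extra check, this one-liner prints it as well, and :quit exits:
scala> println(util.Properties.versionString)
scala> :quit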
