1、line 140: ./zookeeper.out: Permission denied
The ZooKeeper start script cannot write zookeeper.out to the current directory; point ZOO_LOG_DIR at a writable location:
vim ~/.bashrc
export ZOO_LOG_DIR=/usr/local/zookeeper/log
source ~/.bashrc
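A follow-up sketch, assuming ZooKeeper runs as the current user (adjust the owner if it runs under a dedicated account):
mkdir -p /usr/local/zookeeper/log             # make sure the new log directory exists
chown -R $(whoami) /usr/local/zookeeper/log   # and is writable by the ZooKeeper process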
2、zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 4 attempts
Start HBase with sufficient privileges:
sudo ./start-hbase.sh
3、PYCURL ERROR 6 - "Couldn't resolve host 'mirrorlist.centos.org'"
This is caused by a DNS misconfiguration: edit the interface config with vi /etc/sysconfig/network-scripts/ifcfg-* and add DNS entries (see the sketch below).
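A sketch of the entries to append; the interface name eth0 and the public DNS servers are assumptions, so substitute your own:
# /etc/sysconfig/network-scripts/ifcfg-eth0
DNS1=8.8.8.8
DNS2=8.8.4.4
Then restart networking so the change takes effect: service network restart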
4、spark-shell --master yarn-client
Error: ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
Fix: set spark.dynamicAllocation.enabled=true
Error: ERROR cluster.YarnClientSchedulerBackend: Yarn application has already exited with state FINISHED!
Fix: edit yarn-site.xml and disable the physical/virtual memory checks:
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
Error: INFO yarn.Client: Application report for application_1498569053779_0001 (state: ACCEPTED) repeats forever and the job never runs
Fix: edit yarn-site.xml and raise the NodeManager memory:
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>
</property>
Then edit capacity-scheduler.xml and change yarn.scheduler.capacity.maximum-am-resource-percent from 0.1 to 0.5 (see the reload sketch below).
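The capacity-scheduler change can usually be reloaded without a full restart, while the yarn-site.xml change still needs the NodeManagers restarted; a sketch, assuming a stock Hadoop client on the ResourceManager host:
yarn rmadmin -refreshQueues   # reloads capacity-scheduler.xml on the ResourceManager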
5、javax.persistence.PersistenceException:org.hibernate.exception.GenericJDBCException: Could not open connection
at AbstractEntityManagerImpl.java line 1387
The database connection failed; check the JDBC connection settings and that the database is reachable.
6、When running Spark on YARN there is no need to start the Spark master and worker daemons.
7、cannot access /usr/local/spark/lib/spark-assembly-*.jar: No such file or directory
Spark 2.x no longer ships a single assembly jar, but Hive's launcher still globs for one; edit bin/hive and point it at the jars directory instead (see the sketch below).
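A sketch of the usual edit inside $HIVE_HOME/bin/hive; the exact line differs between Hive versions, so treat it as an illustration:
# original line in bin/hive:
#   sparkAssemblyPath=`ls ${SPARK_HOME}/lib/spark-assembly-*.jar`
# changed to pick up the split jars shipped with Spark 2.x:
sparkAssemblyPath=`ls ${SPARK_HOME}/jars/*.jar`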
8、Starting the Hive metastore with service hive-metastore start fails with: hive-metastore: unrecognized service
That init script only exists for package-based installs; with a tarball install, start the metastore with hive --service metastore & (see the sketch below).
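A sketch that also keeps the metastore alive after logout and captures its log; the log path is an arbitrary choice:
nohup hive --service metastore > /tmp/hive-metastore.out 2>&1 &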
9、Hive error: could not create serversocket on address 0.0.0.0/0.0.0.0:9083
More than one metastore instance was started; find the extra process with ps -ef | grep hive and kill it (see the sketch below).
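A sketch, assuming the duplicate process can simply be killed:
ps -ef | grep -i metastore | grep -v grep   # find the metastore PIDs
kill <pid>                                  # <pid> is a placeholder for the process id printed above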
10、Starting ZooKeeper fails with: ERROR org.apache.zookeeper.server.quorum.QuorumPeerMain: Unexpected exception, exiting abnormally
java.net.BindException: Address already in use
ps -ef | grep zookeeper showed another ZooKeeper instance already in use by dass, and netstat -nlp | grep 2181 confirmed the port was taken; changing the ZooKeeper port on matrix-hadoop-8 to 2182 let it restart successfully (see the sketch below).
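A sketch of the check and the port change; the zoo.cfg path assumes a default tarball layout:
netstat -nlp | grep 2181                  # see which process holds the default client port
vim /usr/local/zookeeper/conf/zoo.cfg     # change: clientPort=2182
./zkServer.sh restart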
11、The installation keeps hanging at "Acquiring installation lock"
rm -rf /tmp/scm_prepare_node.*
rm -rf /tmp/.scm_prepare_node.lock
yum clean all
12、Check for corrupt/missing HDFS blocks
hdfs fsck / -list-corruptfileblocks
Find the affected files and delete them outright, including any copies still in the trash (see the sketch below).
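A sketch, assuming the corrupt files are expendable; the file path is a placeholder:
hdfs fsck / -list-corruptfileblocks               # list files that have corrupt blocks
hdfs dfs -rm -r -skipTrash /path/to/corrupt/file  # remove them, bypassing the trash
hdfs dfs -expunge                                 # empty what is already sitting in the trash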
13、Upgrading the CDH version fails with: could not contact scm server at bogon:7182, giving up waiting for rollback request
DNS cannot resolve the host; set the DNS server to 8.8.8.8 via vim /etc/sysconfig/network-scripts/ifcfg-eth0 and then run service network restart (a quick verification sketch follows).
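A quick check that name resolution is sane again, using standard tools:
hostname -f                  # should print the machine's real FQDN, not "bogon"
getent hosts $(hostname -f)  # should resolve to the expected IP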
14、Setting up a local yum repository
Server side:
1) Change the httpd document root: sudo vim /etc/httpd/conf/httpd.conf and add: DocumentRoot "/var/www/html"
2) Start the HTTP service: service httpd start
3) Create the directory and upload the packages into it: mkdir -p /var/www/html/yum_cloudera
4) Install createrepo: yum -y install createrepo
5) Initialize the repository metadata: createrepo -pdo /var/www/html/yum_cloudera /var/www/html/yum_cloudera
6) Add new packages to the repo: yumdownloader <package-name> (download only, no install), then refresh the metadata with createrepo --update /var/www/html/yum_cloudera/ (see the sketch after this list)
7) Configure yum to keep downloaded packages: vim /etc/yum.conf and set keepcache to 1
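A sketch of step 6 end to end; the --destdir flag drops the rpm straight into the repo directory, and the package name is a placeholder:
yumdownloader --destdir=/var/www/html/yum_cloudera <package-name>   # download only, do not install
createrepo --update /var/www/html/yum_cloudera/                     # regenerate the repo metadata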
Client side:
cd /etc/yum.repos.d and vim cloudera-self.repo with the following content:
[CM]
name=Cloudera Manager
baseurl=http://172.20.208.3:80/yum_cloudera/
enabled=1
gpgcheck=0
Downloads can then be restricted to this repo: sudo yum -y --enablerepo=CM upgrade cloudera-manager-daemons cloudera-manager-agent cloudera-manager-server (see the verification sketch below)
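A quick sketch to confirm the client actually sees the new repo before upgrading:
yum clean all
yum --disablerepo='*' --enablerepo=CM list available | head   # should list the Cloudera Manager packages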
15、java.lang.NoSuchMethodError: org.apache.spark.sql.hive.HiveContext.sql
The Spark version used by the application must match the Spark version on the cluster.
16、Retrying HMSHandler after 2000 ms (attempt 1 of 10) with error: javax.jdo.JDOUserException: Could not create "increment"/"table" value-generation container since autoCreate flags do not allow it
Answer: the hive user lacks the privilege to create its metastore tables; also check for conflicting MySQL privileges. When MySQL reads the user table it sorts by the most specific host first and, for identical hosts, by the most specific user, so a '%' host entry ranks below an entry for a concrete host and an anonymous user ranks below a named user. Grant the privilege (a verification sketch follows):
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' IDENTIFIED BY 'hive123#' WITH GRANT OPTION;
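A sketch to verify the grant from the Hive host; <mysql-host> is a placeholder for the metastore database server:
mysql -h <mysql-host> -u hive -p'hive123#' -e "SHOW GRANTS FOR CURRENT_USER();"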
17、Installing and configuring PostgreSQL
1) Set the locale:
export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
locale-gen en_US.UTF-8
dpkg-reconfigure locales
2) sudo yum install postgresql-server
3) sudo vim /var/lib/pgsql/data/postgresql.conf and set: listen_addresses = '*' and port = 5432
4) sudo vim /var/lib/pgsql/data/pg_hba.conf and add: host all all 172.20.1.1/16 trust
5) sudo service postgresql start
6) sudo /sbin/chkconfig postgresql on, then confirm with sudo /sbin/chkconfig --list postgresql
7) Connect: sudo -u postgres psql
8) Create the sqoop role:
CREATE ROLE sqoop LOGIN ENCRYPTED PASSWORD 'sqoop123#' NOSUPERUSER INHERIT CREATEDB NOCREATEROLE;
9) Create the database (a remote connection test follows the list):
CREATE DATABASE "sqoop" WITH OWNER = sqoop
ENCODING = 'UTF8'
TABLESPACE = pg_default
LC_COLLATE = 'en_US.UTF-8'
LC_CTYPE = 'en_US.UTF-8'
CONNECTION LIMIT = -1;
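A sketch to confirm the server accepts remote connections as the new role; <server-ip> is a placeholder:
psql -h <server-ip> -p 5432 -U sqoop -d sqoop -c '\conninfo'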
18、cloudera-scm-server dead but pid file exists
Answer: remove the stale pid file with rm /var/run/cloudera-scm-server.pid, then restart the server (see the sketch below).
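A sketch, assuming the standard Cloudera Manager init script is installed:
rm -f /var/run/cloudera-scm-server.pid
service cloudera-scm-server restart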
19、Changing the database used by CDH: vim /etc/cloudera-scm-server/db.properties and replace its contents as needed
DNS1=172.18.1.4
Handy sed commands:
sed -n '2p' file.txt    // print line 2
sed -n '1,4p' file.txt  // print lines 1-4
sed -n '1,$p' file.txt  // print every line
Phoenix tutorial; Zeppelin
java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
export JAVA_HOME=/usr/java/jdk1.8.0_121
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
Creating a table
hive> create table wyp (id int, name string, age int, tel string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n' STORED AS TEXTFILE;
FIELDS TERMINATED BY '\t': fields are separated by tabs
LINES TERMINATED BY '\n': rows are separated by newlines
STORED AS TEXTFILE: the table is stored as plain text (a loading sketch follows)
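A sketch of loading a tab-separated file into the table from the shell; the local path /tmp/wyp.txt is a placeholder:
hive -e "LOAD DATA LOCAL INPATH '/tmp/wyp.txt' INTO TABLE wyp;"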
20、Summary of common issues
Put the Oracle JDBC driver jar into Sqoop's lib directory (see the sketch after this list).
The import/export scripts do not carry over auto-increment, so every int primary-key id column has to be set to auto-increment manually.
sudo -u hdfs hdfs dfs -chmod 777 /user/history
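A sketch of the driver copy; both the jar name and the Sqoop install path are assumptions:
cp ojdbc6.jar /usr/local/sqoop/lib/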