start infinidb for hadoop err

13 posts
gongcheng911 (qingsen zhou)
Joined: May 21 2014 | Junior Boarder | Posts: 13
start infinidb for hadoop err

Hi,

I'm using InfiniDB 4.5 on a Hadoop cluster, but it fails to start. It seems to be something about sockets.

Here is the debug.log output from when I start InfiniDB:

 

Jul 20 11:46:15 hadoop-slave01 oamcpp[25257]: 15.537144 |0|0|0| E 08 CAL0000: getSystemStatus: write exception: InetStreamSocket::connect: connect() error: Connection refused to: InetStreamSocket: sd: 4 inet: 192.168.1.64 port: 8604         
Jul 20 11:46:15 hadoop-slave01 oamcpp[25257]: 15.537269 |0|0|0| E 08 CAL0000: getSystemStatus: MessageQueueClient exception: API Failure return in getSystemStatus API
 
Is it because the start failed that getSystemStatus can't connect?
If so, how can I find out why the start failed?

When I started InfiniDB, I got the message "   System being started, please wait...ProcMgr not responding while waiting for system to start
**** startSystem Failed : check log files"

I don't know what to do.
Any advice would be appreciated!
gongcheng911 (qingsen zhou)
when i try to start system
When I try to start the system again, I get this message:
 
[root@hadoop-slave01 ~]# cc startsystem
startsystem   Sun Jul 20 13:48:07 2014
startSystem command, 'infinidb' service is down, sending command to
start the 'infinidb' service on all modules
 
 
   System being started, please wait...ProcMgr not responding while waiting for system to start
**** startSystem Failed : check log files
 
 
But this time there is nothing in err.log.
Where can I get more information about the error?
gongcheng911 (qingsen zhou)
this time i give up run it on

This time I gave up running it on Hadoop and uninstalled everything on the cluster.

 

Then I installed InfiniDB in internal storage mode, selecting combined mode: each node has a UM and a PM.

 

After that I finished the job:

InfiniDB Install Successfully Completed.

 

I'll have to get it working on Hadoop next time, since there isn't enough time to try now.

 

I also found it very hard to download the new version, infinidb-4.6.0-1.x86_64.rpm.tar.gz.

The download never seems to finish once it reaches 99.5%.

Maybe the source link has a problem.

 

davidhill (david hill)
Joined: Oct 27 2009 | Administrator | Posts: 595
hadoop install

Have you installed the pdsh package on all servers and set up the /etc/pdsh/machines file on all servers as documented?

 

When starting the system on a Hadoop install, InfiniDB uses these utilities. They aren't used or required on a non-Hadoop system.

 

Here is some dependency info for InfiniDB on a Hadoop system:

 

Package dependencies:

 expect
 libgenders-1.14-2.el6.rf.x86_64.rpm
 pdsh-2.27-1.el6.rf.x86_64.rpm

Make sure /etc/pdsh/machines is set up on all servers. It should contain the host names of all servers in the system. Example:

  # vi /etc/pdsh/machines
  
  srvperf7.calpont.com
  srvperf5.calpont.com
  
  # pdcp -a /etc/pdsh/machines /etc/pdsh/
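
For a larger cluster, the machines file can be generated rather than typed by hand. This is a hypothetical sketch (the host list and the temp-file path are placeholders, not from the original post); on a real system you would write to /etc/pdsh/machines and then push it out with pdcp as above.

```shell
# Hypothetical sketch: build a pdsh machines file from a host list and
# sanity-check it before distributing. A temp path stands in for
# /etc/pdsh/machines so the sketch is safe to run anywhere.
MACHINES_FILE=$(mktemp)

# One hostname per line, no blanks -- pdsh reads this file verbatim.
cat > "$MACHINES_FILE" <<'EOF'
srvperf7.calpont.com
srvperf5.calpont.com
EOF

# Skip empty lines and report what would be distributed.
while read -r host; do
  [ -n "$host" ] && echo "would distribute to: $host"
done < "$MACHINES_FILE"
```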

SSH keys need to be set up between all servers for the pdsh commands to work. The Amazon setup will configure Amazon instances for you; on non-Amazon systems, do the following on each server:

 ssh-keygen -t dsa
 scp ~/.ssh/id_dsa.pub SERVERXXX:.ssh/authorized_keys2
 
 Then on each server, run this to set up passwordless login to the local server itself:
 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys2
 
 Make sure you log into each server at least once so that you get a direct connection, like so:
 
 [root@srvperf5 ~]# ssh srvperf7.calpont.com
   Last login: Tue Sep 10 09:44:34 2013 from srvperf5.calpont.com
   
 Run this command to test out the pdsh functionality:
 
 pdsh -a '/etc/init.d/infinidb status'

 

David 

gongcheng911 (qingsen zhou)
Thanks a lot!

I haven't installed the pdsh package.

I will try again following your advice.

 

gongcheng911 (qingsen zhou)
this time I have install pdsh

This time I have installed pdsh.

When I install InfiniDB, I get the message below:
 
[root@hadoop-slave01 ~]# rpm -ivh infinidb-libs-4.6.0-1.x86_64.rpm infinidb-platform-4.6.0-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:infinidb-libs          ########################################### [ 50%]
InfinIDB RPM install completed
   2:infinidb-platform      ########################################### [100%]
/usr/local/Calpont/bin/setenv-hdfs-20: line 70: [: /usr/lib/hadoop/libexec/hadoop-config.sh: binary operator expected
/usr/local/Calpont/bin/setenv-hdfs-12: line 71: [: /usr/lib/hadoop/libexec/hadoop-config.sh: binary operator expected
 
If you are intending to install InfiniDB over Hadoop, the Hadoop sanity check did not pass.  
Most likely there is an environment setup conflict or the hdfs services are down.
Please Contact InfiniDB Customer Support.
InfinIDB RPM install completed
 
I use Cloudera 5.0; is that the reason for the "binary operator expected" error?
The documentation says:
Versions of Apache Hadoop supported by InfiniDB:
•  Cloudera 4.x  (Both Package and Parcel)
•  HortonWorks 1.3
 
gongcheng911 (qingsen zhou)
I try to find the problem.

I tried to find the problem, and modified line 71 of the /usr/local/Calpont/bin/setenv-hdfs-20 shell script from

 

if [ ! -z $libexec ]; then

 

to 

 

if [ ! -z "$libexec" ]; then

 

and it seems OK.
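
The change above is a genuine shell quoting issue, not something specific to InfiniDB. A minimal reproduction (with hypothetical paths) shows why the unquoted test breaks when the variable expands to more than one word:

```shell
# Hypothetical reproduction of the quoting bug fixed above. When
# $libexec expands to two words, the unquoted test becomes
#   [ ! -z /usr/lib/hadoop/libexec /usr/lib/hadoop/libexec/hadoop-config.sh ]
# and test(1) complains "binary operator expected".
libexec="/usr/lib/hadoop/libexec /usr/lib/hadoop/libexec/hadoop-config.sh"

if [ ! -z $libexec ] 2>/dev/null; then
  unquoted=ok
else
  unquoted=error   # the test itself errored out with "binary operator expected"
fi

# Quoting makes the whole value a single argument, so the test is valid.
if [ ! -z "$libexec" ]; then
  quoted=ok
fi

echo "unquoted=$unquoted quoted=$quoted"
```

So yes, the unquoted test is a real bug whenever the expanded value contains whitespace or holds multiple candidate paths.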

 

Then I uncommented the lines below:

 

echo "Using"
echo "Hadoop bash path " $basepath
echo "Hadoop library path " $libpath
echo "Hadoop execlib path " $HADOOP_LIBEXEC_DIR
echo "java_home " $JAVA_HOME
echo "java_path " $javalibpath

 

 

Then I got:

 

Using
Hadoop bash path  /usr
Hadoop library path  /usr/lib/impala/lib
Hadoop execlib path  /usr/lib/hadoop/libexec
java_home  /usr/jdk1.7.0_60
java_path  /usr/jdk1.7.0_60/jre/lib/amd64/server
 
Is "Hadoop library path  /usr/lib/impala/lib" a wrong library path?
And is the line 71 I changed a bug or not?
davidhill (david hill)
Cloudera 5.0 install issue

We haven't performed any certification testing with Cloudera 5.0 at this time.

But from what you report, it does look like the installation paths have changed, among other things.

I will open a bug on these issues; thanks for reporting them.

 

David

gongcheng911 (qingsen zhou)
cdh info

Here is the CDH info:

 
[root@hadoop-master bin]# hadoop version
Hadoop 2.3.0-cdh5.0.0
Subversion git://github.sf.cloudera.com/CDH/cdh.git -r 8e266e052e423af592871e2dfe09d54c03f6a0e8
Compiled by jenkins on 2014-03-28T04:30Z
Compiled with protoc 2.5.0
From source with checksum fae92214f92a3313887764456097e0
This command was run using /usr/lib/hadoop/hadoop-common-2.3.0-cdh5.0.0.jar
 
It seems to be the same version as the one you have tested.
 
 
Here is the /etc/profile:
export JAVA_HOME=/usr/jdk1.7.0_60
export HADOOP_HOME=/usr/cdh/hadoop-2.3.0-cdh5.0.2
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=/usr/bin:$PATH:$JAVA_HOME/bin:JAVA/jre/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
 
and here is /root/.bashrc:
# InfiniDB Alias Commands
#
alias idbmysql='/usr/local/Calpont/mysql/bin/mysql --defaults-file=/usr/local/Calpont/mysql/my.cnf -u root'
alias cc=/usr/local/Calpont/bin/calpontConsole
alias cmconsole=/usr/local/Calpont/bin/calpontConsole
alias home='cd /usr/local/Calpont/'
alias log='cd /var/log/Calpont/'
alias core='cd /var/log/Calpont/corefiles'
alias tmsg='tail -f /var/log/messages'
alias tdebug='tail -f /var/log/Calpont/debug.log'
alias tinfo='tail -f /var/log/Calpont/info.log'
alias dbrm='cd /usr/local/Calpont/data1/systemFiles/dbrm'
alias module='cat /usr/local/Calpont/local/module'
 

 

davidhill (david hill)
hadoop 5.0

If you could, please provide some additional information on the specific version you have installed and how it was installed (Packages, Parcel, or custom).

We are testing with the downloaded Cloudera Manager Express versions.

We have tested both 5.0.0 and 5.1, and they both installed correctly, so I'm curious what might be different on your system.

 

Run this to get the version:

 

# hadoop version

 

 

Here are the versions we have tested successfully with:

 

 [root@ss-data1 ~]# hadoop version

Hadoop 2.3.0-cdh5.0.0
Subversion git://github.sf.cloudera.com/CDH/cdh.git -r 8e266e052e423af592871e2dfe09d54c03f6a0e8
Compiled by jenkins on 2014-03-28T04:30Z
Compiled with protoc 2.5.0
From source with checksum fae92214f92a3313887764456097e0
This command was run using /usr/lib/hadoop/hadoop-common-2.3.0-cdh5.0.0.jar
[root@ss-data1 ~]# 
 
 
[root@ss1-data1 bin]# hadoop version
Hadoop 2.3.0-cdh5.1.0
Subversion git://github.sf.cloudera.com/CDH/cdh.git -r 8e266e052e423af592871e2dfe09d54c03f6a0e8
Compiled by jenkins on 2014-07-12T13:49Z
Compiled with protoc 2.5.0
From source with checksum 7ec68264497939dee7ab5b91250cbd9
This command was run using /opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/hadoop-common-2.3.0-cdh5.1.0.jar
 
Thanks, David

 

gongcheng911 (qingsen zhou)
I got a new problem when I

I got a new problem when loading HDFS data into the InfiniDB cluster (not Hadoop mode):

 

[root@hadoop-master bin]# ./sqoop export -D mapred.task.timeout=0 --direct --connect jdbc:infinidb://hadoop-master/test --username root --table usersum --export-dir /testroot/data/sum_20140711*.csv --input-fields-terminated-by '\t'
14/07/26 14:46:33 INFO tool.BaseSqoopTool: Found an InfiniDB connect string, using a mysql connection string for compatibility
14/07/26 14:46:33 INFO tool.BaseSqoopTool: Using InfiniDB-specific delimiters for output if not explicitly specified
14/07/26 14:46:33 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
14/07/26 14:46:33 INFO tool.CodeGenTool: Beginning code generation
14/07/26 14:46:33 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `usersum` AS t LIMIT 1
14/07/26 14:46:33 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `usersum` AS t LIMIT 1
14/07/26 14:46:34 INFO orm.CompilationManager: HADOOP_HOME is /usr/cdh/hadoop-2.3.0-cdh5.0.2
Note: /tmp/sqoop-root/compile/478eed3401a8f47a392ad78c19ee54c5/usersum.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/07/26 14:46:35 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/478eed3401a8f47a392ad78c19ee54c5/usersum.jar
14/07/26 14:46:35 INFO mapreduce.ExportJobBase: Beginning export of usersum
14/07/26 14:46:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/07/26 14:46:36 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/07/26 14:46:37 WARN mapreduce.ExportJobBase: Input path hdfs://hadoop-master:8020/testroot/data/sum_20140711*.csv does not exist
14/07/26 14:46:37 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
14/07/26 14:46:37 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/07/26 14:46:37 INFO client.RMProxy: Connecting to ResourceManager at hadoop-master/192.168.1.62:8032
14/07/26 14:46:39 INFO input.FileInputFormat: Total input paths to process : 3
14/07/26 14:46:39 INFO mapreduce.InfiniDBExportInputFormat: Adding blocks to split hadoop-slave01
14/07/26 14:46:39 INFO mapreduce.InfiniDBExportInputFormat:    path hdfs://hadoop-master:8020/testroot/data/sum_201407110900.csv offset 0 length 35990671
14/07/26 14:46:39 INFO mapreduce.InfiniDBExportInputFormat: Adding blocks to split hadoop-slave02
14/07/26 14:46:39 INFO mapreduce.InfiniDBExportInputFormat:    path hdfs://hadoop-master:8020/testroot/data/sum_201407111000.csv offset 0 length 35990671
14/07/26 14:46:39 INFO mapreduce.InfiniDBExportInputFormat: Adding blocks to split hadoop-slave03
14/07/26 14:46:39 INFO mapreduce.InfiniDBExportInputFormat:    path hdfs://hadoop-master:8020/testroot/data/sum_201407110800.csv offset 0 length 35990671
14/07/26 14:46:39 INFO mapreduce.JobSubmitter: number of splits:3
14/07/26 14:46:39 INFO Configuration.deprecation: mapred.task.timeout is deprecated. Instead, use mapreduce.task.timeout
14/07/26 14:46:39 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1406356950848_0001
14/07/26 14:46:40 INFO impl.YarnClientImpl: Submitted application application_1406356950848_0001
14/07/26 14:46:40 INFO mapreduce.Job: The url to track the job: http://hadoop-master:8088/proxy/application_1406356950848_0001/
14/07/26 14:46:40 INFO mapreduce.Job: Running job: job_1406356950848_0001
14/07/26 14:46:47 INFO mapreduce.Job: Job job_1406356950848_0001 running in uber mode : false
14/07/26 14:46:47 INFO mapreduce.Job:  map 0% reduce 0%
14/07/26 14:46:47 INFO mapreduce.Job: Job job_1406356950848_0001 failed with state FAILED due to: Application application_1406356950848_0001 failed 2 times due to AM Container for appattempt_1406356950848_0001_000002 exited with  exitCode: 1 due to: Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException: 
org.apache.hadoop.util.Shell$ExitCodeException: 
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
 
 
Container exited with a non-zero exit code 1
.Failing this attempt.. Failing the application.
14/07/26 14:46:47 INFO mapreduce.Job: Counters: 0
14/07/26 14:46:47 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
14/07/26 14:46:47 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 10.0758 seconds (0 bytes/sec)
14/07/26 14:46:47 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
14/07/26 14:46:47 INFO mapreduce.ExportJobBase: Exported 0 records.
14/07/26 14:46:47 ERROR tool.ExportTool: Error during export: Export job failed!
You have new mail in /var/spool/mail/root
 
It seems sqoop can't load the HDFS data into InfiniDB (not Hadoop mode),
or there is some error in my Hadoop cluster.
The ExitCodeException has no output, so I don't know the reason yet.
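
Two checks often narrow this kind of failure down. This is a hedged sketch, not from the original thread: the glob and application id are copied from the log above, and each command is guarded so the script is a no-op on a machine without the Hadoop CLIs.

```shell
# Debugging sketch for the failed sqoop export above. The glob and the
# application id come from the pasted log; the guards make this safe to
# run where the hdfs/yarn CLIs are not installed.
GLOB='/testroot/data/sum_20140711*.csv'
APP_ID='application_1406356950848_0001'

if command -v hdfs >/dev/null 2>&1; then
  # The WARN line reports the literal wildcard path as missing; confirm
  # what the glob actually matches in HDFS.
  hdfs dfs -ls "$GLOB"
fi

if command -v yarn >/dev/null 2>&1; then
  # The AM container exited with code 1 but printed no stack trace; the
  # container logs usually hold the real error.
  yarn logs -applicationId "$APP_ID"
fi
```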
davidhill (david hill)
[quote=gongcheng911]I got a

[quote=gongcheng911]
I got a new problem when I load hdfs data into infinidb cluster(not hadoop mode) [...]
[/quote]

 

Please post this in a new thread, and do the same for each new issue, so they can be addressed accordingly.

Thanks

gongcheng911 (qingsen zhou)
how it is install (Packages,

How was it installed (Packages, Parcel, or custom)?

It was installed with Cloudera Manager, which auto-installed CDH5 Hadoop. So I found that environment variables like HADOOP_HOME are unused.

For example, the ResourceManager port for YARN is 8080 in the yarn-site.xml under HADOOP_HOME, but it is actually 8082 in /etc/hadoop/conf/yarn-site.xml, and when I used jps and netstat to check, the real port was 8020. Then I changed the yarn-site.xml in HADOOP_HOME to 8080, and sqoop could connect to the ResourceManager this time, but I still have the same problem as in my earlier post.

I think an install done by Cloudera Manager has a different environment, which causes many problems for apps on Hadoop that can't set parameters themselves but try to get them from the environment, just like the setenv-hdfs-20 script.
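
One way to cross-check which yarn-site.xml is actually in effect is to extract the configured ResourceManager address and compare it with what is really listening. This is a hedged sketch: the sample XML below is a stand-in for /etc/hadoop/conf/yarn-site.xml, and the 8032 value is taken from the sqoop log earlier in the thread, not from any particular config.

```shell
# Sketch: pull the ResourceManager address out of a yarn-site.xml. The
# sample file is an assumption standing in for /etc/hadoop/conf/yarn-site.xml.
YARN_SITE=$(mktemp)
cat > "$YARN_SITE" <<'EOF'
<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop-master:8032</value>
  </property>
</configuration>
EOF

# Grab the <value> line that follows the property name.
configured=$(grep -A1 'yarn.resourcemanager.address' "$YARN_SITE" \
  | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
echo "configured RM address: $configured"

# On a live node, confirm something is actually listening on that port:
#   netstat -tlnp | grep ":${configured##*:}"
```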