
Hadoop 2.10: the bundled example job does not produce the output directory

Command executed:

./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep input output 'dfs[a-z.]+'

Environment: JDK 8, Hadoop 2.10. When I run the example against the test data, the output directory is never generated; instead the following directories appear:

hadoop@code-shop:/usr/local/hadoop$ ./bin/hdfs dfs -ls
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2021-01-19 23:39 grep-temp-1442452675
drwxr-xr-x   - hadoop supergroup          0 2021-01-19 22:51 input
hadoop@code-shop:/usr/local/hadoop$ 
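For what it's worth, the grep-temp-… directory above is expected while the example is still in progress: the bundled grep example submits two chained MapReduce jobs, the first of which greps the input into a temporary directory named grep-temp-<random>, and only the second (sort) job writes the final output directory, after which the temp directory is cleaned up. So output is created only if both jobs run to completion. A small shell sketch of what to expect and how to clean up after a failed run (part-r-00000 is the usual reducer output file name, assumed here):

# After a successful run, the result should be visible as:
./bin/hdfs dfs -ls output
./bin/hdfs dfs -cat output/part-r-00000

# Before retrying, remove leftovers from the failed run, otherwise the next
# attempt can fail because the directories already exist:
./bin/hdfs dfs -rm -r 'grep-temp-*'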

Log from the run; there does not seem to be any obvious error in it:

hadoop@code-shop:/usr/local/hadoop$ ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep input output 'dfs[a-z.]+'
21/01/19 23:43:20 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
21/01/19 23:43:20 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
21/01/19 23:43:20 INFO input.FileInputFormat: Total input files to process : 8
21/01/19 23:43:21 INFO mapreduce.JobSubmitter: number of splits:8
21/01/19 23:43:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1446384809_0001
21/01/19 23:43:22 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
21/01/19 23:43:22 INFO mapreduce.Job: Running job: job_local1446384809_0001
21/01/19 23:43:22 INFO mapred.LocalJobRunner: OutputCommitter set in config null
21/01/19 23:43:22 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
21/01/19 23:43:22 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
21/01/19 23:43:22 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
21/01/19 23:43:22 INFO mapred.LocalJobRunner: Waiting for map tasks
21/01/19 23:43:22 INFO mapred.LocalJobRunner: Starting task: attempt_local1446384809_0001_m_000000_0
21/01/19 23:43:22 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
21/01/19 23:43:22 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
21/01/19 23:43:22 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
21/01/19 23:43:22 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/hadoop-policy.xml:0+10206
21/01/19 23:43:49 INFO mapreduce.Job: Job job_local1446384809_0001 running in uber mode : false
21/01/19 23:43:49 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
21/01/19 23:43:49 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
21/01/19 23:43:49 INFO mapred.MapTask: soft limit at 83886080
21/01/19 23:43:49 INFO mapreduce.Job:  map 0% reduce 0%
21/01/19 23:43:49 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
21/01/19 23:43:49 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
21/01/19 23:43:49 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
21/01/19 23:43:59 INFO mapred.LocalJobRunner: 
21/01/19 23:43:59 INFO mapred.MapTask: Starting flush of map output
21/01/19 23:43:59 INFO mapred.MapTask: Spilling map output
21/01/19 23:43:59 INFO mapred.MapTask: bufstart = 0; bufend = 17; bufvoid = 104857600
21/01/19 23:43:59 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214396(104857584); length = 1/6553600
21/01/19 23:43:59 INFO mapred.MapTask: Finished spill 0
21/01/19 23:43:59 INFO mapred.Task: Task:attempt_local1446384809_0001_m_000000_0 is done. And is in the process of committing
21/01/19 23:43:59 INFO mapred.LocalJobRunner: map
21/01/19 23:43:59 INFO mapred.Task: Task 'attempt_local1446384809_0001_m_000000_0' done.
21/01/19 23:43:59 INFO mapred.Task: Final Counters for attempt_local1446384809_0001_m_000000_0: Counters: 23
    File System Counters
        FILE: Number of bytes read=304459
        FILE: Number of bytes written=803155
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=10206
        HDFS: Number of bytes written=0
        HDFS: Number of read operations=5
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=1
    Map-Reduce Framework
        Map input records=237
        Map output records=1
        Map output bytes=17
        Map output materialized bytes=25
        Input split bytes=122
        Combine input records=1
        Combine output records=1
        Spilled Records=1
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=13
        Total committed heap usage (bytes)=234881024
    File Input Format Counters 
        Bytes Read=10206
21/01/19 23:43:59 INFO mapred.LocalJobRunner: Finishing task: attempt_local1446384809_0001_m_000000_0
21/01/19 23:43:59 INFO mapred.LocalJobRunner: Starting task: attempt_local1446384809_0001_m_000001_0
21/01/19 23:43:59 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
21/01/19 23:43:59 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
21/01/19 23:43:59 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
21/01/19 23:43:59 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/capacity-scheduler.xml:0+8814
Killed
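A plain "Killed" with no Java exception before it usually means the JVM was terminated from outside, most often by the Linux OOM killer on a memory-constrained machine; that is only a guess here, since the log itself shows nothing. Two quick checks right after a failed run:

dmesg | grep -iE 'killed process|out of memory'   # kernel OOM-killer messages, if any
free -h                                           # available memory and swap

If memory is indeed the problem, adding swap or more RAM is the simplest fix. Note that the run died while starting the second of the eight map tasks, so the first (grep) job never finished and the second (sort) job that would have created output never ran at all.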


1 Answer

Waiting for an expert to answer.
