The purpose of Hadoop's distributed cache (DistributedCache) is to share a single configuration file across all tasks of a MapReduce job. The file is first placed on HDFS; then, through a setting made while the job is being configured, the framework downloads it to each task's local working directory at run time. The setup looks like this:
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
    Configuration conf = new Configuration();
    conf.set("fs.default.name", "hdfs://192.168.1.45:9000");
    FileSystem fs = FileSystem.get(conf);
    // Remove leftover output from a previous run.
    fs.delete(new Path("CASICJNJP/gongda/Test_gd20140104"), true);
    conf.set("mapred.job.tracker", "192.168.1.45:9001");
    conf.set("mapred.jar", "/home/hadoop/workspace/jar/OBDDataSelectWithImeiTxt.jar");
    Job job = new Job(conf, "myTaxiAnalyze");
    // Create a symlink in each task's working directory so the cached
    // file can later be opened by its plain file name (imei.txt).
    DistributedCache.createSymlink(job.getConfiguration());
    try {
        DistributedCache.addCacheFile(new URI("/user/hadoop/CASICJNJP/DistributeFiles/imei.txt"),
                job.getConfiguration());
    } catch (URISyntaxException e1) {
        e1.printStackTrace();
    }
    job.setMapperClass(OBDDataSelectMaper.class);
    job.setReducerClass(OBDDataSelectReducer.class);
    //job.setNumReduceTasks(10);
    //job.setCombinerClass(IntSumReducer.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path("/user/hadoop/CASICJNJP/SortedData/20140104"));
    FileOutputFormat.setOutputPath(job, new Path("CASICJNJP/gongda/SelectedData"));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}
In the code above, the DistributedCache.addCacheFile call registers the HDFS file /user/hadoop/CASICJNJP/DistributeFiles/imei.txt as a distributed cache file.
The mapper is declared as public class OBDDataSelectMaper extends Mapper&lt;LongWritable, Text, Text, Text&gt;; its body is not included here.
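Since the mapper body is missing, here is a hedged sketch of the part it would typically contain: setup() opens the symlinked imei.txt (available under its plain name because createSymlink was called) and loads it into a lookup set, and map() keeps only records whose IMEI appears in that set. To keep the sketch self-contained, the logic is isolated in a hypothetical ImeiFilter helper; the assumptions are that imei.txt holds one IMEI per line and that input records are comma-separated with the IMEI as the first field. Neither the helper name nor the record layout comes from the original post.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.HashSet;
import java.util.Set;

// Hypothetical helper isolating the cache-file lookup logic. In the real
// mapper, setup() would call load() with a reader over "imei.txt" (the
// symlink created in each task's working directory), and map() would call
// matches() on each input record, emitting only the matching ones.
public class ImeiFilter {

    private final Set<String> imeis = new HashSet<String>();

    // Load one IMEI per line from the cached file into the lookup set.
    public void load(BufferedReader reader) throws IOException {
        String line;
        while ((line = reader.readLine()) != null) {
            line = line.trim();
            if (!line.isEmpty()) {
                imeis.add(line);
            }
        }
    }

    // Assumed record layout: the IMEI is the first comma-separated field.
    public boolean matches(String record) {
        int comma = record.indexOf(',');
        String imei = (comma < 0) ? record : record.substring(0, comma);
        return imeis.contains(imei);
    }

    // Small standalone demonstration with an in-memory "cache file".
    public static void main(String[] args) throws IOException {
        ImeiFilter filter = new ImeiFilter();
        filter.load(new BufferedReader(new StringReader("86012345\n86067890\n")));
        System.out.println(filter.matches("86012345,2014-01-04,35.1,117.0"));
        System.out.println(filter.matches("12345678,2014-01-04,35.1,117.0"));
    }
}
```

Loading the set once in setup() rather than per record matters: the cache file is read a single time per task, and each map() call is then an O(1) HashSet lookup.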