This article walks through a "Spark MaprLab-Auction Data" example analysis. The explanation is kept simple and clear; follow along step by step to study how the analysis works.
一、Environment Setup
1. Install Hadoop
2. Install Spark
3. Start Hadoop
4. Start Spark
二、Example Analysis
1. Data Preparation
Download the dataset DEV360DATA.zip from the MapR website and upload it to the server.
[hadoop@hftclclw0001 spark-1.5.1-bin-hadoop2.6]$ pwd
/home/hadoop/spark-1.5.1-bin-hadoop2.6
[hadoop@hftclclw0001 spark-1.5.1-bin-hadoop2.6]$ cd test-data/DEV360Data
[hadoop@hftclclw0001 DEV360Data]$ pwd
/home/hadoop/spark-1.5.1-bin-hadoop2.6/test-data/DEV360Data
[hadoop@hftclclw0001 DEV360Data]$ ll
total 337940
-rwxr-xr-x 1 hadoop root 575014 Jun 24 16:18 auctiondata.csv <== the data used in this example
-rw-r--r-- 1 hadoop root 57772855 Aug 18 20:11 sfpd.csv
-rwxrwxrwx 1 hadoop root 287692676 Jul 26 20:39 sfpd.json
[hadoop@hftclclw0001 DEV360Data]$ more auctiondata.csv
8213034705,95,2.927373,jake7870,0,95,117.5,xbox,3
8213034705,115,2.943484,davidbresler2,1,95,117.5,xbox,3
8213034705,100,2.951285,gladimacowgirl,58,95,117.5,xbox,3
8213034705,117.5,2.998947,daysrus,10,95,117.5,xbox,3
8213060420,2,0.065266,donnie4814,5,1,120,xbox,3
8213060420,15.25,0.123218,myreeceyboy,52,1,120,xbox,3
# The schema is as follows
auctionid,bid,bidtime,bidder,bidderrate,openbid,price,itemtype,daystolive
# Upload the data to HDFS
[hadoop@hftclclw0001 DEV360Data]$ hdfs dfs -mkdir -p /spark/exer/mapr
[hadoop@hftclclw0001 DEV360Data]$ hdfs dfs -put auctiondata.csv /spark/exer/mapr
[hadoop@hftclclw0001 DEV360Data]$ hdfs dfs -ls /spark/exer/mapr
Found 1 items
-rw-r--r-- 2 hadoop supergroup 575014 2015-10-29 06:17 /spark/exer/mapr/auctiondata.csv
2. Launch spark-shell (I use Scala) and analyze the data against the following tasks.
tasks:
a.How many items were sold?
b.How many bids per item type?
c.How many different kinds of item type?
d.What was the minimum number of bids?
e.What was the maximum number of bids?
f.What was the average number of bids?
[hadoop@hftclclw0001 spark-1.5.1-bin-hadoop2.6]$ pwd
/home/hadoop/spark-1.5.1-bin-hadoop2.6
[hadoop@hftclclw0001 spark-1.5.1-bin-hadoop2.6]$ ./bin/spark-shell
scala>
# First, load the data from HDFS to create an RDD
scala> val originalRDD = sc.textFile("/spark/exer/mapr/auctiondata.csv")
scala> originalRDD <== let's look at the type of originalRDD: RDD[String], which can be viewed as an array of lines, i.e. Array[String]
res26: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[1] at textFile at <console>:21
# Split each line on "," using map
scala> val auctionRDD = originalRDD.map(_.split(","))
scala> auctionRDD <== the type of auctionRDD is RDD[Array[String]]: still an array, but each element is itself an array, i.e. Array[Array[String]]
res17: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[5] at map at <console>:23
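(A side note that is not part of the original lab transcript: if you prefer named fields over positional indexes like bid(0) and bid(7), a minimal sketch along the following lines maps each row onto a case class. The class and field names are my own, taken from the schema listed above.)
scala> // Hypothetical helper: give the nine columns names via a case class
scala> case class Auction(auctionid: String, bid: Float, bidtime: Float, bidder: String, bidderrate: Int, openbid: Float, price: Float, itemtype: String, daystolive: Int)
scala> val auctions = auctionRDD.map(a => Auction(a(0), a(1).toFloat, a(2).toFloat, a(3), a(4).toInt, a(5).toFloat, a(6).toFloat, a(7), a(8).toInt))
scala> auctions.map(_.itemtype).distinct().count() // field access by name instead of index
The rest of the walkthrough keeps the original positional style.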
a.How many items were sold?
==> val count = auctionRDD.map(bid => bid(0)).distinct().count()
Dedupe on auctionid: each record is already split on ",", so take the auctionid, deduplicate, then count.
# Take the first column, i.e. the auctionid, again using map
# To understand the line below: since auctionRDD is Array[Array[String]], each element passed to map is an Array[String]; the auctionid is the first field, so we take element (0). Note that Scala indexes with () rather than [].
scala> val auctionidRDD = auctionRDD.map(_(0))
scala> auctionidRDD <== the type of auctionidRDD is RDD[String]: think of it as the array of all auctionids
res27: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[17] at map at <console>:26
# Deduplicate auctionidRDD
scala> val auctionidDistinctRDD = auctionidRDD.distinct()
scala> auctionidDistinctRDD.count()
...
b.How many bids per item type?
==> auctionRDD.map(bid => (bid(7),1)).reduceByKey((x,y) => x + y).collect()
# map over each row and take field 7, the itemtype column, emitting (itemtype, 1)
# The output can be viewed as an array of (String, Int) pairs
scala> auctionRDD.map(bid => (bid(7),1))
res30: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[26] at map at <console>:26
# reduceByKey reduces all values that share the same key
# For a given key, reduceByKey behaves like this:
# (xbox,1)(xbox,1)(xbox,1)(xbox,1)...(xbox,1) ==reduceByKey==> (xbox, (((1 + 1) + 1) + ... + 1))
scala> auctionRDD.map(bid => (bid(7),1)).reduceByKey((x,y) => x + y)
# The type is still (String, Int) pairs: String = itemtype, and the Int is now the total count for that itemtype
res31: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[28] at reduceByKey at <console>:26
# collect() pulls the result back to the driver as a local Array
scala> auctionRDD.map(bid => (bid(7),1)).reduceByKey((x,y) => x + y).collect()
res32: Array[(String, Int)] = Array((palm,5917), (cartier,1953), (xbox,2784))
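The original write-up stops after task b, leaving tasks c through f listed but unanswered. As a hedged sketch in the same style (assuming, as in the MapR lab, that "number of bids" in tasks d-f means bids per auction), they can be completed like this:
scala> // c. How many different kinds of item type? Same pattern as task a, applied to field 7
scala> auctionRDD.map(_(7)).distinct().count()
scala> // d/e/f. First count bids per auctionid, then reduce over those counts
scala> val bidsPerAuction = auctionRDD.map(bid => (bid(0), 1)).reduceByKey((x, y) => x + y)
scala> bidsPerAuction.map(_._2).reduce((x, y) => Math.min(x, y)) // d. minimum number of bids
scala> bidsPerAuction.map(_._2).reduce((x, y) => Math.max(x, y)) // e. maximum number of bids
scala> auctionRDD.count() / bidsPerAuction.count().toDouble // f. average number of bids per auction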
That concludes the "Spark MaprLab-Auction Data" example analysis; the best way to consolidate it is to run through the steps yourself and verify the results.