This post walks through a Spark SQL code example: a small script adapted from the official documentation, the error it ran into, and how that error was diagnosed and fixed.
Following the Spark SQL example on the official site, I wrote a script of my own:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.createSchemaRDD
case class UserLog(userid: String, time1: String, platform: String, ip: String, openplatform: String, appid: String)
// Create an RDD of UserLog objects and register it as a table.
val user = sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*").map(_.split("\\^")).map(u => UserLog(u(0), u(1), u(2), u(3), u(4), u(5)))
user.registerTempTable("user_log")
// SQL statements can be run by using the sql methods provided by sqlContext.
val allusers = sqlContext.sql("SELECT * FROM user_log")
// The results of SQL queries are SchemaRDDs and support all the normal RDD operations.
// The columns of a row in the result can be accessed by ordinal.
allusers.map(t => "UserId: " + t(0)).collect().foreach(println)
Running it failed with:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 50.0 failed 1 times, most recent failure: Lost task 1.0 in stage 50.0 (TID 73, localhost): java.lang.ArrayIndexOutOfBoundsException: 5
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:30)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:30)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1319)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:910)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:910)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1319)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1319)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
at org.apache.spark.scheduler.Task.run(Task.scala:56)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
The log shows an array index out of bounds. Running the command
sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*").map(_.split("\\^")).foreach(x => println(x.size))
reveals that one record splits into only 5 fields:
6
15/05/21 20:47:37 INFO Executor: Finished task 0.0 in stage 2.0 (TID 4). 1774 bytes result sent to driver
15/05/21 20:47:37 INFO Executor: Finished task 1.0 in stage 2.0 (TID 5). 1774 bytes result sent to driver
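To see the offending line itself rather than just its field count, a small variant of the same command (my own sketch, not from the original post) prints any line that does not split into exactly 6 fields:

// Sketch: print (up to 10 of) the lines whose default split does not yield 6 fields.
sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*")
  .filter(_.split("\\^").length != 6)
  .take(10)
  .foreach(println)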
The culprit is a record whose trailing fields are empty: "44671799^2015-03-27 20:56:05^2^117.93.193.238^0^^".
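The count drops to 5 because java.lang.String.split with the default limit discards trailing empty strings. A quick check in the shell on that record (my own illustration) shows the behaviour, and also shows that a negative limit keeps the empty fields:

// The problematic record from the log.
val line = "44671799^2015-03-27 20:56:05^2^117.93.193.238^0^^"
// Default split (limit 0) drops trailing empty strings, so only 5 fields remain
// and u(5) throws ArrayIndexOutOfBoundsException.
println(line.split("\\^").length)     // 5
// A negative limit keeps the trailing empty strings, so index 5 is a valid (empty) field.
println(line.split("\\^", -1).length) // 7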
The fix, as suggested online, is the two-argument split(regex, limit): passing a negative limit keeps the trailing empty strings. Modified code:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.createSchemaRDD
case class UserLog(userid: String, time1: String, platform: String, ip: String, openplatform: String, appid: String)
// Create an RDD of UserLog objects and register it as a table.
val user = sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*").map(_.split("\\^", -1)).map(u => UserLog(u(0), u(1), u(2), u(3), u(4), u(5)))
user.registerTempTable("user_log")
// SQL statements can be run by using the sql methods provided by sqlContext.
val allusers = sqlContext.sql("SELECT * FROM user_log")
// The results of SQL queries are SchemaRDDs and support all the normal RDD operations.
// The columns of a row in the result can be accessed by ordinal.
allusers.map(t => "UserId: " + t(0)).collect().foreach(println)
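Alternatively, instead of relying on the split limit, a more defensive variant (a sketch of my own, not from the original post) filters out any record with fewer than 6 fields before constructing UserLog, so a single malformed line cannot fail the whole job:

// Sketch: drop malformed records instead of assuming every line has at least 6 fields.
val safeUser = sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*")
  .map(_.split("\\^", -1))
  .filter(_.length >= 6)
  .map(u => UserLog(u(0), u(1), u(2), u(3), u(4), u(5)))
safeUser.registerTempTable("user_log_safe")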
That wraps up this analysis of the Spark SQL code example; hopefully the debugging steps above help the next time a split-related ArrayIndexOutOfBoundsException shows up.