Unable to run CloudSuite Graph Analytics or In-Memory Analytics on RHEL 8/podman

  • #1

    I'm running RHEL 8 on an AWS m5.metal instance (48 cores, 384 GB memory, 500 GB io1 disk, 25 Gbps network).

    I'm trying to run several Phoronix benchmarks on the system. I can run the Phoronix CloudSuite Data Analytics test just fine. It creates containers and runs the benchmarks. But when I run CloudSuite Graph Analytics or CloudSuite In-Memory Analytics, the containers are built but die instantly.

    RHEL 8 ships with podman instead of Docker, so I installed the podman-docker package so that when the Phoronix test calls docker, the call is routed to podman. Podman is supposed to accept all the same command-line arguments as Docker.
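    As a sanity check that the shim is actually in place (this is just how I verified it on my box; your version string will differ):

    # podman-docker installs a /usr/bin/docker script that execs podman
    sudo dnf install -y podman-docker

    # "docker" should now report podman's version, confirming calls go to podman
    docker --version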

    When I run the CloudSuite Data Analytics test, it runs successfully and I get this in the journal (sorry it's so long):



    If I run CloudSuite Graph Analytics, I get this in the journal:

    http://www.camerontech.com/failed.html

    Since the test doesn't run, there is no log under /var/lib/phoronix-test-suite/test-results, which is incredibly frustrating. And /var/log/phoronix-test-suite-benchmark.log has only four lines in it: the test starting and stopping.
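    In case it helps anyone reproduce this, here's how I've been scraping what little the dead containers leave behind (the container name below is a placeholder; use whatever "podman ps -a" shows):

    # List recent containers, including ones that already exited
    podman ps -a --last 5

    # Dump whatever the container printed before it died
    podman logs <container-name>

    # Check the exit code podman recorded for it
    podman inspect --format '{{.State.ExitCode}}' <container-name>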

    Any ideas why these tests are not running?

  • #2
    For what it's worth, here's the output when I run the container directly with podman from the console:

    [root@ip-172-31-18-125 ~]# podman run docker.io/cloudsuite/graph-analytics:latest
    Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
    20/04/16 03:08:58 INFO SparkContext: Running Spark version 2.1.0
    20/04/16 03:08:58 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    20/04/16 03:08:58 INFO SecurityManager: Changing view acls to: root
    20/04/16 03:08:58 INFO SecurityManager: Changing modify acls to: root
    20/04/16 03:08:58 INFO SecurityManager: Changing view acls groups to:
    20/04/16 03:08:58 INFO SecurityManager: Changing modify acls groups to:
    20/04/16 03:08:58 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
    20/04/16 03:08:58 INFO Utils: Successfully started service 'sparkDriver' on port 40427.
    20/04/16 03:08:58 INFO SparkEnv: Registering MapOutputTracker
    20/04/16 03:08:58 INFO SparkEnv: Registering BlockManagerMaster
    20/04/16 03:08:58 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
    20/04/16 03:08:58 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
    20/04/16 03:08:58 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-e670d32e-e658-4f17-b8c1-0b6c1c3d7190
    20/04/16 03:08:58 INFO MemoryStore: MemoryStore started with capacity 408.9 MB
    20/04/16 03:08:59 INFO SparkEnv: Registering OutputCommitCoordinator
    20/04/16 03:08:59 INFO Utils: Successfully started service 'SparkUI' on port 4040.
    20/04/16 03:08:59 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.88.0.45:4040
    20/04/16 03:08:59 INFO SparkContext: Added JAR file:/benchmarks/graph-analytics_2.10-1.0.jar at spark://10.88.0.45:40427/jars/graph-analytics_2.10-1.0.jar with timestamp 1587006539192
    20/04/16 03:08:59 INFO Executor: Starting executor ID driver on host localhost
    20/04/16 03:08:59 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 41571.
    20/04/16 03:08:59 INFO NettyBlockTransferService: Server created on 10.88.0.45:41571
    20/04/16 03:08:59 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
    20/04/16 03:08:59 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.88.0.45, 41571, None)
    20/04/16 03:08:59 INFO BlockManagerMasterEndpoint: Registering block manager 10.88.0.45:41571 with 408.9 MB RAM, BlockManagerId(driver, 10.88.0.45, 41571, None)
    20/04/16 03:08:59 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.88.0.45, 41571, None)
    20/04/16 03:08:59 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.88.0.45, 41571, None)
    20/04/16 03:08:59 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 236.5 KB, free 408.7 MB)
    20/04/16 03:08:59 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 22.9 KB, free 408.6 MB)
    20/04/16 03:08:59 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.88.0.45:41571 (size: 22.9 KB, free: 408.9 MB)
    20/04/16 03:08:59 INFO SparkContext: Created broadcast 0 from textFile at GraphLoader.scala:73
    Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/data/edges.csv
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:287)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
    at org.apache.spark.rdd.RDD.count(RDD.scala:1157)
    at org.apache.spark.graphx.GraphLoader$.edgeListFile(GraphLoader.scala:94)
    at GraphAnalytics$.main(GraphAnalytics.scala:31)
    at GraphAnalytics.main(GraphAnalytics.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    20/04/16 03:08:59 INFO SparkContext: Invoking stop() from shutdown hook
    20/04/16 03:08:59 INFO SparkUI: Stopped Spark web UI at http://10.88.0.45:4040
    20/04/16 03:08:59 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
    20/04/16 03:08:59 INFO MemoryStore: MemoryStore cleared
    20/04/16 03:08:59 INFO BlockManager: BlockManager stopped
    20/04/16 03:08:59 INFO BlockManagerMaster: BlockManagerMaster stopped
    20/04/16 03:08:59 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
    20/04/16 03:08:59 INFO SparkContext: Successfully stopped SparkContext
    20/04/16 03:08:59 INFO ShutdownHookManager: Shutdown hook called
    20/04/16 03:08:59 INFO ShutdownHookManager: Deleting directory /tmp/spark-ea61dbe1-c5ac-4f8b-8986-e14408f86f44

    Any idea why this is failing with podman but works fine with docker?
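    One thing I notice: the actual failure is "Input path does not exist: file:/data/edges.csv", so the dataset volume apparently isn't attached when the benchmark starts. If I'm reading the CloudSuite docs right, graph-analytics is meant to run against a separate dataset container, roughly like this (image name taken from the CloudSuite docs; I'm assuming podman's --volumes-from behaves like Docker's):

    # Create (but don't start) the container that carries /data/edges.csv
    podman create --name twitter-data docker.io/cloudsuite/twitter-dataset-graph

    # Run the benchmark with that container's volumes attached
    podman run --rm --volumes-from twitter-data docker.io/cloudsuite/graph-analytics:latest

    So maybe the real question is whether the docker calls Phoronix makes to set up that volume are surviving the trip through podman.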
