If you want to run Spark against the local file system, here is a simple way:

HADOOP_CONF_DIR=. MASTER=local spark-shell

If you don't set HADOOP_CONF_DIR, Spark picks up /etc/hadoop/conf, which may point to a cluster running in pseudo-distributed mode. When HADOOP_CONF_DIR points to a directory without any Hadoop configuration, the default file system is the local one. The same trick works for spark-submit.
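As a quick sanity check, assuming a typical Linux box where /etc/hosts exists (any local path will do), you can read a local file from the resulting spark-shell session. The explicit file:// scheme makes the intent unambiguous even if some Hadoop configuration ends up on the classpath:

// Run inside the spark-shell started above; sc is the predefined SparkContext.
// file:// forces resolution against the local disk rather than HDFS.
val lines = sc.textFile("file:///etc/hosts")
lines.take(3).foreach(println)   // print the first few lines to verify the read worked

The spark-submit invocation is analogous, e.g. HADOOP_CONF_DIR=. MASTER=local spark-submit --class your.Main your-app.jar, where your.Main and your-app.jar are placeholders for your own application.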