Set MAVEN_OPTS to the value specified in the guide.

Type java and hit enter. If you receive the message "'java' is not recognized as an internal or external command", you need to configure your environment variables, setting JAVA_HOME and PATH to point to the path of the JDK.

Set SCALA_HOME. In Control Panel > System and Security > System, go to Advanced System Settings and add %SCALA_HOME%\bin to the PATH variable in Environment Variables.

Set SBT_HOME as an environment variable with its value as <<SBT PATH>>.
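The variable setup above can be sanity-checked with a small script. This is a hypothetical helper, not part of the guide; the variable names are the ones set in the steps above:

```python
import os

# Environment variables the steps above expect to be set.
REQUIRED_VARS = ["JAVA_HOME", "SCALA_HOME", "SBT_HOME"]

def missing_vars(environ):
    """Return the names of required variables that are absent or empty."""
    return [name for name in REQUIRED_VARS if not environ.get(name)]

if __name__ == "__main__":
    missing = missing_vars(os.environ)
    if missing:
        print("Not configured yet: " + ", ".join(missing))
    else:
        print("All required variables are set.")
```

Run it after editing the environment variables (open a fresh command prompt first, since changes are not visible to already-running shells).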
Download winutils.exe from the HortonWorks repo or a Git repo. Since we don't have a local Hadoop installation on Windows, we have to download winutils.exe and place it in a bin directory under a newly created Hadoop home directory. Set HADOOP_HOME = <<Hadoop home directory>> in the environment variables.

Set SPARK_HOME and add %SPARK_HOME%\bin to the PATH variable in Environment Variables.

Run spark-shell, then open http://localhost:4040/ in a browser to see the SparkContext web UI.

This guide uses spark-2.0.0-bin-hadoop2.7.tar.gz.
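Spark looks for winutils.exe at %HADOOP_HOME%\bin\winutils.exe, so the layout created above can be verified with a small sketch (a hypothetical helper, not part of the guide):

```python
import os

def winutils_path(hadoop_home):
    """Path where Spark expects winutils.exe: <HADOOP_HOME>\\bin\\winutils.exe."""
    return os.path.join(hadoop_home, "bin", "winutils.exe")

def has_winutils(hadoop_home):
    """True if winutils.exe is in place under the given Hadoop home directory."""
    return os.path.isfile(winutils_path(hadoop_home))

if __name__ == "__main__":
    hadoop_home = os.environ.get("HADOOP_HOME", "")
    print("winutils found" if has_winutils(hadoop_home) else "winutils missing")
```

If the check fails, the usual cause is placing winutils.exe directly in the Hadoop home directory instead of its bin subdirectory.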
Add to the PATH variable: ;%SPARK_HOME%\bin;

Set %HADOOP_HOME% so that it points to this directory, then add %HADOOP_HOME%\bin
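The PATH additions above can be sketched as follows. This is an illustrative helper under the assumption of a Windows-style semicolon-separated PATH; the %SPARK_HOME% and %HADOOP_HOME% references are left unexpanded, as in the guide:

```python
def extend_path(path, *entries):
    """Append the given directories to a Windows-style PATH string,
    skipping entries that are already present."""
    parts = [p for p in path.split(";") if p]
    for entry in entries:
        if entry not in parts:
            parts.append(entry)
    return ";".join(parts)

# The two additions described above:
new_path = extend_path(r"C:\Windows", r"%SPARK_HOME%\bin", r"%HADOOP_HOME%\bin")
```

In practice you make these edits through the Environment Variables dialog; the function only mirrors what that dialog does to the stored PATH string.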
to PATH.

Install the py4j Python package (if you're running a virtual environment, install it inside that environment). The guide assumes Spark was extracted to the ~/Downloads/spark-1.4.0 folder. Place load_spark_environment_variables.py in the default profile startup folder; after that, sc is your gateway towards everything sparkly.
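A startup script like load_spark_environment_variables.py typically just puts Spark's Python sources and the bundled py4j archive on sys.path. The sketch below is an assumption about its contents, not the guide's actual script; the py4j zip file name varies between Spark releases, so it is located with a glob:

```python
import glob
import os
import sys

def pyspark_paths(spark_home):
    """Entries to prepend to sys.path for a Spark install:
    the python/ sources plus the bundled py4j source zip (name varies by release)."""
    python_dir = os.path.join(spark_home, "python")
    py4j_zips = glob.glob(os.path.join(python_dir, "lib", "py4j-*-src.zip"))
    return [python_dir] + py4j_zips

if __name__ == "__main__":
    spark_home = os.environ.get("SPARK_HOME")
    if spark_home:
        for path in pyspark_paths(spark_home):
            if path not in sys.path:
                sys.path.insert(0, path)
```

With those entries on sys.path, an interactive session can import pyspark and create the SparkContext that the guide refers to as sc.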