org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies

Flink Standalone startup fails with this error; Flink 1.11.2 needs to be integrated with Hadoop 3.

1. Error Summary

Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.

Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded. For a full list of supported file systems, please see https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/.

Both errors mean the same thing: Flink 1.11.2 cannot find the Hadoop classes it needs to talk to HDFS, so it must be integrated with Hadoop 3.
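For context, the HA storage path in the log (`hdfs://yfcluster/flink/ha/default`) comes from Flink's high-availability configuration, which is what forces Flink to open an HDFS filesystem at startup. A sketch of the relevant `flink-conf.yaml` entries consistent with that path (the ZooKeeper quorum hosts are an assumption, not taken from the original setup):

```yaml
# flink-conf.yaml (illustrative; storageDir matches the path in the error log)
high-availability: zookeeper
high-availability.storageDir: hdfs://yfcluster/flink/ha/
high-availability.zookeeper.quorum: bigdata1:2181,bigdata2:2181,bigdata3:2181
```

With `high-availability.storageDir` pointing at an `hdfs://` URI, any missing Hadoop dependency surfaces immediately when the JobManager or TaskManager starts.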

The full error log looks like this:

java.io.IOException: Could not create FileSystem for highly available storage path (hdfs://yfcluster/flink/ha/default)
        at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:103) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
        at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:89) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
        at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:117) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
        at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.<init>(TaskManagerRunner.java:133) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
        at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:306) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
        at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.lambda$runTaskManagerSecurely$2(TaskManagerRunner.java:330) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
        at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
        at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerSecurely(TaskManagerRunner.java:329) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
        at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerSecurely(TaskManagerRunner.java:314) [flink-dist_2.12-1.11.2.jar:1.11.2]
        at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.main(TaskManagerRunner.java:298) [flink-dist_2.12-1.11.2.jar:1.11.2]
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded. For a full list of supported file systems, please see https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/.
        at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:491) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
        at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
        at org.apache.flink.core.fs.Path.getFileSystem(Path.java:292) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
        at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:100) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
        ... 9 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
        at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:58) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
        at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:487) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
        at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
        at org.apache.flink.core.fs.Path.getFileSystem(Path.java:292) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
        at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:100) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
        ... 9 more

2. Solution

Add HADOOP_CLASSPATH to the system environment variables:

vim /etc/profile

export HADOOP_HOME="/moudle/hadoop-3.3.0"
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_CLASSPATH=`hadoop classpath`

Activate the updated environment variables:

source /etc/profile
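To confirm the variables actually took effect, a quick self-contained sanity check mirroring the `/etc/profile` entries above (the install path is this article's location; adjust to yours):

```shell
# Mirror the /etc/profile entries, then verify them in the current shell.
export HADOOP_HOME="/moudle/hadoop-3.3.0"
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"
# On a real node, `hadoop classpath` expands to the full list of Hadoop jars
# that Flink will pick up; keep any value already set.
export HADOOP_CLASSPATH="${HADOOP_CLASSPATH:-$("$HADOOP_HOME/bin/hadoop" classpath 2>/dev/null)}"
echo "HADOOP_HOME=$HADOOP_HOME"
```

If `echo $HADOOP_CLASSPATH` prints an empty string on a cluster node, Flink will still fail with the same error, so check that `hadoop` resolves on the PATH first.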

With the settings above in place, the error no longer occurs. If needed, you can additionally add the Hadoop dependency as a jar:
search Maven Central for flink-shaded-hadoop-3-uber, download flink-shaded-hadoop-3-uber-3.1.1.7.1.1.0-565-9.0.jar, and upload it to the /moudle/flink-1.11.2/lib directory.

The jar can also be downloaded from this link:
https://download.csdn.net/download/zhengzaifeidelushang/13117374
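If you go the jar route, note that the file must land in the lib directory on every node. A sketch of distributing it (hostnames, jar name, and Flink path are the ones used in this article; the `echo` is a dry run — swap it for the real `scp` once the jar is downloaded):

```shell
# Distribute the shaded Hadoop uber jar to Flink's lib directory on each node.
FLINK_LIB="/moudle/flink-1.11.2/lib"
JAR="flink-shaded-hadoop-3-uber-3.1.1.7.1.1.0-565-9.0.jar"
for host in bigdata1 bigdata2 bigdata3; do
  echo "scp $JAR $host:$FLINK_LIB/"   # dry run; remove echo to actually copy
done
```

Restart the cluster afterwards (`stop-cluster.sh && start-cluster.sh`) so the new jar is picked up.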

3. Start the Flink Standalone HA Cluster

[root@bigdata1 log]# start-cluster.sh
Starting HA cluster with 2 masters.
Starting standalonesession daemon on host bigdata1.
Starting standalonesession daemon on host bigdata2.
Starting taskexecutor daemon on host bigdata1.
Starting taskexecutor daemon on host bigdata2.
Starting taskexecutor daemon on host bigdata3.
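The "2 masters, 3 task executors" layout in the output is driven by Flink's host-list files. A sketch consistent with the log (in Flink 1.11 the worker file is still named conf/slaves; it was renamed to conf/workers in Flink 1.12):

```
# conf/masters — JobManager hosts and their Web UI ports
bigdata1:8081
bigdata2:8081

# conf/slaves — TaskManager hosts
bigdata1
bigdata2
bigdata3
```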

4. Check the Processes on the bigdata1, bigdata2, and bigdata3 Nodes

Only the Flink processes are listed:

[root@bigdata1 log]# jps
13558 TaskManagerRunner
13197 StandaloneSessionClusterEntrypoint

[root@bigdata2 ~]# jps
7428 TaskManagerRunner
7080 StandaloneSessionClusterEntrypoint


[root@bigdata3 ~]# jps
5408 TaskManagerRunner

5. View the Flink Cluster Web UI at http://bigdata1:8081

(Screenshot: Flink cluster Web UI overview page)