Contents
- 1. RPC.getProxy and Client.call
- 2. Server
1. RPC.getProxy and Client.call
Before IPC takes place, the client needs to obtain an IPC interface instance through RPC.getProxy; when the interface instance is no longer needed, its resources must be released through RPC.stopProxy.
The client can obtain the proxy through either of the two methods RPC provides, getProxy and waitForProxy; here we take the concrete implementation of getProxy as the example.
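Before stepping through the implementation, a minimal client-side usage sketch may help. The MyProtocol interface, its versionID field, the hello method and the address below are hypothetical and only for illustration, not part of Hadoop:

import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;

// Hypothetical protocol interface; the name, versionID and hello() are made up for illustration.
interface MyProtocol {
    long versionID = 1L;
    String hello(String name);
}

public class ProxyDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        MyProtocol proxy = RPC.getProxy(MyProtocol.class, MyProtocol.versionID,
                new InetSocketAddress("localhost", 8020), conf);   // assumed address and port
        try {
            System.out.println(proxy.hello("didi"));   // each call goes through Client.call
        } finally {
            RPC.stopProxy(proxy);   // release the underlying Client and connection resources
        }
    }
}

This simple overload fills in defaults (the current user, the default socket factory, no explicit timeout) for the full parameter list discussed next.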
RPC.getProxy has three overloads; the parameters of the more complex one are as follows:
- protocol
- clientVersion
- addr
- conf
- factory
- rpcTimeout
- connectionRetryPolicy
- fallbackToSimpleAuth
These parameters describe the protocol interface, the interface version, the server address, the client configuration, the factory class used to create sockets, and so on.
RPC.getProxy directly calls the RPC.getProtocolProxy method, which looks like this:
public static <T> ProtocolProxy<T> getProtocolProxy(Class<T> protocol,
long clientVersion,
InetSocketAddress addr,
UserGroupInformation ticket,
Configuration conf,
SocketFactory factory,
int rpcTimeout,
RetryPolicy connectionRetryPolicy,
AtomicBoolean fallbackToSimpleAuth)
throws IOException {
if (UserGroupInformation.isSecurityEnabled()) {
SaslRpcServer.init(conf);
}
return getProtocolEngine(protocol, conf).getProxy(protocol, clientVersion,
addr, ticket, conf, factory, rpcTimeout, connectionRetryPolicy,
fallbackToSimpleAuth);
}
The RPC class provides the static setProtocolEngine method for configuring which serialization engine the RPC framework should use for a given protocol, and getProtocolEngine for retrieving that engine object.
In the source above, when obtaining the proxy object, getProtocolEngine(protocol, conf) is called first, and then RpcEngine.getProxy constructs the Proxy object. Protocol Buffers serialization has been supported since Hadoop 2.0, so the original Hadoop RPC was reworked to introduce the RpcEngine interface, which allows third-party serialization mechanisms to be plugged in; Hadoop itself only ships engines for Protocol Buffers and Writable serialization.
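As a side note, the engine a protocol uses can be pinned in the client configuration before the proxy is created. A minimal sketch, reusing the hypothetical MyProtocol interface from the earlier example:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.ProtobufRpcEngine;
import org.apache.hadoop.ipc.RPC;

public class EngineConfigDemo {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Sets the "rpc.engine.<protocol class name>" key that getProtocolEngine() reads,
        // so MyProtocol uses ProtobufRpcEngine instead of the default WritableRpcEngine.
        RPC.setProtocolEngine(conf, MyProtocol.class, ProtobufRpcEngine.class);
    }
}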
Let's look further at the source code that selects the serialization engine:
static synchronized RpcEngine getProtocolEngine(Class<?> protocol, Configuration conf) {
RpcEngine engine = PROTOCOL_ENGINES.get(protocol);
if (engine == null) {
// the engine defaults to WritableRpcEngine.class
Class<?> impl = conf.getClass(ENGINE_PROP+"."+protocol.getName(),
WritableRpcEngine.class);
engine = (RpcEngine)ReflectionUtils.newInstance(impl, conf);
PROTOCOL_ENGINES.put(protocol, engine);
}
return engine;
}
As we can see, Hadoop uses the WritableRpcEngine engine by default. Once the engine is obtained, the corresponding RpcEngine's getProxy method is called; here we use WritableRpcEngine as the example.
public <T> ProtocolProxy<T> getProxy(Class<T> protocol, long clientVersion,
InetSocketAddress addr, UserGroupInformation ticket,
Configuration conf, SocketFactory factory,
int rpcTimeout, RetryPolicy connectionRetryPolicy,
AtomicBoolean fallbackToSimpleAuth)
throws IOException {
if (connectionRetryPolicy != null) {
throw new UnsupportedOperationException("Not supported: connectionRetryPolicy=" + connectionRetryPolicy);
}
// only here is the native JDK dynamic proxy actually created
T proxy = (T) Proxy.newProxyInstance(protocol.getClassLoader(),
new Class[] { protocol }, new WritableRpcEngine.Invoker(protocol, addr, ticket, conf,
factory, rpcTimeout, fallbackToSimpleAuth));
return new ProtocolProxy<T>(protocol, proxy, true);
}
The InvocationHandler implementation passed in when the Proxy is instantiated is WritableRpcEngine's inner class Invoker.
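For background, this is the standard JDK dynamic-proxy mechanism: every method invoked on the proxy object is routed to InvocationHandler.invoke. A minimal standalone sketch, with a made-up Greeter interface that has nothing to do with Hadoop:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyMechanismDemo {
    // Hypothetical interface used only to demonstrate Proxy.newProxyInstance.
    interface Greeter {
        String hello(String name);
    }

    public static void main(String[] args) {
        InvocationHandler handler = new InvocationHandler() {
            @Override
            public Object invoke(Object proxy, Method method, Object[] methodArgs) {
                // In Hadoop's Invoker this is where the call gets serialized and sent to the server.
                return "invoked " + method.getName() + "(" + methodArgs[0] + ")";
            }
        };
        Greeter greeter = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(), new Class<?>[] { Greeter.class }, handler);
        System.out.println(greeter.hello("didi"));   // prints: invoked hello(didi)
    }
}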
When the client calls a protocol method on the proxy instance (i.e. proxy.hello("didi") in the demo), the invoke method of WritableRpcEngine.Invoker is called, as shown below:
private static class Invoker implements RpcInvocationHandler {
private Client.ConnectionId remoteId;
private Client client;
private boolean isClosed = false;
private final AtomicBoolean fallbackToSimpleAuth;
// constructor
public Invoker(Class<?> protocol, InetSocketAddress address, UserGroupInformation ticket, Configuration conf, SocketFactory factory, int rpcTimeout, AtomicBoolean fallbackToSimpleAuth) throws IOException {
this.remoteId = Client.ConnectionId.getConnectionId(address, protocol, ticket, rpcTimeout, conf);
this.client = CLIENTS.getClient(conf, factory);
this.fallbackToSimpleAuth = fallbackToSimpleAuth;
}
// the invoke method that gets executed
@Override
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
...
ObjectWritable value;
try {
value = (ObjectWritable)
client.call(RPC.RpcKind.RPC_WRITABLE, new WritableRpcEngine.Invocation(method, args), remoteId, fallbackToSimpleAuth);
} finally {
if (traceScope != null) traceScope.close();
}
...
return value.get();
}
...
}
The client field is instantiated in Invoker's constructor, obtained via CLIENTS.getClient(conf, factory).
In the invoke method, the call method of the Client class is invoked. The new WritableRpcEngine.Invocation(method, args) above implements the Writable interface; its purpose is to serialize the method and args.
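To make the serialization step concrete, here is a heavily simplified stand-in for such an Invocation: a Writable that carries a method name and a single String argument. The real WritableRpcEngine.Invocation handles arbitrary parameter types (via ObjectWritable); this is only a sketch and the class name is made up:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

public class SimpleInvocation implements Writable {
    private String methodName;
    private String argument;

    public SimpleInvocation() { }   // no-arg constructor needed for deserialization
    public SimpleInvocation(String methodName, String argument) {
        this.methodName = methodName;
        this.argument = argument;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        Text.writeString(out, methodName);   // serialize the method name
        Text.writeString(out, argument);     // serialize the (single) argument
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        methodName = Text.readString(in);    // the server side reads these back in the same order
        argument = Text.readString(in);
    }
}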
The concrete implementation of call lives in the Client class, as follows:
public Writable call(RpcKind rpcKind, Writable rpcRequest, Client.ConnectionId remoteId, int serviceClass, AtomicBoolean fallbackToSimpleAuth) throws IOException {
// wrap the remote-call information in a Client.Call object; each Call has a unique callId
Client.Call call = this.createCall(rpcKind, rpcRequest);
// get or create a Connection for remoteId and add the call to its calls hashtable
Client.Connection connection = this.getConnection(remoteId, call, serviceClass, fallbackToSimpleAuth);
// the code that sends the request to the server follows; analyzed later
...
}
In the call method, the remote-call information is first wrapped in a Client.Call object; getConnection then returns a Connection object and adds the wrapped Call to that Connection's calls hashtable.
private Connection getConnection(ConnectionId remoteId,
Call call, int serviceClass, AtomicBoolean fallbackToSimpleAuth)
throws IOException {
if (!running.get()) {
// the client is stopped
throw new IOException("The client is stopped");
}
Connection connection;
do {
synchronized (connections) {
connection = connections.get(remoteId);
// look the connection up in connections first; create it if it does not exist
if (connection == null) {
// new Connection only assigns the relevant fields; no real connection is established yet
connection = new Connection(remoteId, serviceClass);
connections.put(remoteId, connection);
}
}
} while (!connection.addCall(call));
// establish the connection
connection.setupIOstreams(fallbackToSimpleAuth);
return connection;
}
The connection is established in setupIOstreams, shown below:
private synchronized void setupIOstreams(
AtomicBoolean fallbackToSimpleAuth) {
if (socket != null || shouldCloseConnection.get()) {
return;
}
try {
if (LOG.isDebugEnabled()) {
LOG.debug("Connecting to "+server);
}
if (Trace.isTracing()) {
Trace.addTimelineAnnotation("IPC client connecting to " + server);
}
short numRetries = 0;
Random rand = null;
while (true) {
// create the socket and connect; the client clearly communicates over a plain socket
setupConnection();
InputStream inStream = NetUtils.getInputStream(socket);
OutputStream outStream = NetUtils.getOutputStream(socket);
writeConnectionHeader(outStream);
if (authProtocol == AuthProtocol.SASL) {
final InputStream in2 = inStream;
final OutputStream out2 = outStream;
UserGroupInformation ticket = remoteId.getTicket();
if (ticket.getRealUser() != null) {
ticket = ticket.getRealUser();
}
try {
authMethod = ticket
.doAs(new PrivilegedExceptionAction<AuthMethod>() {
@Override
public AuthMethod run()
throws IOException, InterruptedException {
return setupSaslConnection(in2, out2);
}
});
} catch (Exception ex) {
authMethod = saslRpcClient.getAuthMethod();
if (rand == null) {
rand = new Random();
}
handleSaslConnectionFailure(numRetries++, maxRetriesOnSasl, ex,
rand, ticket);
continue;
}
if (authMethod != AuthMethod.SIMPLE) {
// Sasl connect is successful. Let's set up Sasl i/o streams.
...
}
if (doPing) {
inStream = new PingInputStream(inStream);
}
this.in = new DataInputStream(new BufferedInputStream(inStream));
// SASL may have already buffered the stream
if (!(outStream instanceof BufferedOutputStream)) {
outStream = new BufferedOutputStream(outStream);
}
this.out = new DataOutputStream(outStream);
// write the connection context message
writeConnectionContext(remoteId, authMethod);
// update last activity time
touch();
if (Trace.isTracing()) {
Trace.addTimelineAnnotation("IPC client connected to " + server);
}
// start the receiver thread after the socket connection has been set up
start(); // start the Connection thread, which waits to receive the server's response
return;
}
} catch (Throwable t) {
if (t instanceof IOException) {
markClosed((IOException)t);
} else {
markClosed(new IOException("Couldn't set up IO streams", t));
}
close();
}
}
In the code above, the socket connection is made via the setupConnection method.
private synchronized void setupConnection() throws IOException {
short ioFailures = 0;
short timeoutFailures = 0;
while (true) {
try {
// create a network socket
this.socket = socketFactory.createSocket();
this.socket.setTcpNoDelay(tcpNoDelay);
this.socket.setKeepAlive(true);
/*
* Bind the socket to the host specified in the principal name of the
* client, to ensure Server matching address of the client connection
* to host name in principal passed.
*/
UserGroupInformation ticket = remoteId.getTicket();
if (ticket != null && ticket.hasKerberosCredentials()) {
KerberosInfo krbInfo =
remoteId.getProtocol().getAnnotation(KerberosInfo.class);
if (krbInfo != null && krbInfo.clientPrincipal() != null) {
String host =
SecurityUtil.getHostFromPrincipal(remoteId.getTicket().getUserName());
// If host name is a valid local address then bind socket to it
InetAddress localAddr = NetUtils.getLocalInetAddress(host);
if (localAddr != null) {
this.socket.bind(new InetSocketAddress(localAddr, 0));
}
}
}
NetUtils.connect(this.socket, server, connectionTimeout);
this.socket.setSoTimeout(soTimeout);
return;
} catch (ConnectTimeoutException toe) {
/* Check for an address change and update the local reference.
* Reset the failure counter if the address was changed
*/
if (updateAddress()) {
timeoutFailures = ioFailures = 0;
}
handleConnectionTimeout(timeoutFailures++,
maxRetriesOnSocketTimeouts, toe);
} catch (IOException ie) {
if (updateAddress()) {
timeoutFailures = ioFailures = 0;
}
handleConnectionFailure(ioFailures++, ie);
}
}
}
After the connection is established, the Connection thread is started and its run method runs, waiting for the server's response. The code:
public void run() {
if (LOG.isDebugEnabled())
LOG.debug(getName() + ": starting, having connections "
+ connections.size());
try {
while (waitForWork()) {//wait here for work - read or close connection
receiveRpcResponse();
}
} catch (Throwable t) {
// This truly is unexpected, since we catch IOException in receiveResponse
// -- this is only to be really sure that we don't leave a client hanging
// forever.
LOG.warn("Unexpected error reading responses on connection " + this, t);
markClosed(new IOException("Error reading responses", t));
}
close();
if (LOG.isDebugEnabled())
LOG.debug(getName() + ": stopped, remaining connections "
+ connections.size());
}
That completes the first pass through the getConnection logic; now back to the call code. The key step that follows is connection.sendRpcRequest(call), which sends the Call object to the server side, after which the calling thread blocks waiting for the server's response. The code is as follows:
public Writable call(RPC.RpcKind rpcKind, Writable rpcRequest, ConnectionId remoteId, int serviceClass, AtomicBoolean fallbackToSimpleAuth) throws IOException {
final Call call = createCall(rpcKind, rpcRequest);
Connection connection = getConnection(remoteId, call, serviceClass, fallbackToSimpleAuth);
try {
// send the remote-call information to the server side
connection.sendRpcRequest(call); // send the rpc request
} catch (RejectedExecutionException e) {
throw new IOException("connection has been closed", e);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new IOException(e);
}
synchronized (call) {
// check whether the call is done; wait for the server side's notify
while (!call.done) {
try {
// the current thread blocks here,
// waiting for receiveRpcResponse in the Connection thread to call call.notify()
call.wait(); // wait for the result
} catch (InterruptedException ie) {
Thread.currentThread().interrupt();
throw new InterruptedIOException("Call interrupted");
}
}
if (call.error != null) {
if (call.error instanceof RemoteException) {
call.error.fillInStackTrace();
throw call.error;
} else { // local exception
InetSocketAddress address = connection.getRemoteAddress();
throw NetUtils.wrapException(address.getHostName(),
address.getPort(),
NetUtils.getHostname(),
0,
call.error);
}
} else {
// return the server's result
return call.getRpcResponse();
}
}
}
public void sendRpcRequest(final Call call)
throws InterruptedException, IOException {
if (shouldCloseConnection.get()) {
return;
}
// Serialize the call to be sent. This is done from the actual
// caller thread, rather than the sendParamsExecutor thread,
// so that if the serialization throws an error, it is reported
// properly. This also parallelizes the serialization.
//
// Format of a call on the wire:
// 0) Length of rest below (1 + 2)
// 1) RpcRequestHeader - is serialized Delimited hence contains length
// 2) RpcRequest
//
// Items '1' and '2' are prepared here.
final DataOutputBuffer d = new DataOutputBuffer();
RpcRequestHeaderProto header = ProtoUtil.makeRpcRequestHeader(
call.rpcKind, OperationProto.RPC_FINAL_PACKET, call.id, call.retry,
clientId);
header.writeDelimitedTo(d);
call.rpcRequest.write(d);
synchronized (sendRpcRequestLock) {
Future<?> senderFuture = sendParamsExecutor.submit(new Runnable() {
@Override
public void run() {
try {
synchronized (Connection.this.out) {
if (shouldCloseConnection.get()) {
return;
}
if (LOG.isDebugEnabled())
LOG.debug(getName() + " sending #" + call.id);
byte[] data = d.getData();
int totalLength = d.getLength();
out.writeInt(totalLength); // Total Length
out.write(data, 0, totalLength);// RpcRequestHeader + RpcRequest
out.flush();
}
} catch (IOException e) {
// exception at this point would leave the connection in an
// unrecoverable state (eg half a call left on the wire).
// So, close the connection, killing any outstanding calls
markClosed(e);
} finally {
//the buffer is just an in-memory buffer, but it is still polite to
// close early
IOUtils.closeStream(d);
}
}
});
try {
// wait until the call has been fully sent before leaving this method and returning to call()
senderFuture.get();
} catch (ExecutionException e) {
Throwable cause = e.getCause();
// cause should only be a RuntimeException as the Runnable above
// catches IOException
if (cause instanceof RuntimeException) {
throw (RuntimeException) cause;
} else {
throw new RuntimeException("unexpected checked exception", cause);
}
}
}
}
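The blocking behavior in call (the caller parks in call.wait() until the Connection reader thread fills in the response and notifies it) is the standard Java monitor pattern. A minimal standalone sketch with a hypothetical PendingCall class:

public class CallWaitDemo {
    // Hypothetical stand-in for Client.Call, showing only the done/response/wait/notify handshake.
    static class PendingCall {
        private boolean done = false;
        private String response;

        synchronized void setResponse(String r) {   // called by the reader thread
            this.response = r;
            this.done = true;
            notifyAll();                            // wake up the caller blocked in await()
        }

        synchronized String await() throws InterruptedException {
            while (!done) {                         // loop guards against spurious wakeups
                wait();                             // caller blocks here, like Client.call
            }
            return response;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final PendingCall call = new PendingCall();
        // Simulates the Connection reader thread delivering the server's response.
        new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            call.setResponse("pong");
        }).start();
        System.out.println(call.await());           // prints "pong" once the response arrives
    }
}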
Next, we analyze the Server code.
2. Server
For now, see http://bigdatadecode.club/Hadoop%20RPC%20%E8%A7%A3%E6%9E%90.html
To be supplemented in a follow-up.