2.3.3 Configuring OmniRPC Backend
The current OmniRPC backend uses version 1.0, available at http://www.omni.hpcc.jp/OmniRPC/download.html.en.
This version requires that all Linux nodes harnessed by the backend have the same glibc as the PC used to compile YML.
This constraint comes from the name resolution of RPC invocations.
We offer a specific version of OmniRPC, available at http://www.omni.hpcc.jp/OmniRPC/download.html.en, whose generated stubs are glibc-independent.
However, when all nodes share a single version of glibc, we recommend using the official OmniRPC package.
OmniRPC has to be deployed both on the backend and on the workers' machines.
The documentation at http://www.omni.hpcc.jp/OmniRPC/htdocs/ clearly explains how to proceed.
Then, make sure that the SSH keys of the backend Linux account are generated and that the public key is appended to the .ssh/authorized_keys file of each worker's account.
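The key setup above can be sketched as a short script. The worker host names and the account name are placeholders (adapt them to your site), and for safety the script only prints the commands to run rather than executing them:

```shell
# Sketch of the SSH key setup between the backend account and the
# workers. Host and user names are placeholders; the script PRINTS
# the commands instead of running them, so you can review them first.
print_key_setup() {
    workers="worker1.example.org worker2.example.org"  # placeholder hosts
    key="$HOME/.ssh/id_rsa"
    # 1. Generate a passphrase-less key pair on the backend account.
    echo "ssh-keygen -t rsa -N '' -f $key"
    # 2. Append the public key to each worker account's
    #    .ssh/authorized_keys (ssh-copy-id performs the append).
    for w in $workers; do
        echo "ssh-copy-id -i $key.pub ymluser@$w"
    done
}
print_key_setup
```

Once the host list matches your deployment, run the printed commands (or drop the `echo`s) from the backend account.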
Once OmniRPC and YML are installed, the stub invoked by the backend has to be registered on each worker node.
A simple bash script, contrib/omnirpc/RegisterStub.sh, is available to simplify this step.
Next, you have to edit the YML configuration files.
- yml.xcf: This file lists the plugins used by
YML. Edit the group named Backend and set the module entry
to OmniRPCBackend and the init entry to omnirpc. Once this is
done, YML will use the OmniRPC backend in all its services.
- dr.xcf: The OmniRPC backend relies on the workers and, through
them, on the data repository, so you have to edit the data repository
configuration file. The important fields in this file are host and
port. Update these fields according to the host that will be running
your data repository, using its fully qualified name. This is
the same host as the one running the YML services; this host, or at
least the port used, must be reachable from all workers.
- omnirpc.xcf: This file lists all the configuration entries of
the OmniRPC backend.
The general group contains:
- the version of OmniRPC (currently 1.0),
- the path of the OmniRPC hostfile (see the explanations below),
- the maximum number of requests that can be managed by the OmniRPC backend,
- the resource group name: it denotes the kind of binary that has to be downloaded by a worker.
The path group contains all the paths required to download/upload on the DRServer: <install_directory>/var/yml/data/dr/<packin[packout|packadmin|OmniBinFiles]>. The entry named catalog denotes the path of the binaries catalog file: <install_directory>/var/yml/data/backend/omnirpc/catalog/
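As a sketch of the edits above, illustrative fragments of the three files might look as follows. The element and entry names are assumptions based on the descriptions (check the files shipped with your installation), and the host name, port, and paths are placeholders:

```xml
<!-- yml.xcf: select the OmniRPC backend (illustrative syntax). -->
<group name="Backend">
  <entry name="module">OmniRPCBackend</entry>
  <entry name="init">omnirpc</entry>
</group>

<!-- dr.xcf: fully qualified name of the host running the data
     repository and the YML services; the port must be reachable
     from all workers. -->
<group name="DataRepository">
  <entry name="host">frontend.mycluster.example.org</entry>
  <entry name="port">12345</entry>
</group>

<!-- omnirpc.xcf: the general group described above. -->
<group name="general">
  <entry name="version">1.0</entry>
  <entry name="hostfile">/usr/local/yml/etc/hosts.xml</entry>
  <entry name="maxRequests">64</entry>
  <entry name="resourceGroup">linux</entry>
</group>
```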
The OmniRPC hostfile (specified in omnirpc.xcf) defines which machines will host the invoked workers.
A sample hostfile is provided in contrib/omnirpc/hosts.xml.sample.
Besides, you may refer to the documentation at http://www.omni.hpcc.jp/OmniRPC/htdocs/ssh.html.
It explains in detail how OmniRPC deals with RPC calls and the role of all elements and attributes.
We briefly describe the main attributes:
- (host) name: the worker's host name
- (host) user: the worker's account name
- (agent) path: full path to the omrpc-agent binary (on the worker's host)
- (agent) invoker: protocol used by the backend to invoke the worker (choose ssh)
- (registry) path: full path to the invoked stub file (on the worker's host). This stub file is created when you register the stub on the worker's host
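Putting these attributes together, a minimal hostfile might look like the sketch below. The host name, account name, and paths are placeholders, and the exact element layout should be taken from contrib/omnirpc/hosts.xml.sample and the OmniRPC documentation:

```xml
<?xml version="1.0"?>
<OmniRpcConfig>
  <!-- One Host element per worker machine. -->
  <Host name="worker1.example.org" user="ymluser">
    <!-- Full path to omrpc-agent on the worker, invoked over ssh. -->
    <Agent invoker="ssh" path="/usr/local/omrpc/bin/omrpc-agent"/>
    <!-- Path to the stub file registered on the worker's host. -->
    <Registry path="/home/ymluser/yml/stubs"/>
  </Host>
</OmniRpcConfig>
```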