Apache SSHD is a 100% pure Java library supporting the SSH protocols on both the client and server side. The library can leverage Apache MINA, a scalable, high-performance asynchronous IO library. SSHD does not really aim to be a replacement for the SSH client or SSH server from Unix operating systems, but rather provides support for Java-based applications requiring SSH support.
- Java 8+ (as of version 1.3)
The code only requires the core abstract slf4j-api module. The actual implementation of the logging API can be selected from the many existing adaptors.
Required mainly for writing keys to PEM files or for special keys/ciphers/etc. that are not part of the standard Java Cryptography Extension. See Java Cryptography Architecture (JCA) Reference Guide for key classes and explanations as to how Bouncy Castle is plugged in (other security providers).
Caveat: If Bouncy Castle modules are registered, then the code will use its implementation of the ciphers, keys, signatures, etc. rather than the default JCE provided in the JVM.
Note:
- The security provider can also be registered for keys/ciphers/etc. that are already supported by the standard JCE, as a replacement for them.
- The BouncyCastle code can also be used to load keys from PEM files instead of, or in parallel with, the built-in code that already parses the standard PEM formats for the default JCE supported key types.
- One can use the BouncyCastleKeyPairResourceParser to load standard PEM files instead of the core one - either directly or via SecurityUtils#setKeyPairResourceParser for global usage - even without registering or enabling the provider.
- The required Maven module(s) are defined as optional, so they must be added as an explicit dependency in order to be included in the classpath:
<dependency>
<groupId>org.bouncycastle</groupId>
<artifactId>bcpg-jdk15on</artifactId>
</dependency>
<dependency>
<groupId>org.bouncycastle</groupId>
<artifactId>bcpkix-jdk15on</artifactId>
</dependency>
Optional dependency that enables choosing between NIO asynchronous sockets (the default - for improved performance) and "legacy" sockets. See the IoServiceFactoryFactory implementations, and specifically the DefaultIoServiceFactoryFactory, for the available options and how it can be configured to select among them. Note: the required Maven module(s) are defined as optional, so they must be added as an explicit dependency in order to be included in the classpath:
<dependency>
<groupId>org.apache.mina</groupId>
<artifactId>mina-core</artifactId>
<!-- see SSHD POM for latest tested known version of MINA core -->
<version>2.0.17</version>
</dependency>
NOTE: in order to use this library one must also add the sshd-mina artifact:
<dependency>
<groupId>org.apache.sshd</groupId>
<artifactId>sshd-mina</artifactId>
<version>...same as sshd-core...</version>
</dependency>
Required for supporting ssh-ed25519 keys and ed25519-sha-512 signatures. Note: the required Maven module(s) are defined as optional so must be added as an explicit dependency in order to be included in the classpath:
<!-- For ed25519 support -->
<dependency>
<groupId>net.i2p.crypto</groupId>
<artifactId>eddsa</artifactId>
</dependency>
The code contains support for reading ed25519 OpenSSH formatted private keys.
SSHD is designed to easily allow setting up and using an SSH client in a few simple steps. The client needs to be configured and then started before it can be used to connect to an SSH server. There are a few simple steps for creating a client instance - for more details refer to the SshClient class.
This is simply done by calling
SshClient client = SshClient.setUpDefaultClient();
The call will create an instance with a default configuration suitable for most use cases - including ciphers, compression, MACs, key exchanges, signatures, etc... If your code requires some special configuration, you can look at the code of setUpDefaultClient and checkConfig as a reference for the available options and configure the SSH client the way you need.
The SSH client contains some security-related configuration that one needs to consider:
client.setServerKeyVerifier(...); - sets up the server key verifier. As part of the SSH connection initialization protocol, the server proves its "identity" by presenting a public key. The client can examine the key (e.g., present it to the user via some UI) and decide whether to trust the server and continue with the connection setup. By default the client is initialized with an AcceptAllServerKeyVerifier that simply logs a warning that an un-verified server key was accepted. There are other out-of-the-box verifiers available in the code:
- RejectAllServerKeyVerifier - rejects all server keys - usually used in tests or as a fallback verifier if none of its predecessors validated the server key
- RequiredServerKeyVerifier - accepts only one specific server key (similar to certificate pinning for SSL)
- KnownHostsServerKeyVerifier - uses the known_hosts file to validate the server key. One can use this class plus some existing code to update the file when new servers are detected and their keys are accepted.
Of course, one can implement the verifier in whatever other manner is suitable for the specific code needs.
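For example, here is a minimal sketch of wiring up the known_hosts based verifier - the file location and the rejecting fallback are illustrative choices, not requirements:
Path knownHosts = Paths.get(System.getProperty("user.home"), ".ssh", "known_hosts");
// Reject any server key that is not already listed in the known_hosts file
client.setServerKeyVerifier(new KnownHostsServerKeyVerifier(RejectAllServerKeyVerifier.INSTANCE, knownHosts));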
One can set up the public/private keys to be used in case a password-less authentication is needed. By default, the client is configured to automatically detect and use the identity files residing in the user's ~/.ssh folder (e.g., id_rsa, id_ecdsa) and present them as part of the authentication process. Note: if the identity files are encrypted via a password, one must configure a FilePasswordProvider so that the code can decrypt them before using and presenting them to the server as part of the authentication process. Reading key files in PEM format (including encrypted ones) is supported by default for the standard keys and formats. Using additional non-standard special features requires that the Bouncy Castle supporting artifacts be available in the code's classpath. One can also read files in OpenSSH format without any specific extra artifacts (although for reading ed25519 keys one needs to add the EdDSA support artifacts). Note: for the time being, password encrypted ed25519 private key files are not supported.
This interface is required for full support of the keyboard-interactive authentication protocol as described in RFC 4256. The client can handle a simple password request from the server, but if more complex challenge-response interaction is required, then this interface must be provided - including support for SSH_MSG_USERAUTH_PASSWD_CHANGEREQ as described in RFC 4252 section 8.
While RFC-4256 support is the primary purpose of this interface, it can also be used to retrieve the server's welcome banner as described in RFC 4252 section 5.4 as well as its initial identification string as described in RFC 4253 section 4.2.
Once the SshClient instance is properly configured it needs to be start()-ed in order to connect to a server. Note: one can use a single SshClient instance to connect to multiple servers as well as modify the default configuration (ciphers, MACs, keys, etc.) on a per-session basis (see more in the Advanced usage section). Furthermore, one can change almost any configured SshClient parameter - although its influence on currently established sessions depends on the actual changed configuration. Here is what typical usage looks like:
SshClient client = SshClient.setUpDefaultClient();
// override any default configuration...
client.setSomeConfiguration(...);
client.setOtherConfiguration(...);
client.start();
// using the client for multiple sessions...
try (ClientSession session = client.connect(user, host, port).verify(...timeout...).getSession()) {
session.addPasswordIdentity(...password..); // for password-based authentication
// or
session.addPublicKeyIdentity(...key-pair...); // for password-less authentication
// Note: can add BOTH password AND public key identities - depends on the client/server security setup
session.auth().verify(...timeout...);
// start using the session to run commands, do SCP/SFTP, create local/remote port forwarding, etc...
}
// NOTE: this is just an example - one can open multiple concurrent sessions using the same client.
// No need to close the previous session before establishing a new one
try (ClientSession anotherSession = client.connect(otherUser, otherHost, port).verify(...timeout...).getSession()) {
anotherSession.addPasswordIdentity(...password..); // for password-based authentication
anotherSession.addPublicKeyIdentity(...key-pair...); // for password-less authentication
anotherSession.auth().verify(...timeout...);
// start using the session to run commands, do SCP/SFTP, create local/remote port forwarding, etc...
}
// exiting in an orderly fashion once the code no longer needs to establish SSH session
// NOTE: this can/should be done when the application exits.
client.stop();
SSHD is designed to be easily embedded in your application as an SSH server. The embedded SSH server needs to be configured before it can be started. Essentially, there are a few simple steps for creating the server - for more details refer to the SshServer class.
Creating an instance of SshServer is as simple as creating a new object:
SshServer sshd = SshServer.setUpDefaultServer();
It will configure the server with sensible defaults for ciphers, MACs, key exchange algorithms, etc... If different behavior is required, one should consult the code of the setUpDefaultServer as well as checkConfig methods as a reference for the available options and configure the SSH server the way it is needed.
There are a few things that need to be configured on the server before being able to actually use it:
- Port - sshd.setPort(22); - sets the listen port for the server instance. If not set explicitly, then a random free port is selected by the O/S. In any case, once the server is start()-ed one can query the instance as to the assigned port via sshd.getPort().
In this context, the listen bind address can also be specified explicitly via sshd.setHost(...some IP address...), which causes the server to bind to a specific network address rather than all addresses (the default). Using "0.0.0.0" as the bind address is also tantamount to binding to all addresses.
- KeyPairProvider - sshd.setKeyPairProvider(...); - sets the host's private keys used for key exchange with clients as well as representing the host's "identities". There are several choices - one can load keys from standard PEM files or generate them in the code. It's usually a good idea to save generated keys, so that if the SSHD server is restarted, the same keys will be used to authenticate the server and avoid the warning the clients might get if the host keys are modified (see the combined sketch after this list). Note: saving key files in PEM format requires that the Bouncy Castle supporting artifacts be available in the code's classpath.
- ShellFactory - That's the part one usually has to write to customize the SSHD server. The shell factory will be used to create a new shell each time a user logs in and wants to run an interactive shell. SSHD provides a simple implementation that you can use if you want. This implementation will create a process and delegate everything to it, so it's mostly useful to launch the OS native shell. E.g.,
sshd.setShellFactory(new ProcessShellFactory(new String[] { "/bin/sh", "-i", "-l" }));
There is an out-of-the-box InteractiveProcessShellFactory that detects the O/S and spawns the relevant shell. Note that the ShellFactory is not required. If none is configured, any request for an interactive shell will be denied to clients.
- CommandFactory - The CommandFactory provides the ability to run a single direct command at a time instead of an interactive session (it also uses a different channel type than shells). It can be used in addition to the ShellFactory.
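As a rough sketch (not a complete server), the settings above might be combined as follows - the port, key file location and shell command are arbitrary illustrative values, and the SimpleGeneratorHostKeyProvider constructor argument type (File vs. Path) differs between versions:
SshServer sshd = SshServer.setUpDefaultServer();
sshd.setPort(2222); // illustrative port - if not set, the O/S picks a free one
// Persist the auto-generated host key so clients see the same host identity across restarts
sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider(new File("hostkey.ser")));
sshd.setShellFactory(new ProcessShellFactory(new String[] { "/bin/sh", "-i", "-l" }));
sshd.start();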
SSHD provides a CommandFactory to support SCP that can be configured in the following way:
sshd.setCommandFactory(new ScpCommandFactory());
One can also use the ScpCommandFactory on top of one's own CommandFactory by placing the command factory as a delegate of the ScpCommandFactory. The ScpCommandFactory will intercept SCP commands and execute them by itself, while passing all other commands to the delegate CommandFactory:
sshd.setCommandFactory(new ScpCommandFactory(myCommandFactory));
Note that using a CommandFactory is also optional. If none is configured, any direct command sent by clients will be rejected.
The SSHD server security layer has to be customized to suit your needs. This layer is pluggable and uses the following interfaces:
- PasswordAuthenticator for password-based authentication - RFC 4252 section 8
- PublickeyAuthenticator for key-based authentication - RFC 4252 section 7
- HostBasedAuthenticator for host-based authentication - RFC 4252 section 9
- KeyboardInteractiveAuthenticator for user-interactive authentication - RFC 4256
These custom classes can be configured on the SSHD server using the respective setter methods:
sshd.setPasswordAuthenticator(new MyPasswordAuthenticator());
sshd.setPublickeyAuthenticator(new MyPublickeyAuthenticator());
sshd.setKeyboardInteractiveAuthenticator(new MyKeyboardInteractiveAuthenticator());
...etc...
Several useful implementations are available that can be used as-is or extended in order to provide some custom behavior. In any case, the default initializations are:
- DefaultAuthorizedKeysAuthenticator - uses the authorized_keys file the same way as the SSH daemon does
- DefaultKeyboardInteractiveAuthenticator - for password-based or interactive authentication. Note: this authenticator requires a PasswordAuthenticator to be configured since it delegates some of the functionality to it.
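For instance, a minimal sketch of a custom password authenticator - the hard-coded credentials below are purely illustrative and obviously not meant for production use:
sshd.setPasswordAuthenticator(new PasswordAuthenticator() {
    @Override
    public boolean authenticate(String username, String password, ServerSession session) {
        // Replace with a lookup against a real credentials store
        return "demo".equals(username) && "demo-password".equals(password);
    }
});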
SSH supports pluggable factories to define various configuration parts such as ciphers, digests, key exchange, etc... The list of supported implementations can be changed to suit one's needs, or one can also implement one's own factories.
Configuring supported factories can be done with the following code:
sshd.setCipherFactories(Arrays.asList(BuiltinCiphers.aes256ctr, BuiltinCiphers.aes192ctr, BuiltinCiphers.aes128ctr));
sshd.setKeyExchangeFactories(Arrays.asList(new MyKex1(), new MyKex2(), BuiltinKeyExchange.A, ...etc...));
One can configure other security components using built-in factories the same way. It is important to remember though that the order of the factories is important as it affects the key exchange phase where the client and server decide what options to use out of each peer's reported preferences.
Once we have configured the server, one need only call sshd.start();. Note: once the server is started, all of the configurations (except the port) can still be overridden while the server is running (caveat emptor). In such cases, only new clients that connect to the server after the change will be affected - with the exception of the negotiation options (keys, MACs, ciphers, etc...), which take effect the next time keys are re-exchanged and can therefore affect live sessions and not only new ones.
While the code supports BouncyCastle and EdDSA security providers out-of-the-box, it also provides a way to add security providers via the SecurityProviderRegistrar interface implementation. In order to add support for a new security provider one needs to implement the registrar interface and make the code aware of it.
The code contains built-in security provider registrars for BouncyCastle and EdDSA (a.k.a. ed25519). It automatically detects the existence of the required artifacts (since they are optional dependencies) and executes the respective security provider registration. This behavior is controlled by the org.apache.sshd.security.registrars system property. This property contains a comma-separated list of fully-qualified class names implementing the SecurityProviderRegistrar interface and assumed to contain a default public no-arguments constructor. The code automatically parses the list and attempts to instantiate and invoke the registrar.
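As a hedged illustration, the property could be set before any SSHD security code runs - the registrar class name below is hypothetical and must be replaced with a real implementation:
// Usually passed as a -D JVM argument; must be set before SecurityUtils is first used
System.setProperty("org.apache.sshd.security.registrars", "org.example.MySecurityProviderRegistrar");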
Note:
- The registration code automatically parses the configured registrars list and instantiates them. In this context, one can use the special none value to indicate that the code should not attempt to automatically register the default providers.
- A registrar instance might be created but eventually discarded and not invoked if it is disabled, unsupported or already registered programmatically via SecurityUtils#registerSecurityProvider.
- The registration attempt is a one-shot deal - i.e., once the registrars list is parsed and successfully resolved, any modifications to the registered security providers must be done programmatically. One can call SecurityUtils#isRegistrationCompleted() to find out if the registration phase has already been executed.
- The registrars are consulted in the same order as they were initially registered - either programmatically or via the system property configuration. Therefore, if two or more registrars support the same algorithm, then the earlier registered one will be used.
- If no matching registrar was found, then the default security provider is used. If none is set, the JCE defaults are invoked. The default security provider can be configured either via the org.apache.sshd.security.defaultProvider system property or by programmatically invoking SecurityUtils#setDefaultProviderChoice. Note: if the system property option is used, then it is assumed to contain a security provider's name (rather than its Provider class name).
- If programmatic selection of the default security provider choice is required, then the code flow must ensure that SecurityUtils#setDefaultProviderChoice is called before any security entity (e.g., ciphers, keys, etc...) is required. Theoretically, one could change the choice after ciphers have been requested but before keys were generated, but it is dangerous and may yield unpredictable behavior.
See the AbstractSecurityProviderRegistrar helper class for a default implementation of most of the required functionality, as well as the existing implementations for BouncyCastle and EdDSA for examples of how to implement it. The most important issues to consider when adding such an implementation are:
- Try using the reflection API to detect the existence of the registered provider class and/or instantiate it. The main reason for this recommendation is that it isolates the code from a direct dependency on the provider's classes and makes class loading issues less likely.
- Decide whether to use the provider's name or instance when creating security related entities such as ciphers, keys, etc... Note: the default preference is to use the provider name, thus registering via a Security.addProvider call. In order to change that, either register the instance yourself or override the isNamedProviderUsed method. In this context, cache the generated Provider instance if the instance rather than the name is used. Note: using only the provider instance instead of the name is a rather new feature and has not been fully tested. It is possible though to decide and use it anyway as long as it can be configurably disabled.
- The default implementation provides fine-grained control over the declared supported security entities - ciphers, signatures, key generators, etc... By default, this is done by consulting a system property composed of org.apache.sshd.security.provider, followed by the security provider name and the relevant security entity - e.g., org.apache.sshd.security.provider.BC.KeyFactory is assumed to contain a comma-separated list of supported KeyFactory algorithms.
Note:
- The same naming convention can be used to enable/disable the registrar - even if supported - e.g., org.apache.sshd.security.provider.BC.enabled=false disables the BouncyCastle registrar.
- One can use all or * to specify that all entities of the specified type are supported - e.g., org.apache.sshd.security.provider.BC.MessageDigest=all. In this context, one can override the getDefaultSecurityEntitySupportValue method if no fine-grained configuration is required per entity type.
- The result of an isXxxSupported call is/should be cached (see AbstractSecurityProviderRegistrar).
- For ease of implementation, all support query calls are routed to the isSecurityEntitySupported method so that one can concentrate all the configuration in a single method. This is done for convenience reasons - the code will invoke the correct support query as per the type of entity it needs. E.g., if it needs a cipher, it will invoke isCipherSupported - which by default will invoke isSecurityEntitySupported with the Cipher class as its argument.
- Specifically for ciphers, the argument to the support query contains a transformation (e.g., AES/CBC/NoPadding), so one should take that into account when parsing the input argument to decide which cipher is referenced - see the SecurityProviderRegistrar.getEffectiveSecurityEntityName(Class<?>, String) helper method.
This interface is used to provide "file"-related services - e.g., SCP and SFTP - although it can be used for remote command execution as well (see the section about commands and the Aware interfaces). The default implementation is a NativeFileSystemFactory that simply exposes the FileSystems.getDefault() result. However, for "sandboxed" implementations one can use the VirtualFileSystemFactory. This implementation provides a way to decide what the logged-in user's file system view is and then uses a RootedFileSystemProvider in order to provide a "sandboxed" file system where the logged-in user can access only the files under the specified root and no others.
SshServer sshd = SshServer.setUpDefaultServer();
sshd.setFileSystemFactory(new VirtualFileSystemFactory() {
@Override
protected Path computeRootDir(Session session) throws IOException {
String username = session.getUsername(); // or any other session related parameter
Path path = resolveUserHome(username);
return path;
}
});
The usage of a FileSystemFactory is not limited though to the server only - the ScpClient implementation also uses it in order to retrieve the local path for upload/download-ing files/folders. This means that the client side can also be tailored to present different views for different clients.
The framework requires from time to time spawning some threads in order to function correctly - e.g., commands, the SFTP subsystem and port forwarding (among others) require such support. By default, the framework will allocate an ExecutorService for each specific purpose and then shut it down when the module has completed its work - e.g., the session was closed. Users may provide their own ExecutorService(s) instead of the internally auto-allocated ones - e.g., in order to control the maximum number of spawned threads, the stack size, thread tracking, etc... If this is done, then one must also provide the shutdownOnExit value indicating to the overridden module whether to shut down the service once it is no longer necessary.
/*
* An example for SFTP - there are other such locations. By default,
* the SftpSubsystem implementation creates a single-threaded executor
* for each session, uses it to spawn the SFTP command handler and shuts
* it down when the command is destroyed
*/
SftpSubsystemFactory factory = new SftpSubsystemFactory.Builder()
.withExecutorService(mySuperDuperExecutorService)
.withShutdownOnExit(false) // I will take care of shutting it down
.build();
SshServer sshd = SshServer.setUpDefaultServer();
sshd.setSubsystemFactories(Collections.<NamedFactory<Command>>singletonList(factory));
All command execution - be it shell or single command - boils down to a Command instance being created, initialized and then started. In this context, it is crucial to notice that the command's start() method implementation must spawn a new thread - even for the simplest or most trivial command. Any attempt to communicate via the established session will most likely fail since the packets processing thread may be blocked by this call. Note: one might get away with executing some command in the context of the thread that called the start() method, but it is extremely dangerous and should not be attempted.
The command execution code can communicate with the peer client via the input/output/error streams that are provided as part of the command initialization process. Once the command is done, it should call the ExitCallback#onExit method to indicate that it has finished. The framework will then take care of propagating the exit code, closing the session and (eventually) destroy()-ing the command. Note: the command may not assume that it is done until its destroy() method is called - i.e., it should not release or null-ify any of its internal state even if onExit() was called.
Upon calling the onExit method the code sends an SSH_MSG_CHANNEL_EOF message, and the provided result status code is sent as an exit-status message as described in RFC 4254 - section 6.10. The provided message is simply logged at DEBUG level.
// A simple command implementation example
class MyCommand implements Command, Runnable {
    private InputStream in;
    private OutputStream out, err;
    private ExitCallback callback;

    public MyCommand() {
        super();
    }

    @Override
    public void setInputStream(InputStream in) {
        this.in = in;
    }

    @Override
    public void setOutputStream(OutputStream out) {
        this.out = out;
    }

    @Override
    public void setErrorStream(OutputStream err) {
        this.err = err;
    }

    @Override
    public void setExitCallback(ExitCallback callback) {
        this.callback = callback;
    }

    @Override
    public void start(Environment env) throws IOException {
        // Must not block the caller - handle the command on a separate thread
        spawnHandlerThread(this);
    }

    @Override
    public void destroy() {
        // Release any internal state once the framework is done with the command
    }

    @Override
    public void run() {
        while (true) {
            try {
                String cmd = readCommand(in);
                if ("exit".equals(cmd)) {
                    break;
                }
                handleCommand(cmd, out);
            } catch (Exception e) {
                writeError(err, e);
                callback.onExit(-1, e.getMessage());
                return;
            }
        }
        // Signal successful completion only after the command loop exits
        callback.onExit(0);
    }
}
Once created, the Command instance is checked to see if it implements one of the Aware interfaces that enable injecting some dynamic data before the command is start()-ed.
- SessionAware - Injects the Session instance through which the command request was received.
- ChannelSessionAware - Injects the ChannelSession instance through which the command request was received.
- FileSystemAware - Injects the result of consulting the FileSystemFactory as to the FileSystem associated with this command.
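For illustration, a sketch of a command that receives the server session via SessionAware (building on the hypothetical MyCommand example above):
class MySessionAwareCommand extends MyCommand implements SessionAware {
    private ServerSession session;

    @Override
    public void setSession(ServerSession session) {
        // Invoked by the framework before start() is called
        this.session = session;
    }
}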
Some commands may send/receive large amounts of data over their STDIN/STDOUT/STDERR streams. Since (by default) the sending mechanism in SSHD is asynchronous, it may cause out-of-memory errors due to one side (client/server) generating SSH_MSG_CHANNEL_DATA or SSH_MSG_CHANNEL_EXTENDED_DATA at a much higher rate than the other side can consume. This leads to a build-up of a packets backlog that eventually consumes all available memory (as described in SSHD-754 and SSHD-768). As of version 1.7, one can register a ChannelStreamPacketWriterResolver at the client/server/session/channel level that enables the user to replace the raw channel with some throttling mechanism that will be used for stream packets. Such an (experimental) example is the ThrottlingPacketWriter available in the sshd-contrib module. Note: if the ChannelStreamPacketWriterResolver returns a wrapper instance instead of a Channel then it will be closed automatically when the stream using it is closed.
Besides the ScpTransferEventListener, the SCP module also uses a ScpFileOpener instance in order to access the local files - client or server-side. The default implementation simply opens an InputStream or OutputStream on the requested local path. However, the user may replace it and intercept the calls - e.g., for logging, for wrapping/filtering the streams, etc... Note: due to SCP protocol limitations one cannot change the size of the input/output since it is passed as part of the command before the file opener is invoked - so there are a few limitations on what one can do within this interface implementation.
Both client-side and server-side SFTP are supported. Starting from SSHD 1.8.0, the SFTP related code is located in the sshd-sftp artifact, so you need to add this additional dependency to your Maven project:
<dependency>
<groupId>org.apache.sshd</groupId>
<artifactId>sshd-sftp</artifactId>
<version>...same as sshd-core...</version>
</dependency>
On the server side, the following code needs to be added:
SftpSubsystemFactory factory = new SftpSubsystemFactory.Builder()
.build();
server.setSubsystemFactories(Collections.singletonList(factory));
SftpClient client = SftpClientFactory.instance().createSftpClient(session);
See above...
In addition to the SftpEventListener there are a few more SFTP-related special interfaces and modules.
The SFTP subsystem code supports versions 3-6 (inclusive), and by default attempts to negotiate the highest possible one - on both client and server code. The user can intervene and force a specific version or a narrower range.
SftpVersionSelector myVersionSelector = new SftpVersionSelector() {
@Override
public int selectVersion(ClientSession session, int current, List<Integer> available) {
int selectedVersion = ...run some logic to decide...;
return selectedVersion;
}
};
try (ClientSession session = client.connect(user, host, port).verify(timeout).getSession()) {
session.addPasswordIdentity(password);
session.auth().verify(timeout);
try (SftpClient sftp = SftpClientFactory.instance().createSftpClient(session, myVersionSelector)) {
... do SFTP related stuff...
}
}
On the server side, version selection restriction is more complex - please remember that the client chooses the version, and all we can do at the server is require a specific version via the SftpSubsystem#SFTP_VERSION configuration key. For more advanced restrictions one needs to sub-class SftpSubsystem and provide a non-default SftpSubsystemFactory that uses the sub-classed code.
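For example, a hedged sketch of pinning the server to a single protocol version via the aforementioned configuration key (the chosen version is illustrative):
// Clients that cannot negotiate version 3 will be rejected
PropertyResolverUtils.updateProperty(server, SftpSubsystem.SFTP_VERSION, 3);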
The code creates SftpClient-s and SftpFileSystem-s using a default built-in SftpClientFactory instance (see DefaultSftpClientFactory). Users may choose to use a custom factory in order to provide their own implementations - e.g., in order to override some default behavior:
SshClient client = ... setup client...
try (ClientSession session = client.connect(user, host, port).verify(timeout).getSession()) {
session.addPasswordIdentity(password);
session.auth().verify(timeout);
// User-specific factory
try (SftpClient sftp = MySpecialSessionSftpClientFactory.INSTANCE.createSftpClient(session)) {
... instance created through SpecialSessionSftpClientFactory ...
}
}
The code automatically registers the SftpFileSystemProvider as the handler for sftp:// URL(s). Such URLs are interpreted as remote file locations and automatically exposed to the user as Path objects. In effect, this allows the code to "mount" a remote directory via SFTP and treat it as if it were local, using standard java.nio calls like any "ordinary" file system.
// Direct URI
Path remotePath = Paths.get(new URI("sftp://user:password@host/some/remote/path"));
// Releasing the file-system once no longer necessary
try (FileSystem fs = remotePath.getFileSystem()) {
... work with the remote path...
}
// "Mounting" a file system
URI uri = SftpFileSystemProvider.createFileSystemURI(host, port, username, password);
try (FileSystem fs = FileSystems.newFileSystem(uri, Collections.<String, Object>emptyMap())) {
Path remotePath = fs.getPath("/some/remote/path");
...
}
// Full programmatic control
SshClient client = ...setup and start the SshClient instance...
SftpFileSystemProvider provider = new SftpFileSystemProvider(client);
URI uri = SftpFileSystemProvider.createFileSystemURI(host, port, username, password);
try (FileSystem fs = provider.newFileSystem(uri, Collections.<String, Object>emptyMap())) {
Path remotePath = fs.getPath("/some/remote/path");
}
The obtained Path instance can be used in exactly the same way as any other "regular" one:
try (InputStream input = Files.newInputStream(remotePath)) {
...read from remote file...
}
try (DirectoryStream<Path> ds = Files.newDirectoryStream(remoteDir)) {
for (Path remoteFile : ds) {
if (Files.isRegularFile(remoteFile)) {
System.out.println("Delete " + remoteFile + " size=" + Files.size(remoteFile));
Files.delete(remoteFile);
} else if (Files.isDirectory(remoteFile)) {
System.out.println(remoteFile + " - directory");
}
}
}
It is highly recommended to close() the mounted file system once no longer necessary in order to release the associated SFTP session sooner rather than later - e.g., via a try-with-resource code block.
When "mounting" a new file system one can provide configuration parameters using either the
environment map in the FileSystems#newFileSystem
method or via the URI query parameters. See the SftpFileSystemProvider
for the available
configuration keys and values.
// Using explicit parameters
Map<String, Object> params = new HashMap<>();
params.put("param1", value1);
params.put("param2", value2);
...etc...
URI uri = SftpFileSystemProvider.createFileSystemURI(host, port, username, password);
try (FileSystem fs = FileSystems.newFileSystem(uri, params)) {
Path remotePath = fs.getPath("/some/remote/path");
... work with the remote path...
}
// Using URI parameters
Path remotePath = Paths.get(new URI("sftp://user:password@host/some/remote/path?param1=value1&param2=value2..."));
// Releasing the file-system once no longer necessary
try (FileSystem fs = remotePath.getFileSystem()) {
... work with the remote path...
}
Note: if both options are used then the URI parameters override the environment ones
Map<String, Object> params = new HashMap<>();
params.put("param1", value1);
params.put("param2", value2);
// The value of 'param1' is overridden in the URI
try (FileSystem fs = FileSystems.newFileSystem(new URI("sftp://user:password@host/some/remote/path?param1=otherValue1"), params)) {
Path remotePath = fs.getPath("/some/remote/path");
... work with the remote path...
}
One can override the default SftpFileSystemAccessor and thus be able to track all opened files and folders throughout the SFTP server subsystem code. The accessor is registered/overwritten via the SftpSubsystemFactory:
SftpSubsystemFactory factory = new SftpSubsystemFactory.Builder()
.withFileSystemAccessor(new MySftpFileSystemAccessor())
.build();
server.setSubsystemFactories(Collections.singletonList(factory));
By default, the SFTP client uses UTF-8 to encode/decode any referenced file/folder name. However, some servers do not properly encode such names, and thus the names "visible" to the client become corrupted, or even worse - cause an exception upon decoding attempt. The SftpClient exposes a get/setNameDecodingCharset method which enables the user to modify the charset - even while the SFTP session is in progress - e.g.:
try (SftpClient client = ...obtain an instance...) {
client.setNameDecodingCharset(Charset.forName("ISO-8859-8"));
for (DirEntry entry : client.readDir(...some path...)) {
...handle entry assuming ISO-8859-8 encoded names...
}
client.setNameDecodingCharset(Charset.forName("ISO-8859-4"));
for (DirEntry entry : client.readDir(...some other path...)) {
...handle entry assuming ISO-8859-4 encoded names...
}
}
The initial charset can be pre-configured on the client/session by using the sftp-name-decoding-charset property - if none is specified then UTF-8 is used. Note: the value can be a charset name or a java.nio.charset.Charset instance - e.g.:
SshClient client = ... setup/obtain an instance...
// default for ALL SFTP clients obtained through this client
PropertyResolverUtils.updateProperty(client, SftpClient.NAME_DECODING_CHARSET, "ISO-8859-8");
try (ClientSession session = client.connect(...).verify(...).getSession()) {
// default for ALL SFTP clients obtained through the session - overrides client setting
PropertyResolverUtils.updateProperty(session, SftpClient.NAME_DECODING_CHARSET, "ISO-8859-4");
session.auth().verify(...);
try (SftpClient sftp = SftpClientFactory.instance().createSftpClient(session)) {
for (DirEntry entry : sftp.readDir(...some path...)) {
...handle entry assuming ISO-8859-4 (inherited from the session) encoded names...
}
// override the inherited default from the session
sftp.setNameDecodingCharset(Charset.forName("ISO-8859-1"));
for (DirEntry entry : sftp.readDir(...some other path...)) {
...handle entry assuming ISO-8859-1 encoded names...
}
}
}
Both client and server support several of the SFTP extensions specified in various drafts:
- supported - DRAFT 05 - section 4.4
- supported2 - DRAFT 13 - section 5.4
- versions - DRAFT 09 - section 4.6
- vendor-id - DRAFT 09 - section 4.4
- acl-supported - DRAFT 11 - section 5.4
- newline - DRAFT 09 - section 4.3
- md5-hash, md5-hash-handle - DRAFT 09 - section 9.1.1
- check-file-handle, check-file-name - DRAFT 09 - section 9.1.2
- copy-file, copy-data - DRAFT 00 - sections 6, 7
- space-available - DRAFT 09 - section 9.3
Furthermore several OpenSSH SFTP extensions are also supported:
- fsync@openssh.com
- fstatvfs@openssh.com
- hardlink@openssh.com
- posix-rename@openssh.com
- statvfs@openssh.com
On the server side, the reported standard extensions are configured via the SftpSubsystem.CLIENT_EXTENSIONS_PROP configuration key, and the OpenSSH ones via the SftpSubsystem.OPENSSH_EXTENSIONS_PROP.
On the client side, all the supported extensions are classes that implement SftpClientExtension. These classes can be used to query the client whether the remote server supports the specific extension and then obtain a parser for its contents. Users can easily add support for more extensions in a similar manner as the existing ones by implementing an appropriate ExtensionParser and then registering it at the ParserUtils - see the existing ones for details on how this can be achieved.
// proprietary/special extension parser
ParserUtils.registerExtension(new MySpecialExtension());
try (ClientSession session = client.connect(username, host, port).verify(timeout).getSession()) {
session.addPasswordIdentity(password);
session.auth().verify(timeout);
try (SftpClient sftp = SftpClientFactory.instance().createSftpClient(session)) {
Map<String, byte[]> extensions = sftp.getServerExtensions();
// Key=extension name, value=registered parser instance
Map<String, ?> data = ParserUtils.parse(extensions);
for (Map.Entry<String, ?> de : data.entrySet()) {
String extName = de.getKey();
Object extValue = de.getValue();
if (SftpConstants.EXT_ACL_SUPPORTED.equalsIgnoreCase(extName)) {
AclCapabilities capabilities = (AclCapabilities) extValue;
...see what other information can be gleaned from it...
} else if (SftpConstants.EXT_VERSIONS.equalsIgnoreCase(extName)) {
Versions versions = (Versions) extValue;
...see what other information can be gleaned from it...
} else if ("my-special-extension".equalsIgnoreCase(extName)) {
MySpecialExtension special = (MySpecialExtension) extValue;
...see what other information can be gleaned from it...
} // ...etc....
}
}
}
One can skip all the conditional code if a specific known extension is required:
try (ClientSession session = client.connect(username, host, port).verify(timeout).getSession()) {
session.addPasswordIdentity(password);
session.auth().verify(timeout);
try (SftpClient sftp = SftpClientFactory.instance().createSftpClient(session)) {
// Returns null if extension is not supported by remote server
SpaceAvailableExtension space = sftp.getExtension(SpaceAvailableExtension.class);
if (space != null) {
...use it...
}
}
}
If an exception is thrown during processing of an SFTP command, then the exception is translated into an SSH_FXP_STATUS message using a registered SftpErrorStatusDataHandler. The default implementation provides a short description of the failure based on the thrown exception type. However, users may override it when creating the SftpSubsystemFactory and provide their own codes and/or messages - e.g., for debugging one can register a DetailedSftpErrorStatusDataHandler (see sshd-contrib) that "leaks" more information in the generated message.
Port forwarding as specified in RFC 4254 - section 7 is fully supported by the client and server. From the client side, this capability is exposed via the start/stopLocal/RemotePortForwarding methods. The key player in this capability is the configured ForwardingFilter that controls this feature - on both sides - client and server. By default, this capability is disabled - i.e., the user must provide an implementation and call the appropriate setForwardingFilter method on the client/server.
The code contains 2 simple implementations - an accept-all and a reject-all one that can be used for these trivial policies. Note: setting a null filter is equivalent to rejecting all such attempts.
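For example, a minimal sketch of enabling all forwarding on the server using the built-in accept-all filter (the class name assumes the out-of-the-box implementation mentioned above):
// WARNING: accepts every forwarding request - restrict this in real deployments
sshd.setForwardingFilter(AcceptAllForwardingFilter.INSTANCE);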
The code implements a SOCKS proxy for versions 4 and 5. The proxy capability is invoked via the start/stopDynamicPortForwarding methods.
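A hedged usage sketch, given an authenticated ClientSession - the local SOCKS listen port (1080) is an arbitrary illustrative choice:
// Start a local SOCKS4/5 proxy that tunnels through the session
SshdSocketAddress socksAddress = session.startDynamicPortForwarding(new SshdSocketAddress("localhost", 1080));
// ... point applications at the returned proxy address ...
session.stopDynamicPortForwarding(socksAddress);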
The code provides to some extent an SSH proxy agent via the available SshAgentFactory implementations. As of the latest version both the Secure Shell Authentication Agent Protocol Draft 02 and its OpenSSH equivalent are supported. Note: in order to support this feature the Apache Portable Runtime Library needs to be added to the Maven dependencies:
<dependency>
<groupId>tomcat</groupId>
<artifactId>tomcat-apr</artifactId>
</dependency>
Note: Since the portable runtime library uses native code, one needs to also make sure that the appropriate .dll/.so library is available in the LD_LIBRARY_PATH.
The code's behavior is highly customizable, not only via non-default implementations of interfaces, but also via the parameters that govern its behavior - e.g., timeouts, min./max. values, allocated memory size, etc... All the customization related code flow implements a hierarchical PropertyResolver inheritance model where the "closest" entity is consulted first, then its "owner", and so on until the required value is found. If the entire hierarchy yielded no specific result, then some pre-configured default is used. E.g., if a channel requires some parameter in order to decide how to behave, then the following configuration hierarchy is consulted:
- The channel-specific configuration
- The "owning" session configuration
- The "owning" client/server instance configuration
- The system properties - Note: any configuration value required by the code can be provided via a system property bearing the org.apache.sshd.config prefix - see SyspropsMapWrapper for the implementation details.
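As an illustrative sketch (the property name here is hypothetical), a value can be set at any level of this hierarchy and the "closest" one wins:
// Default for every session created by this client/server instance
PropertyResolverUtils.updateProperty(client, "some-hypothetical-property", 100);
// Override for one particular session - consulted before the client-level value
PropertyResolverUtils.updateProperty(session, "some-hypothetical-property", 7);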
As previously mentioned, this hierarchical lookup model is not limited to "simple" configuration values (strings, integers, etc.), but is used also for interfaces/implementations such as cipher/MAC/compression/authentication/etc. factories - the exception being that the system properties are not consulted in such a case. This code behavior provides highly customizable fine-grained/targeted control of the code's behavior - e.g., one could impose usage of specific ciphers/authentication methods/etc. or present different public key "identities"/welcome banner behavior/etc., based on address, username or whatever other decision parameter is deemed relevant by the user's code. This can be done on both sides of the connection - client or server. E.g., the client could present different keys based on the server's address/identity string/welcome banner, or the server could accept only specific types of authentication methods based on the client's address/username/etc... This can be done in conjunction with the usage of the various EventListener-s provided by the code (see below).
One of the code locations where this behavior can be leveraged is when the server provides file-based services (SCP, SFTP) in order to provide a different/limited view of the available files based on the username - see the section dealing with FileSystemFactory-ies.
According to RFC 4252 - section 5.4 the server may send a welcome banner message during the authentication process. Both the message contents and the phase at which it is sent can be configured/customized.
The welcome banner contents are controlled by the ServerAuthenticationManager.WELCOME_BANNER configuration key - there are several possible values for this key:
- A simple string - in which case its contents are the welcome banner.
- A file URI - or a string starting with "file:/" followed by the file path - see below.
- A URL - or a string containing "://" - in which case the URL#openStream() method is invoked and its contents are read.
- A File or a Path - in this case, the file's contents are re-loaded every time it is required and sent as the banner contents.
- The special value ServerAuthenticationManager.AUTO_WELCOME_BANNER_VALUE, which generates a combined "random art" of all the server's keys as described in the article by Perrig A. and Song D. - Hash Visualization: a New Technique to improve Real-World Security - International Workshop on Cryptographic Techniques and E-Commerce (CrypTEC '99).
- One can also override the ServerUserAuthService#resolveWelcomeBanner method and use whatever other content customization one sees fit.
Note:
- If any of the sources yields an empty string or is missing (in the case of a resource) then no welcome banner message is sent.
- If the banner is loaded from a file or URL resource, then one can configure the Charset used to convert the file's contents into a string via the ServerAuthenticationManager.WELCOME_BANNER_CHARSET configuration key (default=UTF-8).
- In this context, see also the ServerAuthenticationManager.WELCOME_BANNER_LANGUAGE configuration key - which provides control over the declared language tag, although most clients seem to ignore it.
According to RFC 4252 - section 5.4:
The SSH server may send an SSH_MSG_USERAUTH_BANNER message at any time after this authentication protocol starts and before authentication is successful.
The code contains a WelcomeBannerPhase enumeration that can be used to configure - via the ServerAuthenticationManager.WELCOME_BANNER_PHASE configuration key - the authentication phase at which the welcome banner is sent (see also the ServerAuthenticationManager.DEFAULT_BANNER_PHASE value). In this context, note that if the NEVER phase is configured, no banner will be sent even if one has been configured via one of the methods mentioned previously.
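A hedged configuration sketch - the banner text is illustrative, and whether the phase property accepts the enum constant or only its name may depend on the version (the name is used here):
// Simple string banner
PropertyResolverUtils.updateProperty(sshd, ServerAuthenticationManager.WELCOME_BANNER, "Welcome to my server\n");
// NEVER suppresses the banner even if one is configured
PropertyResolverUtils.updateProperty(sshd, ServerAuthenticationManager.WELCOME_BANNER_PHASE, WelcomeBannerPhase.NEVER.name());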
This interface provides the ability to intervene during the connection and authentication phases and "re-write" the user's original parameters. The DefaultConfigFileHostEntryResolver instance used to set up the default client instance follows the SSH config file standards, but the interface can be replaced so as to implement whatever proprietary logic is required.
SshClient client = SshClient.setUpDefaultClient();
client.setHostConfigEntryResolver(new MyHostConfigEntryResolver());
client.start();
/*
* The resolver might decide to connect to some host2/port2 using user2 and password2
* (or maybe using some key instead of the password).
*/
try (ClientSession session = client.connect(user1, host1, port1).verify(...timeout...).getSession()) {
session.addPasswordIdentity(...password1...);
session.auth().verify(...timeout...);
}
Can be used to read various standard SSH client or server configuration files and initialize the client/server respectively - including (among other things) the bind address, ciphers, signatures, MAC(s), KEX protocols, compression, welcome banner, etc...
The code supports registering many types of event listeners that enable receiving notifications about important events as well as sometimes intervening in the way these events are handled. All listener interfaces extend SshdEventListener so they can be easily detected and distinguished from other EventListener(s).
In general, event listeners are cumulative - e.g., any channel event listeners registered on the SshClient/Server are automatically added to all sessions, in addition to any such listeners registered on the Session, as well as any specific listeners registered on a specific Channel - e.g.,
// Any channel event will be signalled to ALL the registered listeners
sshClient/Server.addChannelListener(new Listener1());
sshClient/Server.addSessionListener(new SessionListener() {
@Override
public void sessionCreated(Session session) {
session.addChannelListener(new Listener2());
session.addChannelListener(new ChannelListener() {
@Override
public void channelInitialized(Channel channel) {
channel.addChannelListener(new Listener3());
}
});
}
});
Informs about session related events. One can modify the session - although the modification effect depends on the session's state. E.g., if one changes the ciphers after the key exchange (KEX) phase, then they will take effect only if the keys are re-negotiated. It is important to read the documentation very carefully and understand at which stage each listener method is invoked and what are the repercussions of changes at that stage. In this context, it is worth mentioning that one can attach to sessions arbitrary attributes that can be retrieved by the user's code later on:
public static final AttributeKey<String> STR_KEY = new AttributeKey<>();
public static final AttributeKey<Long> LONG_KEY = new AttributeKey<>();
sshClient/Server.addSessionListener(new SessionListener() {
@Override
public void sessionCreated(Session session) {
session.setAttribute(STR_KEY, "Some string value");
session.setAttribute(LONG_KEY, 3777347L);
// ...etc...
}
@Override
public void sessionClosed(Session session) {
String str = session.getAttribute(STR_KEY);
Long l = session.getAttribute(LONG_KEY);
// ... do something with the retrieved attributes ...
}
});
Informs about channel related events - as with sessions, one can influence the channel to some extent, depending on the channel's state. The ability to influence channels is much more limited than for sessions. In this context, it is worth mentioning that one can attach to channels arbitrary attributes that can be retrieved by the user's code later on - the same way as is done for sessions.
Invoked whenever a message intended for an unknown channel is received. By default, the code ignores the vast majority of such messages and logs them at DEBUG level. For a select few types of messages the code generates an SSH_MSG_CHANNEL_FAILURE packet that is sent to the peer session - see the DefaultUnknownChannelReferenceHandler implementation. The user may register handlers at any level - client/server, session and/or connection service - the one registered "closest" to the connection service will be used.
Informs about signal requests as described in RFC 4254 - section 6.9, break requests (sent as SIGINT) as described in RFC 4335 and "window-change" (sent as SIGWINCH) requests as described in RFC 4254 - section 6.7
Provides information about major SFTP protocol events. The listener is registered at the SftpSubsystemFactory:
SftpSubsystemFactory factory = new SftpSubsystemFactory();
factory.addSftpEventListener(new MySftpEventListener());
sshd.setSubsystemFactories(Collections.<NamedFactory<Command>>singletonList(factory));
Informs and allows tracking of port forwarding events as described in RFC 4254 - section 7 as well as the (simple) SOCKS protocol (versions 4, 5). In this context, one can create a PortForwardingTracker that can be used in a try-with-resource block so that the set up forwarding is automatically torn down when the tracker is close()-d:
try (ClientSession session = client.connect(user, host, port).verify(...timeout...).getSession()) {
session.addPasswordIdentity(password);
session.auth().verify(...timeout...);
try (PortForwardingTracker tracker = session.createLocal/RemotePortForwardingTracker(...)) {
...do something that requires the tunnel...
}
// Tunnel is torn down when code reaches this point
}
Informs about SCP related events. ScpTransferEventListener(s) can be registered on both the client and server side:
// Server side
ScpCommandFactory factory = new ScpCommandFactory(...with/out delegate..);
factory.addEventListener(new MyServerSideScpTransferEventListener());
sshd.setCommandFactory(factory);
// Client side
try (ClientSession session = client.connect(user, host, port).verify(...timeout...).getSession()) {
session.addPasswordIdentity(password);
session.auth().verify(...timeout...);
ScpClient scp = session.createScpClient(new MyClientSideScpTransferEventListener());
...scp.upload/download...
}
The implementation can be used to intercept and process the SSH_MSG_IGNORE, SSH_MSG_DEBUG and SSH_MSG_UNIMPLEMENTED messages. The handler can be registered on either side - server or client, as well as on the session. A special patch has been introduced that automatically ignores such messages if they are malformed - i.e., they never reach the handler.
RFC 4253 - section 9 recommends re-exchanging keys every once in a while based on the amount of traffic and the selected cipher - the matter is further clarified in RFC 4251 - section 9.3.2. These recommendations are mirrored in the code via the FactoryManager related REKEY_TIME_LIMIT, REKEY_PACKETS_LIMIT and REKEY_BLOCKS_LIMIT configuration properties that can be used to configure said behavior - please be sure to read the relevant Javadoc as well as the aforementioned RFC section(s) when manipulating them. This behavior can also be controlled programmatically by overriding the AbstractSession#isRekeyRequired() method.
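A hedged example of tightening these thresholds - the values are arbitrary, and the time limit is assumed to be expressed in milliseconds (check the Javadoc before relying on this):
// Re-exchange keys after one hour or 1,000,000 packets - whichever comes first
PropertyResolverUtils.updateProperty(client, FactoryManager.REKEY_TIME_LIMIT, TimeUnit.HOURS.toMillis(1L));
PropertyResolverUtils.updateProperty(client, FactoryManager.REKEY_PACKETS_LIMIT, 1_000_000L);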
As an added security mechanism RFC 4251 - section 9.3.1 recommends adding "spurious" SSH_MSG_IGNORE messages. This functionality is mirrored in the FactoryManager related IGNORE_MESSAGE_FREQUENCY, IGNORE_MESSAGE_VARIANCE and IGNORE_MESSAGE_SIZE configuration properties that can be used to configure said behavior - please be sure to read the relevant Javadoc as well as the aforementioned RFC section when manipulating them. This behavior can also be controlled programmatically by overriding the AbstractSession#resolveIgnoreBufferDataLength() method.
// client side
SshClient client = SshClient.setUpDefaultClient();
// This is the default for ALL sessions unless specifically overridden
client.setReservedSessionMessagesHandler(new MyClientSideReservedSessionMessagesHandler());
// Adding it via a session listener
client.addSessionListener(new SessionListener() {
@Override
public void sessionCreated(Session session) {
// Overrides the one set at the client level.
if (isSomeSessionOfInterest(session)) {
session.setReservedSessionMessagesHandler(new MyClientSessionReservedSessionMessagesHandler(session));
}
}
});
try (ClientSession session = client.connect(user, host, port).verify(...timeout...).getSession()) {
// setting it explicitly
session.setReservedSessionMessagesHandler(new MyOtherClientSessionReservedSessionMessagesHandler(session));
session.addPasswordIdentity(password);
session.auth().verify(...timeout...);
...use the session...
}
// server side
SshServer server = SshServer.setUpDefaultServer();
// This is the default for ALL sessions unless specifically overridden
server.setReservedSessionMessagesHandler(new MyServerSideReservedSessionMessagesHandler());
// Adding it via a session listener
server.addSessionListener(new SessionListener() {
@Override
public void sessionCreated(Session session) {
// Overrides the one set at the server level.
if (isSomeSessionOfInterest(session)) {
session.setReservedSessionMessagesHandler(new MyServerSessionReservedSessionMessagesHandler(session));
}
}
});
NOTE: Unlike "regular" event listeners, the handler is not cumulative - i.e., setting it overrides the previous instance rather than being accumulated. However, one can use the EventListenerUtils and create a cumulative listener - see how the SessionListener or ChannelListener proxies were implemented.
The code supports both global and channel-specific requests via the registration of RequestHandler(s). The global handlers are derived from ConnectionServiceRequestHandler(s) whereas the channel-specific ones are derived from ChannelRequestHandler(s). In order to add a handler one need only register the correct implementation and handle the request when it is detected. For global request handlers this is done by registering them on the server:
// NOTE: the following code can be employed on BOTH client and server - the example is for the server
SshServer server = SshServer.setUpDefaultServer();
List<RequestHandler<ConnectionService>> oldGlobals = server.getGlobalRequestHandlers();
// Create a copy in case current one is null/empty/un-modifiable
List<RequestHandler<ConnectionService>> newGlobals = new ArrayList<>();
if (GenericUtils.size(oldGlobals) > 0) {
newGlobals.addAll(oldGlobals);
}
newGlobals.add(new MyGlobalRequestHandler());
server.setGlobalRequestHandlers(newGlobals);
For channel-specific requests, one uses the channel's add/removeRequestHandler method to manage its handlers. The way request handlers are invoked when a global/channel-specific request is received is as follows:
-
All currently registered handlers'
process
method is invoked with the request type string parameter (among others). The implementation should examine the request parameters and decide whether it is able to process it. -
If the handler returns
Result.Unsupported
then the next registered handler is invoked. In other words, processing stops at the first handler that returns a valid response. Hence the importance of the order of the List<RequestHandler<...>>, which determines the sequence in which the handlers are invoked. Note: while it is possible to register multiple handlers for the same request and rely on their order, it is highly recommended to avoid this situation as it makes debugging the code and diagnosing problems much more difficult. -
If no handler reported a valid result value then a failure message is sent back to the peer. Otherwise, the returned result is translated into the appropriate success/failure response (if the sender asked for a response). In this context, the handler may choose to build and send the response within its own code, in which case it should return the
Result.Replied
value indicating that it has done so.
public class MySpecialChannelRequestHandler implements ChannelRequestHandler {
...
@Override
public Result process(Channel channel, String request, boolean wantReply, Buffer buffer) throws Exception {
if (!"my-special-request".equals(request)) {
return Result.Unsupported; // Not mine - maybe someone else can handle it
}
...handle the request - can read more parameters from the message buffer...
return Result.ReplySuccess/Failure/Replied; // signal processing result
}
}
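One possible way (a sketch, not the only option) to attach such a handler to channels as they are created is via a ChannelListener registered on the client/server. MySpecialChannelRequestHandler is the hypothetical class from the example above, and the assumption is that ChannelListener#channelInitialized is available as a callback:

// Server side shown - the same idea applies to the client
SshServer server = SshServer.setUpDefaultServer();
server.addChannelListener(new ChannelListener() {
    @Override
    public void channelInitialized(Channel channel) {
        // Register the channel-specific handler on every newly created channel
        channel.addRequestHandler(new MySpecialChannelRequestHandler());
    }
});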
-
exit-signal
,exit-status
- As described in RFC4254 section 6.10 -
*@putty.projects.tartarus.org
- As described in Appendix F: SSH-2 names specified for PuTTY -
hostkeys-prove-00@openssh.com
,hostkeys-00@openssh.com
- As described in OpenSSH protocol - section 2.5 -
tcpip-forward
,cancel-tcpip-forward
- As described in RFC4254 section 7 -
keepalive@*
- Used by many implementations (including this one) to "ping" the peer and make sure the connection is still alive. In this context, the SSHD code allows the user to configure both the frequency and content of the heartbeat request (including whether to send it at all) via the ClientFactoryManager HEARTBEAT_INTERVAL, HEARTBEAT_REQUEST and DEFAULT_KEEP_ALIVE_HEARTBEAT_STRING configuration properties (see the sketch after this list). -
no-more-sessions@*
- As described in OpenSSH protocol section 2.2. In this context, the code consults the ServerFactoryManager.MAX_CONCURRENT_SESSIONS
server-side configuration property in order to decide whether to accept a successfully authenticated session.
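As an example, a minimal sketch of enabling the client-side heartbeat via the properties mentioned in the keepalive@* item above. The assumption is that the constants reside in ClientFactoryManager and that the interval is expressed in milliseconds - verify the exact names and value semantics against the Javadoc:

SshClient client = SshClient.setUpDefaultClient();
// Send a heartbeat request roughly every 30 seconds (assumption: the interval value is in milliseconds)
PropertyResolverUtils.updateProperty(client, ClientFactoryManager.HEARTBEAT_INTERVAL, TimeUnit.SECONDS.toMillis(30));
// Optionally override the request name sent to the peer - any keepalive@* style name should be acceptable
PropertyResolverUtils.updateProperty(client, ClientFactoryManager.HEARTBEAT_REQUEST, "keepalive@sshd.apache.org");
client.start();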
There are several extension modules available - specifically, the sshd-contrib module contains some of them. Note: this module contains experimental code that may eventually find its way into a standard artifact. It is also subject to change and/or deletion without prior announcement. Therefore, any code that relies on it should keep a copy of the sources in case the classes it uses are modified or deleted.
The apache-sshd.zip distribution provides Windows/Linux
scripts that use the MINA SSHD code base to implement the common ssh, scp and sftp commands. The clients accept most of the useful switches of the original commands they mimic, and the -o Option=Value arguments can be used to configure the client/server in addition to the system properties mechanism. For more details, consult the code of the main methods in the respective SshClientMain, SftpCommandMain and ScpClientMain classes. The code also includes SshKeyScanMain, which is a simple implementation of ssh-keyscan(1).
The distribution also includes an sshd script that can be used to launch a server instance - see SshServerMain#main for the available command line arguments and options.
In order to use this CLI code as part of another project, one needs to include the sshd-cli module:
<dependency>
<groupId>org.apache.sshd</groupId>
<artifactId>sshd-cli</artifactId>
<version>...same version as the core...</version>
</dependency>
- SftpCommandMain - by default uses an internal
SftpClientFactory
. This can be overridden as follows:
-
Provide a
-o SftpClientFactory=XXX
command line argument where the option specifies the fully-qualified name of the class that implements this interface. -
Add a
META-INF/services/org.apache.sshd.client.subsystem.sftp.SftpClientFactory
file containing the fully-qualified name of the class that implements this interface. Note: if more than one such instance is detected an exception is thrown.
Note: The specified class(es) must be public and contain a public no-args constructor.
-
Port - by default the SSH server is set up to listen on port 8000 in order to avoid conflicts with any running SSH O/S daemon. This can be modified by providing a
-p NNNN
or-o Port=NNNN
command line option. -
Subsystem(s) - the server automatically detects subsystems using the Java ServiceLoader mechanism. This can be overridden as follows (in this order; a brief sketch follows this list):
-
Provide a
org.apache.sshd.server.subsystem.SubsystemFactory
system property containing comma-separated fully-qualified names of classes implementing this interface. The implementations must be public and have a public no-args constructor for instantiating them. The order of the provided subsystems will be according to their order in the specified list. -
Provide a
-o Subsystem=xxx,yyy
command line argument where the value is a comma-separated list of the name(s) of the auto-detected factories via the ServiceLoader mechanism. The special value none
may be used to indicate that no subsystem is to be configured. Note: no specific order is provided when subsystems are auto-detected and/or filtered.
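For illustration, a hedged sketch of combining the two mechanisms programmatically before launching the CLI server - com.example.MySubsystemFactory is a hypothetical implementation, and the -p option is the one described above:

// Register a (hypothetical) additional subsystem factory via the documented system property,
// then launch the CLI server on a non-default port using the -p option described above
System.setProperty("org.apache.sshd.server.subsystem.SubsystemFactory", "com.example.MySubsystemFactory");
SshServerMain.main(new String[] { "-p", "2222" });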
The sshd-git artifact contains both client-side and server-side command factories for issuing and handling some git commands. The code is based on JGit and interacts with it smoothly.
This module provides SSHD-based replacements for the SSH and SFTP transports used by the JGIT client - see GitSshdSessionFactory
- it can be used as a drop-in replacement
for the JSch-based built-in session factory provided by JGit. In this context, it is worth noting that the GitSshdSessionFactory
has been tailored so as to provide
flexible control over which SshClient
instance to use, and even which ClientSession
. The default instance allocates a new client every time a new GitSshdSession
is created - which is
started and stopped as necessary. However, this can be pretty wasteful, so if one intends to issue several commands that access GIT repositories via SSH, one should maintain a single
client instance and re-use it:
SshClient client = ...create and setup the client...
try {
client.start();
GitSshdSessionFactory sshdFactory = new GitSshdSessionFactory(client); // re-use the same client for all SSH sessions
org.eclipse.jgit.transport.SshSessionFactory.setInstance(sshdFactory); // replace the JSCH-based factory
... issue GIT commands that access remote repositories via SSH ....
} finally {
client.stop();
}
See GitPackCommandFactory
and GitPgmCommandFactory
- in order for the various commands to function correctly, they require a GitLocationResolver
that is invoked in order to allow the user to decide which is the correct GIT repository root location for a given command. The resolver is provided
with all the relevant details - including the command and server session through which the command was received:
GitLocationResolver resolver = (cmd, session, fs) -> ...consult some code - perhaps based on the authenticated username...
sshd.setCommandFactory(new GitPackCommandFactory().withGitLocationResolver(resolver));
These command factories also accept a delegate to which non-git commands are routed:
sshd.setCommandFactory(new GitPackCommandFactory()
.withDelegate(new MyCommandFactory())
.withGitLocationResolver(resolver));
// Here is how it looks if SCP is also requested
sshd.setCommandFactory(new GitPackCommandFactory()
.withDelegate(new ScpCommandFactory()
.withDelegate(new MyCommandFactory()))
.withGitLocationResolver(resolver));
// or
sshd.setCommandFactory(new ScpCommandFactory()
.withDelegate(new GitPackCommandFactory()
.withDelegate(new MyCommandFactory())
.withGitLocationResolver(resolver)));
// or any other combination ...
As with all other built-in commands, the factories allow the user to provide an ExecutorService in order to control the threads spawned for servicing the commands. If none is provided, an internal single-threaded "pool" is created ad-hoc and destroyed once the command execution is completed (regardless of whether it was successful or not):
sshd.setCommandFactory(new GitPackCommandFactory(resolver)
.withDelegate(new MyCommandFactory())
.withExecutorService(myService)
.withShutdownOnExit(false));
The sshd-ldap artifact contains an LdapPasswordAuthenticator and an LdapPublicKeyAuthenticator that have been written along the same lines as the openssh-ldap-publickey project. The authenticators can be easily configured to match most LDAP schemes, or alternatively serve as base classes for code that extends them and adds proprietary logic.
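A minimal wiring sketch - the authenticator class names are the ones mentioned above, but the LDAP connection configuration is intentionally elided since the exact setters (host, port, base DN, search filter, etc.) should be taken from the classes' Javadoc; the no-args constructors are assumed here:

SshServer sshd = SshServer.setUpDefaultServer();

LdapPasswordAuthenticator passwordAuth = new LdapPasswordAuthenticator();   // assumed no-args constructor
// ... configure the LDAP connection details via the authenticator's setters (see its Javadoc) ...
sshd.setPasswordAuthenticator(passwordAuth);

LdapPublicKeyAuthenticator pubkeyAuth = new LdapPublicKeyAuthenticator();   // assumed no-args constructor
// ... configure similarly ...
sshd.setPublickeyAuthenticator(pubkeyAuth);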
The code contains support for "wrapper" protocols such as PROXY or sslh. The idea is that one can register either a ClientProxyConnector
or ServerProxyAcceptor
and intercept the 1st packet being sent/received (respectively) before it reaches the SSHD code. This gives the programmer the capability to write a front-end that routes outgoing/incoming packets (a registration sketch follows the list below):
-
SshClient/ClientSession#setClientProxyConnector
- sets a proxy that intercepts the 1st packet before being sent to the server -
SshServer/ServerSession#setServerProxyAcceptor
- sets a proxy that intercepts the 1st incoming packet before being processed by the server
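Here is the registration sketch referenced above - MyClientProxyConnector and MyServerProxyAcceptor are hypothetical implementations of the respective interfaces:

// Client side - the connector is given a chance to emit its wrapper data (e.g., a PROXY header) before the 1st SSH packet is sent
SshClient client = SshClient.setUpDefaultClient();
client.setClientProxyConnector(new MyClientProxyConnector());

// Server side - the acceptor consumes the wrapper protocol data before the SSHD code processes the 1st incoming packet
SshServer server = SshServer.setUpDefaultServer();
server.setServerProxyAcceptor(new MyServerProxyAcceptor());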
-
PUTTY key file(s) readers - see
org.apache.sshd.common.config.keys.loader.putty
package - specifically PuttyKeyUtils#DEFAULT_INSTANCE KeyPairResourceParser
. -
InteractivePasswordIdentityProvider
- helps implement a PasswordIdentityProvider by delegating calls to UserInteraction#getUpdatedPassword
. The way to use it would be as follows:
try (ClientSession session = client.connect(login, host, port).verify(...timeout...).getSession()) {
session.setUserInteraction(...); // this can also be set at the client level
PasswordIdentityProvider passwordIdentityProvider =
InteractivePasswordIdentityProvider.providerOf(session, "My prompt");
session.setPasswordIdentityProvider(passwordIdentityProvider);
session.auth().verify(...timeout...);
... continue with the authenticated session ...
}
or
UserInteraction ui = ....;
try (ClientSession session = client.connect(login, host, port).verify(...timeout...).getSession()) {
PasswordIdentityProvider passwordIdentityProvider =
InteractivePasswordIdentityProvider.providerOf(session, ui, "My prompt");
session.setPasswordIdentityProvider(passwordIdentityProvider);
session.auth().verify(...timeout...);
... continue with the authenticated session ...
}
Note: UserInteraction#isInteractionAllowed
is consulted prior to invoking getUpdatedPassword
- if it returns false then the password retrieval method is not invoked and it is assumed that no more passwords are available.
-
SimpleAccessControlScpEventListener
- Provides a simple access control by making a distinction between methods that upload data and ones that download it via SCP. In order to use it, simply extend it and override its isFileUpload/DownloadAllowed
methods -
SimpleAccessControlSftpEventListener
- Provides a simple access control by making a distinction between methods that provide SFTP file information - including reading data - and those that modify it -
ProxyProtocolAcceptor
- A working prototype to support the PROXY protocol as described in HAProxy Documentation -
ThrottlingPacketWriter
- An example of a way to overcome big window sizes when sending data - as described in SSHD-754 and SSHD-768
Below is the list of builtin components:
- Ciphers: aes128cbc, aes128ctr, aes192cbc, aes192ctr, aes256cbc, aes256ctr, arcfour128, arcfour256, blowfishcbc, tripledescbc
- Digests: md5, sha1, sha224, sha384, sha512
- Macs: hmacmd5, hmacmd596, hmacsha1, hmacsha196, hmacsha256, hmacsha512
- Key exchange: dhg1, dhg14, dhgex, dhgex256, ecdhp256, ecdhp384, ecdhp521
- Compressions: none, zlib, zlib@openssh.com
- Signatures/Keys: ssh-dss, ssh-rsa, nistp256, nistp384, nistp521, ed25519 (requires
eddsa
optional module)