How Remoteler Works

Secure Edge Access

Remoteler implements an encrypted reverse tunnel that creates access routes to resources on unstable network connections, edge computing resources, and private networks, including resources behind NAT, without exposing them publicly on the internet.

The basics

Suppose you manufacture small ARM-powered devices like network equipment or self-driving vehicles. Perhaps you deploy small server clusters on the edge. These devices connect to the internet via an unreliable cellular network or a private network behind NAT. In this case, Remoteler allows you to do the following:

  • Connect to remote devices via SSH as if they were located in your own cloud.
  • Connect to remote Kubernetes clusters as if they were located in your own cloud.
  • Connect to web applications running on remote devices using a web browser via HTTPS.
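
For example, all three access modes go through the same client once a user authenticates with the cluster’s proxy. A minimal sketch, assuming a remote device joined as node-r1, a Kubernetes cluster registered under the hypothetical name edge-k8s, and a hypothetical web app named dashboard:

# log in once via the proxy:
$ tsh login --proxy=proxy.example.com

# SSH into the remote device:
$ tsh ssh node-r1

# point kubectl at the remote Kubernetes cluster:
$ tsh kube login edge-k8s
$ kubectl get pods

# web apps are served through the proxy over HTTPS, e.g.
# https://dashboard.proxy.example.com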

This approach is superior to distributed VPN technology because Remoteler is application-aware. Enforcing security at a higher layer of the OSI model adheres to the principles of Zero Trust, in which networks, including VPNs, are considered inherently untrustworthy. Being application-aware allows Remoteler to provide more flexibility in configuring role-based access control and to implement rich audit logging.
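
As a sketch of what application-level control enables, roles and their permissions are managed as cluster resources. This assumes the distribution ships an admin tool like tctl; the role file name below is hypothetical:

# list the cluster's RBAC roles:
$ tctl get roles

# create or overwrite a role from a resource file:
$ tctl create -f developer-role.yaml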

Architecture

The underlying technology is the reverse tunnel: a secure connection established by an edge site into a Remoteler cluster via the cluster’s proxy.

There are two types of reverse tunnels:

  • A reverse tunnel between a remote node and a Remoteler cluster.
  • A reverse tunnel between two Remoteler clusters. Such clusters are called Trusted Clusters.

Let’s look into each type in more detail.

Connecting Remote Nodes

The diagram below shows a Remoteler cluster accessible via a proxy at proxy.example.com. This cluster has two regular nodes (A and B) and one remote node (R1).

From a user’s perspective, there is no difference between the nodes on the private network and the remote node-R1. A user may have an Ansible script that pushes a code update to all nodes simultaneously, i.e. node-A, node-B, and node-R1:

# log in to the cluster:
$ tsh login --proxy=proxy.example.com

# SSH into a node running on the VPC/LAN:
$ tsh ssh node-a

# SSH into the remote node:
$ tsh ssh node-r1
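
For tools like Ansible that expect plain OpenSSH, the client can emit an OpenSSH configuration. A hedged sketch, assuming a tsh version that supports the config subcommand:

# generate OpenSSH client configuration for the cluster:
$ tsh config >> ~/.ssh/config

# Ansible now addresses local and remote nodes uniformly:
$ ansible all -i "node-a,node-b,node-r1," -m ping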

The teleport daemon on regular nodes A and B is usually configured as a systemd unit and takes the following command-line arguments:

# local nodes need to know the address of the auth service on the private network
$ teleport start --auth=auth.example.com
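
A minimal sketch of running this under systemd, assuming the package installs a unit named teleport:

# enable the daemon at boot and start it now:
$ systemctl enable --now teleport
$ systemctl status teleport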

But the teleport daemon on the remote device does not have access to auth.example.com, because it resolves to a local IP on the private network. The remote nodes must use the address of a proxy instead:

# remote nodes use the proxy address instead of the auth service:
$ teleport start --auth=proxy.example.com

Using the Remoteler Proxy Service address instead of the Remoteler Auth Service address instructs the teleport daemon to establish a permanent reverse tunnel to the proxy, through which future user connections will be proxied.
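
Once the tunnel is up, the remote node appears next to the regular ones. A hedged sketch of verifying this from a client; the output is abridged and illustrative:

$ tsh ls
Node Name   Address          Labels
---------   -------          ------
node-a      10.1.0.5:3022
node-b      10.1.0.6:3022
node-r1     ⟵ tunnel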

Why not connect all nodes as remote, then? You can, but reverse tunnels require more system resources, so using the Remoteler Auth Service address where possible is more efficient, as it reduces the load on the proxy.

Connecting Remote Clusters

It is also possible to create reverse tunnels between two Remoteler clusters. This may be useful in the following scenarios:

  • Establishing seamless access across many cloud environments or data centers via a single gateway (Remoteler Proxy Service). This use case is popular with large-scale SaaS vendors.
  • Establishing seamless access across many edge environments, where each environment consists of multiple nodes. This use case is common in the energy, retail, and transportation sectors.
  • Establishing secure access to cloud environments owned by other organizations. This use case is popular with managed service providers who manage cloud infrastructure and cloud applications for their clients.

To understand how this works, consider the diagram below, which shows two Remoteler clusters:

  • On the right, we have the ROOT cluster. Its proxy service is accessible via root.example.com and it has two nodes named A and B.
  • On the left, we have the LEAF cluster. Its proxy service is accessible via leaf.example.com and it also has two nodes, also named A and B.

How do we give users access to both clusters?

One approach is to configure both clusters with the same identity store, such as GitHub or Google Apps. A user will then have to log in via two different proxies: leaf.example.com and root.example.com.

This works unless one of the following is true:

  • leaf.example.com is behind NAT and inaccessible to external users.
  • There are too many leaf clusters, making manual switching between them cumbersome.
  • The LEAF and ROOT clusters are managed by different organizations, making it impossible to share an identity store.

Another approach is to let users go through root.example.com and configure the ROOT cluster to proxy their connections into the LEAF cluster. This capability is called Trusted Clusters in Remoteler documentation.
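
A hedged sketch of how the trust is established: the LEAF cluster registers the ROOT by creating a trusted_cluster resource. The file’s contents, including the join token, are omitted here:

# on the LEAF cluster, register the ROOT as trusted:
$ tctl create trusted-cluster.yaml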

By creating a reverse tunnel from the LEAF to the ROOT, the ROOT cluster becomes “trusted,” because its users are now allowed to access the LEAF. The connection from a user to node A inside the LEAF will look like this:

User Experience

In the scenario above with two trusted clusters, here is how a user session may look:
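
A minimal sketch, assuming the LEAF cluster registered itself under the name leaf:

# log in once through the ROOT proxy:
$ tsh login --proxy=root.example.com

# list the clusters reachable through this proxy:
$ tsh clusters

# SSH into node A inside the LEAF cluster:
$ tsh ssh --cluster=leaf node-a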

Audit log

Note that the audit records never leave cluster boundaries. When users connect to LEAF nodes via the ROOT cluster, their actions will be recorded in the audit log of the LEAF cluster.
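
A hedged sketch of inspecting this on the LEAF cluster’s auth server; the path follows upstream defaults and may differ in your deployment:

# audit events are written as JSON files on the auth server:
$ ls /var/lib/teleport/log/
$ tail -f /var/lib/teleport/log/*.log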

Try Remoteler Today!