API use cases

AWS registration

Registration of an asset is performed in a user data script. We provide an example script below that should work with any Linux AMI; note that its package installation commands use apt, so substitute yum or dnf on distributions such as the standard AWS Linux AMI.

Setup in /login

We configure the endpoints that come online so that they all go into the same Jump Group and are accessed via the same Jumpoint. For this example, we use a Jumpoint with ID 1 and a shared Jump Group with ID 1. These are referenced in the script below as JUMPOINT_ID and JUMP_GROUP_ID. Configure access to this Jumpoint and Jump Group as needed.

Generate an API account for your AWS scripts to use, and note the CLIENT_ID and CLIENT_SECRET for use in the script below.

The API Account created does not need access to Vault in this example.
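
If you want to sanity-check the API account before wiring it into a user data script, a token request such as the following should return an access token. This is a minimal sketch assuming the standard OAuth2 client credentials flow (the same flow the Lambda example later in this section uses); the hostname is hypothetical.

# Hypothetical hostname; CLIENT_ID and CLIENT_SECRET come from the API account above
curl -s -u "CLIENT_ID:CLIENT_SECRET" \
    -d "grant_type=client_credentials" \
    "https://example.beyondtrustcloud.com/oauth2/token"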

Setup SSH credentials in Vault

If you already have a key pair in AWS you want to use, make sure you have the private key available. If not, open the EC2 section and navigate to Network and Security > Key Pairs in the AWS console. Generate a new key pair and save the private key.

In /login, navigate to Vault > Accounts and add a new generic account. Set the type to SSH and add the username you are using on the AMI (AWS defaults this to ec2-user) as well as the private key. This username is the TARGET_USER in the script below.

At the bottom of the account configuration, associate this account with the Jump Group from above by selecting Jump Items Matching Criteria and selecting the desired Jump Group.

Save the new account.

Once the account is saved, configure a Group Policy to grant users permission to inject it.
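
These steps can also be scripted. The sketch below uses the btapi CLI with the same Vault account fields that the "Scripting a new setup" script later in this section uses; the account name and key path are hypothetical.

# Hypothetical account name and key path; creates a generic SSH account in Vault
echo "
type=ssh
name=\"EC2 ec2-user\"
username=ec2-user
private_key=\"$(cat ./my-aws-key.pem)\"
" | ./btapi -k add vault/account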

Deploy the instances in EC2

EC2 instance initialization is performed with user data scripts. The script below registers a Linux instance as a Shell Jump Item with the Jumpoint and Jump Group configured above.

Prepare and deploy a Linux AMI in EC2. In the user data field, paste this script:

#!/bin/bash

# SRA API Credentials
export BT_CLIENT_ID=XXX
export BT_CLIENT_SECRET=XXX
export BT_API_HOST=XXX

# The Jump Group and Jumpoint to use for the Jump Item we create
JUMP_GROUP_ID=1
JUMP_GROUP_TYPE=shared
JUMPOINT_ID=1

TARGET_USER=ec2-user
# Query the AWS Meta-data service for information about this instance to use
# when creating the Jump Item
INSTANCE_ID=`curl http://169.254.169.254/latest/meta-data/instance-id`
INSTANCE_IP=`curl http://169.254.169.254/latest/meta-data/public-ipv4`
INSTANCE_NAME=$INSTANCE_IP
http_response=$(curl -s -o name.txt -w "%{http_code}" http://169.254.169.254/latest/meta-data/tags/instance/Name)
if [ "$http_response" == "200" ]; then
    INSTANCE_NAME=$(cat name.txt)
fi

# Install unzip (this AMI uses apt; use yum or dnf on Amazon Linux instead)
apt update
apt install -y unzip
curl -o btapi.zip -L https://$BT_API_HOST/api/config/v1/cli/linux
unzip btapi.zip

echo "
name=\"${INSTANCE_NAME:-$INSTANCE_IP}\"
hostname=$INSTANCE_IP
jump_group_id=$JUMP_GROUP_ID
jump_group_type=$JUMP_GROUP_TYPE
username=$TARGET_USER
protocol=ssh
port=22
terminal=xterm
jumpoint_id=$JUMPOINT_ID
tag=$INSTANCE_ID
" | ./btapi -k add jump-item/shell-jump

rm name.txt
rm btapi
rm btapi.zip

  • Add the client credentials as BT_CLIENT_ID and BT_CLIENT_SECRET.
  • Add the site’s hostname as BT_API_HOST (just the hostname, with no https:// prefix).
  • Make sure that TARGET_USER, JUMPOINT_ID, and JUMP_GROUP_ID (and type) are the values configured above.

This script downloads the btapi command line tool and pipes the instance’s data to create a new Shell Jump Item. The Jump Item is available for immediate use once the instance shows online.

This script uses the InstanceId as the item’s tag so that you may easily filter it later when performing cleanup. It also attempts to read the instance’s Name tag to use as the Jump Item’s name field for easy identification later. In order for this to work, you must set Allow tags in metadata to Enable when launching the instance in AWS. If the Name is not available, the instance’s IP address is used instead.
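
The cleanup examples below rely on this tag. For reference, the same lookup can be run by hand from any machine with the btapi CLI downloaded; the instance ID here is hypothetical.

# Hypothetical instance ID; lists the Shell Jump Item whose tag matches it
echo "tag=i-0123456789abcdef0" | ./btapi -kK list jump-item/shell-jump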

AWS cleanup

Cleaning up terminated AWS Jump Items may be automated in multiple ways, depending on the desired behavior. Here, we show two different methods: a script that may be run on demand to clean up terminated instances, and an AWS Lambda function with an EventBridge rule that triggers it automatically.

On-demand script

If you want to clean up Jump Items on demand, the following script can be run as needed, or scheduled with a tool like cron (see the example crontab entry after the script).

#!/bin/bash

export BT_CLIENT_ID=XXX
export BT_CLIENT_SECRET=XXX
export BT_API_HOST=XXX

export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=XXX

# Note this requires the AWS CLI tool to be installed
INSTANCE_IDS=$(aws ec2 describe-instances --query 'Reservations[*].Instances[*].[InstanceId]' --filters 'Name=instance-state-name,Values=[terminated]' --output text)

if [[ -z "$INSTANCE_IDS" ]]; then
  exit
fi

for inst in $INSTANCE_IDS; do
  ID=$(echo "tag=$inst" | btapi --env-file=~/.config/aws-api -kK list jump-item/shell-jump | perl -ne '/^0__id=(\d+)$/ && print $1')
  if [ -n "$ID" ]; then
    btapi --env-file=~/.config/aws-api delete jump-item/shell-jump $ID
  fi
done
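
For scheduled cleanup, a crontab entry along these lines runs the script at the top of every hour; the script path is hypothetical.

# Hypothetical path; run the cleanup script hourly
0 * * * * /opt/bt-cleanup/cleanup-terminated-jump-items.sh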

AWS hooks

Setting up the hooks requires two pieces in AWS:

  • A Lambda function to do the cleanup
  • An EventBridge rule to call the Lambda function

The following example is one way to configure these pieces.

Create the lambda

This example uses Python, but you can apply the same logic in any language you prefer.

This example makes use of the requests, requests_oauthlib, and oauthlib Python libraries. To use these, you must create and upload a layer with these dependencies and attach it to the Lambda. This may be done from a local Linux machine with the same Python version the Lambda uses, or you may use the AWS Cloud9 service to spin up a compatible environment.

To create the layer, use the following commands:

mkdir tmp
cd tmp
virtualenv v-env
source ./v-env/bin/activate
pip install requests oauthlib requests_oauthlib
deactivate

mkdir python
# Using Python 3.9
cp -r ./v-env/lib64/python3.9/site-packages/* python/.
zip -r requests_oauthlib_layer.zip python

# Or manually upload the zip under AWS Lambda > Layers
aws lambda publish-layer-version --layer-name requests_oauthlib --zip-file fileb://requests_oauthlib_layer.zip --compatible-runtimes python3.9

With the layer added, navigate to AWS Lambda and create a new function. Select Python as the runtime with the same version used above. The function requires Describe* permissions for EC2 as well as the general AWS Lambda role.
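
How you grant those permissions depends on your IAM setup. One approach, sketched below, is to attach the AWS managed AmazonEC2ReadOnlyAccess policy (which includes ec2:Describe*) to the function's execution role; the role name is hypothetical.

# Hypothetical role name; grants the execution role ec2:Describe* access
aws iam attach-role-policy \
    --role-name jump-item-cleanup-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess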

Once the function is created, replace the contents of the generated lambda_function.py file with this script:

import boto3
import os
from oauthlib.oauth2 import BackendApplicationClient
from requests_oauthlib import OAuth2Session

ec2 = boto3.client('ec2', region_name=os.environ.get('AWS_REGION'))

BT_CLIENT_ID = os.environ.get('BT_CLIENT_ID')
BT_CLIENT_SECRET = os.environ.get('BT_CLIENT_SECRET')
BT_API_HOST = os.environ.get('BT_API_HOST')

class API:
    def __init__(self) -> None:
        self.client = BackendApplicationClient(client_id=BT_CLIENT_ID)
        self.oauth = OAuth2Session(client=self.client)
        self.token = 'bad'  # placeholder; replaced by refreshToken() after the first 401

    def call(self, method, url, headers=None, data=None, **kwargs):
        def reload_token(r, *args, **kwargs):
            if r.status_code == 401:
                self.refreshToken()
                return self.call(method, url, headers=headers, data=data, **kwargs)
            elif r.status_code > 400:
                r.raise_for_status()

        d = data if method != 'get' else None
        p = data if method == 'get' else None
        resp = self.oauth.request(
            method,
            f"https://{BT_API_HOST}/api/config/v1/{url}",
            headers=headers, json=d, params=p, hooks={'response': reload_token}, **kwargs)

        resp.raise_for_status()

        return resp

    def refreshToken(self) -> None:
        self.token = self.oauth.fetch_token(
            token_url=f"https://{BT_API_HOST}/oauth2/token",
            client_id=BT_CLIENT_ID,
            client_secret=BT_CLIENT_SECRET
        )

client = API()

def lambda_handler(event, context):
    instances = ec2.describe_instances(
        Filters=[
            {'Name': 'instance-state-name', 'Values': ['terminated']}
        ]
    )
    data = []
    
    for r in instances['Reservations']:
        for inst in r['Instances']:
            print(inst)
            d = {
                'id': inst['InstanceId'],
                'state': inst['State'],
                'ip': inst.get('PublicIpAddress'),
                'name': [x['Value'] for x in inst.get('Tags', []) if x['Key'] == 'Name'],
            }
            response = client.call('get', 'jump-item/shell-jump', data={'tag': inst['InstanceId']})
            items = response.json()
            if len(items) > 0:
                item = items[0]
                d['data'] = item
                client.call('delete', f'jump-item/shell-jump/{item["id"]}')
            data.append(d)
            
    return {
        'statusCode': 200,
        'body': data
    }


Next, scroll to the bottom of the page to the Layers panel. Click Add a layer and select the layer that was created above.

This script is designed to read the BT API information from the environment. You must add the BT_API_HOST, BT_CLIENT_ID, and BT_CLIENT_SECRET configuration variables under Configuration > Environment variables.
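
If you prefer the CLI to the console, the same variables can be set with a command like the one below; the function name is hypothetical and the values are placeholders.

# Hypothetical function name and placeholder values
aws lambda update-function-configuration \
    --function-name jump-item-cleanup \
    --environment "Variables={BT_API_HOST=XXX,BT_CLIENT_ID=XXX,BT_CLIENT_SECRET=XXX}"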

Configuring EventBridge

Navigate to Amazon EventBridge > Rules and click Create rule. Name the rule, ensure it is enabled, select Rule with an event pattern, and click Next.

To build the event pattern, choose the AWS Events or EventBridge partner events option in the Event source panel, and then scroll down to the Event pattern panel. Select the Custom patterns (JSON Editor) option, paste the following pattern, and click Next.

{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Instance State-change Notification"],
  "detail": {
    "state": ["terminated"]
  }
}

For the event target, select AWS Service, then pick Lambda function from the dropdown. For function, select the name of the Lambda created above. Finish creating the rule definition.
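
The rule and target can also be created from the CLI. The sketch below assumes the pattern above is saved to pattern.json; the rule and function names, region, and account ID in the ARNs are hypothetical.

# Hypothetical names and ARNs; create the rule, point it at the Lambda,
# and allow EventBridge to invoke the function
aws events put-rule --name jump-item-cleanup --event-pattern file://pattern.json
aws events put-targets --rule jump-item-cleanup \
    --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:jump-item-cleanup"
aws lambda add-permission --function-name jump-item-cleanup \
    --statement-id eventbridge-invoke --action lambda:InvokeFunction \
    --principal events.amazonaws.com \
    --source-arn "arn:aws:events:us-east-1:123456789012:rule/jump-item-cleanup"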

Finished

Once the rule and lambda are in place, the lambda is invoked whenever an EC2 instance moves to the terminated state, and the corresponding Jump Item is removed from the Jump Item list.
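
You can also invoke the function manually to confirm it runs; the function name is hypothetical. The handler ignores the incoming event and queries EC2 itself, so no payload is needed.

# Hypothetical function name; the handler queries EC2 for terminated instances itself
aws lambda invoke --function-name jump-item-cleanup out.json
cat out.json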

Scripting a new setup

The script below runs through a more complicated automated process. This script sets up the given instance to be a Jumpoint for a VPC and creates a new Jump Group and SSH key in Vault for the VPC. It then grants access to these new resources to a given Group Policy.

This script assumes an Ubuntu Server instance.

Amazon Linux AMIs are not supported as Jumpoint hosts. Jumpoint hosts require GLIBC 2.27 and the Amazon Linux AMIs support only 2.26.

#!/bin/bash
set -euo pipefail
set -x

# SRA API Credentials
export BT_CLIENT_ID=XXX
export BT_CLIENT_SECRET=XXX
export BT_API_HOST=XXX

# Set to the ID of the Group Policy to tie everything together
GROUP_POLICY_ID=XXX

# Set this to the user account for this instance
TARGET_USER=ubuntu
# Query AWS metadata for this instance for data needed when creating items later
INSTANCE_IP=`curl http://169.254.169.254/latest/meta-data/public-ipv4`
macid=$(curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/)
# Using the VPC ID as the base for all our names
NAME_BASE=$(curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/${macid}/vpc-id)

HOME=${HOME:=/home/$TARGET_USER}

# For running as a user
JUMPOINT_BASE_DIR="$HOME/.beyondtrust/jumpoint"
SYSTEMD_DIR="$HOME/.config/systemd/user"
SYSTEMD_ARGS=--user
JUMPOINT_USER=""

if [ "$(whoami)" == "root" ]; then
    # For running as root
    JUMPOINT_BASE_DIR="/opt/beyondtrust/jumpoint"
    SYSTEMD_DIR="/etc/systemd/system"
    SYSTEMD_ARGS=""
    JUMPOINT_USER="--user $TARGET_USER"
fi

# Make the command calls a bit easier to write
ORIG_PATH=$PATH
cwd=$(pwd)
export PATH=$cwd:$PATH

# Ubuntu server does not have unzip by default
sudo apt update
sudo apt install -y unzip

# Download jq into the current directory for ease of parsing JSON responses
curl -L https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64 -o jq
chmod +x jq
curl -o btapi.zip -L https://$BT_API_HOST/api/config/v1/cli/linux
unzip btapi.zip

# Create a Jumpoint for this VPC
jp=$(echo "
name=$NAME_BASE
platform=linux-x86
shell_jump_enabled=True
" | btapi -k add jumpoint)

jpid=$(echo "$jp" | jq '.id')

echo "Created Jumpoint with id [$jpid]"

# Download and run the Jumpoint installer
installer=$(btapi download "jumpoint/$jpid/installer" | jq -r '.file')
chmod +x "$installer"
# Make sure the base install directory exists
mkdir -p "$JUMPOINT_BASE_DIR"

# IMPORTANT: Make sure your linux distro has all the packages needed to install 
# the Jumpoint. Ubuntu server 22 needs these two
sudo apt install -y libxkbcommon0 fontconfig
sh "$installer" --install-dir "$JUMPOINT_BASE_DIR/$BT_API_HOST" $JUMPOINT_USER

# Make sure the systemd service directory exists (mostly for the user mode directory)
mkdir -p "$SYSTEMD_DIR"

# Create the systemd service file
echo "[Unit]
Description=BeyondTrust Jumpoint Service
Wants=network.target
After=network.target

[Service]
Type=forking
ExecStart=$JUMPOINT_BASE_DIR/$BT_API_HOST/init-script start" > "$SYSTEMD_DIR/jumpoint.$BT_API_HOST.service"

if [ "$(whoami)" != "$TARGET_USER" ]; then
    echo "User=$TARGET_USER" >> "$SYSTEMD_DIR/jumpoint.$BT_API_HOST.service"
fi

echo "
Restart=no
WorkingDirectory=$JUMPOINT_BASE_DIR/$BT_API_HOST

[Install]
WantedBy=default.target
" >> "$SYSTEMD_DIR/jumpoint.$BT_API_HOST.service"

# Load the Jumpoint service and start it
systemctl $SYSTEMD_ARGS daemon-reload
systemctl $SYSTEMD_ARGS start "jumpoint.$BT_API_HOST.service"

# Cleanup the installer file
rm -f "$installer"

# Create a Jump Group for this VPC
jg=$(echo "
name=\"$NAME_BASE Jump Group\"
" | btapi -k add jump-group)

jgid=$(echo "$jg" | jq '.id')

# Create an SSH Key for this VPC and add the private key to Vault
# NOTE: you will need to manually associate this credential with the
# Jump Group for this VPC in /login
ssh-keygen -f "./key" -P "" -q -t ed25519
touch "/home/$TARGET_USER/.ssh/authorized_keys"
cat ./key.pub >> "/home/$TARGET_USER/.ssh/authorized_keys"
priv=$(cat ./key)

vk=$(echo "
type=ssh
name=\"$NAME_BASE SSH\"
username=$TARGET_USER
private_key=\"$priv\"
" | btapi -k add vault/account)

vkid=$(echo "$vk" | jq '.id')

# Cleanup the key
rm -f ./key
rm -f ./key.pub

# Create an SSH Jump Item back to this instance
echo "
name=\"$NAME_BASE Jumpoint\"
hostname=$INSTANCE_IP
jump_group_id=$jgid
jump_group_type=shared
username=$TARGET_USER
protocol=ssh
port=22
terminal=xterm
jumpoint_id=$jpid
" | btapi -k add jump-item/shell-jump

# Modify the Group Policy to grant access to the Jumpoint, Jump Group and Vault Account
echo "jumpoint_id=$jpid" | btapi -k add group-policy/$GROUP_POLICY_ID/jumpoint
echo "jump_group_id=$jgid" | btapi -k add group-policy/$GROUP_POLICY_ID/jump-group
echo "
account_id=$vkid
role=inject
" | btapi -k add group-policy/$GROUP_POLICY_ID/vault-account

# Cleanup the tools downloaded at the top of this script
rm -f jq
rm -f btapi
rm -f btapi.zip

# Reset PATH
export PATH=$ORIG_PATH

Integrate an external chatbot with Remote Support

In this use case, a company wants a chatbot to provide the initial support to its users, but also to be able to elevate the session to Remote Support, where a representative can take over the call. This use case illustrates how various APIs can work together, and is designed to work in three phases, outlined below.

Phase 1: user interacts with the chatbot

The user receives support directly from the chatbot. Remote Support and its APIs are not involved in this phase. When the chatbot determines that a representative should be involved, the process moves to Phase 2.

Phase 2: the chatbot begins a session in Remote Support

In this phase, the chatbot connects the customer to a representative in Remote Support. Some of the steps in this phase are optional and can be customized based on how the interaction best fits into the desired workflow. The interaction should move to Phase 3 when the user needs to interact with Remote Support directly and the chatbot should no longer be involved.

  1. Start a session in Remote Support by calling the create_virtual_customer action of the Command API.
  2. (Optional) Place the relevant chat history from the chatbot interaction into the session for the Representative to review. There are two ways to do this:
    • Call the send_chat_message action of the Command API for each chat message to add it to the session’s chat history.
    • Or, add the entire transcript to a custom attribute on the session using the set_session_attributes action of the Command API.
  3. (Optional) Proxy messages between the customer and the representative. This allows the customer to stay in the same context, but chat with the Representative instead of the chatbot.
    • To send messages from the user to the Representative, the bot should use the send_chat_message action of the Command API.
    • To display chat messages from the Representative to the user, the chatbot should be configured to receive the Someone Sends a Chat Message outbound event and look for inbound messages for the user’s session.

Phase 3: the user interacts with the representative in Remote Support

To move into this phase, the chatbot should direct the user to the URL given as part of the create_virtual_customer response. Once the user has been directed to that URL, the session is fully in Remote Support and the chatbot is no longer involved. Any remaining chatbot UI can be closed at this point.

If the chatbot interface is web based, the create_virtual_customer call can optionally return a click-to-chat URL. In this case, the chatbot can simply redirect to that URL within the same window, and the user will transition to Remote Support’s click-to-chat interface.