# Session Audit

During an audit, your security team is asked to provide evidence of activity performed during privileged access. Team members can show who requested access and who approved it, but demonstrating what actually occurred during that access requires pulling data from multiple systems.

To answer this request, the team gathers access requests, approval records, and infrastructure logs from multiple systems, then reconstructs the timeline of events manually, often while the next review is already underway. This process is slow, difficult to validate, and can leave gaps in audit evidence. As a result, responding quickly and accurately to compliance requirements such as SOC 2, PCI-DSS, or HIPAA becomes more difficult.

Apono’s Session Audit records activity performed during privileged access sessions. When enabled, it captures text-based session activity:

* Actions performed by a user
* When those actions occurred
* Who approved the user's access to the affected resource
* Which access flow allowed access

Apono delivers that data into your customer-owned storage for compliance evidence and reporting. Sensitive session data remains under your control and is not persisted in Apono systems.

{% hint style="info" %} <mark style="color:$primary;">**Scope and limitations**</mark>

**Scope**

Session Audit captures SSH session activity through an Apono connector in AWS environments. It supports the `ssh` and `aws-ec2-ssh` integrations.

**Limitations**

Session Audit does not support the following:

* Session replay or video
* Real-time monitoring or alerts
* Command blocking or enforcement
* Non-AWS cloud providers (GCP, Azure)
* Full-text search across session content
* Terminal environments with limited or no compatibility (for example, Warp)
* Interactive terminal sessions or commands that obscure input/output streams (for example, `screen`, `tmux`, `vi`)
  {% endhint %}

***

### How Session Audit works

When Session Audit is enabled, user connections are routed through the Apono connector instead of connecting directly to the target resource.

The sequence is:

1. A user is granted access to a resource.
2. Apono generates connection details that point to the connector.
3. The connector routes the session to the target resource.
4. The connector captures session activity as the session passes through it.
5. The connector sends the captured data to two different destinations: customer-managed storage (raw session data) and Apono (session metadata).
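In practice, this means the user's SSH client targets the connector endpoint rather than the host itself. A minimal sketch of what the generated connection details resolve to, assuming the connector listens on port `10022`; the endpoint and username below are hypothetical, and Apono generates the actual connection details for you:

{% code overflow="wrap" %}

```bash
# Hypothetical values for illustration; Apono supplies the real ones
CONNECTOR_ENDPOINT="connector.internal.example.com"
TARGET_USER="alice"

# The session is established against the connector, which proxies
# (and records) traffic to the target resource
SSH_CMD="ssh -p 10022 ${TARGET_USER}@${CONNECTOR_ENDPOINT}"
echo "$SSH_CMD"
```

{% endcode %}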

#### Data storage model

<table><thead><tr><th width="241.717041015625">Data Type</th><th>Details</th></tr></thead><tbody><tr><td><strong>Raw session data</strong></td><td><p>Includes session activity such as commands, outputs, and session lifecycle events</p><p><strong>Storage</strong>: customer-managed S3 bucket</p></td></tr><tr><td><strong>Session metadata</strong></td><td><p>Includes identifying and operational context such as session ID, user, resource, protocol, timestamps, and request ID</p><p><strong>Storage</strong>: Apono</p></td></tr></tbody></table>

This data separation allows Apono to provide fast filtering and reporting using metadata, while keeping full session content in customer-controlled storage.

***

### Prerequisites

<table><thead><tr><th width="248.99130249023438">Item</th><th>Description</th></tr></thead><tbody><tr><td><a href="https://docs.apono.io/docs/additional-integrations/network-management/ssh-servers"><strong>SSH Servers</strong></a> integration</td><td>SSH Apono integration within an AWS environment</td></tr><tr><td><strong>Apono connector</strong></td><td><p>On-prem connection serving as a bridge between an SSH server and Apono</p><p><strong>Required Version</strong>: 1.7.8</p><p>Learn how to update an existing <a href="/pages/cNMceTvopbZdVqebcrk5">AWS connector</a>.</p></td></tr><tr><td><strong>Connector endpoint</strong></td><td>DNS address used to establish audited SSH sessions</td></tr></tbody></table>

***

### Connector configuration

The Apono connector should be configured according to your deployment type:

* Use the [**EKS**](#eks) steps for Kubernetes-based connectors.
* Use the [**ECS**](#ecs) steps for ECS-based connectors.

Both paths prepare the connector, network access, and S3 permissions required for Session Audit.

{% tabs %}
{% tab title="EKS" %}
Follow these steps to configure the connector:

1. Verify AWS Load Balancer Controller is installed on the EKS cluster.

{% hint style="info" %}
If it is not installed, install the [AWS Load Balancer Controller](https://docs.aws.amazon.com/eks/latest/userguide/lbc-helm.html) before continuing.
{% endhint %}

2. Obtain the EKS cluster OIDC provider.

{% code overflow="wrap" expandable="true" %}

```bash
export OIDC_PROVIDER=$(aws eks describe-cluster \
  --name <EKS_CLUSTER_NAME> \
  --query "cluster.identity.oidc.issuer" \
  --output text | sed 's#^https://##')

echo $OIDC_PROVIDER
```

{% endcode %}

3. Save the following trust policy as **apono-connector-trust-policy.json**.

{% code overflow="wrap" %}

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/<OIDC_PROVIDER>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "<OIDC_PROVIDER>:sub": "system:serviceaccount:<NAMESPACE>:apono-connector-service-account"
        }
      }
    }
  ]
}
```

{% endcode %}

4. Create or update the IAM role trust policy.

{% hint style="info" %}
The connector uses IAM Roles for Service Accounts (IRSA) so that its EKS pods can access AWS services through this IAM role. The trust policy above allows only the connector service account in this namespace to assume the role via OIDC.

Replace `<AWS_ACCOUNT_ID>`, `<OIDC_PROVIDER>`, and `<NAMESPACE>` before saving the file. The OIDC provider value should **not** include `https://`.
{% endhint %}
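One way to avoid hand-editing the placeholders is to render the file from shell variables instead. A minimal sketch; the account ID, OIDC provider, and namespace below are illustrative values, not defaults:

{% code overflow="wrap" expandable="true" %}

```bash
# Illustrative values; replace with your own before use
AWS_ACCOUNT_ID="111122223333"
OIDC_PROVIDER="oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890"
NAMESPACE="apono"

# Render the trust policy with the placeholders filled in
cat > apono-connector-trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:${NAMESPACE}:apono-connector-service-account"
        }
      }
    }
  ]
}
EOF

# Confirm the rendered file is valid JSON
python3 -m json.tool apono-connector-trust-policy.json > /dev/null && echo "trust policy OK"
```

{% endcode %}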

{% tabs %}
{% tab title="Create an IAM role" %}
{% code overflow="wrap" %}

```shellscript
aws iam create-role \
  --role-name "${CONNECTOR_ROLE_NAME}" \
  --assume-role-policy-document file://apono-connector-trust-policy.json
```

{% endcode %}
{% endtab %}

{% tab title="Update an IAM role" %}
{% code overflow="wrap" %}

```shellscript
aws iam update-assume-role-policy \
  --role-name "${CONNECTOR_ROLE_NAME}" \
  --policy-document file://apono-connector-trust-policy.json
```

{% endcode %}
{% endtab %}
{% endtabs %}

5. Confirm that the role trust policy is configured correctly.

{% code overflow="wrap" %}

```shellscript
aws iam get-role \
  --role-name "${CONNECTOR_ROLE_NAME}"
```

{% endcode %}

6. Create and attach the [S3 write policy](#create-a-policy) to the role created or updated above.
7. [Create and configure an S3 bucket](#create-an-s3-bucket).
8. Add the following configuration to your Helm values file, for example **values.yaml**. The `proxyService` block provisions a Network Load Balancer (NLB) that exposes the Apono connector on ports `10020`-`10024`.

{% hint style="success" icon="lightbulb" %}
To control which subnets are used for the NLB, set `service.beta.kubernetes.io/aws-load-balancer-subnets` to a comma-separated list of subnet IDs with no spaces (`"subnet-XXX,subnet-XXX"`).
{% endhint %}

{% code overflow="wrap" %}

```yaml
# values.yaml
apono:
  token: "${APONO_TOKEN}"
  connectorId: "${CONNECTOR_ID}"

serviceAccount:
  manageClusterRoles: true

proxyService:
  enabled: true
  type: LoadBalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
    service.beta.kubernetes.io/aws-load-balancer-subnets: "${SUBNET_IDS}"
```

{% endcode %}

9. Upgrade the Apono connector Helm chart to v2.0.34 with the values file.

{% code overflow="wrap" %}

```shellscript
helm upgrade --install apono-connector apono-connector \
  --repo https://apono-io.github.io/apono-helm-charts \
  --version v2.0.34 \
  -f values.yaml
```

{% endcode %}

10. Annotate the connector service account with the IAM role ARN so the connector can authenticate to S3.

{% code overflow="wrap" %}

```shellscript
kubectl annotate serviceaccount apono-connector-service-account \
  -n <NAMESPACE> \
  eks.amazonaws.com/role-arn=<ROLE_ARN> \
  --overwrite
```

{% endcode %}

11. Verify that your VPN, security groups, and internal DNS allow developer machines to resolve and connect to the NLB DNS name on port `10022`.

{% hint style="info" %}
The NLB is provisioned as internal by default.
{% endhint %}
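A quick way to check this from a developer machine is to probe the NLB DNS name on the session port. A sketch, assuming `netcat` is available; the DNS name below is a placeholder for your NLB's DNS name:

{% code overflow="wrap" %}

```bash
# Placeholder DNS name; use the DNS name of the NLB created for the connector
NLB_DNS="internal-apono-connector.example.elb.amazonaws.com"
PORT=10022

# Probe TCP reachability through your VPN and security groups
if command -v nc >/dev/null 2>&1 && nc -z -w 5 "$NLB_DNS" "$PORT"; then
  echo "${NLB_DNS}:${PORT} is reachable"
else
  echo "${NLB_DNS}:${PORT} is NOT reachable: check VPN, security groups, and DNS"
fi
```

{% endcode %}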
{% endtab %}

{% tab title="ECS" %}
Follow these steps to configure the connector:

1. Provision a Network Load Balancer (NLB) in front of the ECS connector on port `10022`. **Apono will provide a revised ECS template for this step.**
2. Create and attach the [S3 write policy](#create-a-policy).
3. [Create and configure an S3 bucket](#create-an-s3-bucket).
4. Verify that your VPN, security groups, and internal DNS allow developer machines to resolve and connect to the NLB DNS name on port `10022`.
   {% endtab %}
   {% endtabs %}

#### Create a policy

Follow these steps:

1. Save the following S3 write policy locally as **apono-connector-s3-session-audit-policy.json**.

{% code overflow="wrap" %}

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ConnectorS3ReadWrite",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::your-org-apono-*/*",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalTag/apono-connector-s3-access": "true",
          "aws:PrincipalOrgID": "${PRINCIPAL_ORG_ID}"
        }
      }
    },
    {
      "Sid": "ConnectorS3ListBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::your-org-apono-*",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalTag/apono-connector-s3-access": "true",
          "aws:PrincipalOrgID": "${PRINCIPAL_ORG_ID}"
        }
      }
    }
  ]
}
```

{% endcode %}

2. Create the policy in AWS. The output will include a policy ARN that resembles: `arn:aws:iam::<AWS_ACCOUNT_ID>:policy/apono-connector-s3-session-audit-policy`.

{% code overflow="wrap" %}

```shellscript
aws iam create-policy \
  --policy-name apono-connector-s3-session-audit-policy \
  --policy-document file://apono-connector-s3-session-audit-policy.json
```

{% endcode %}

3. Copy the ARN. This will be used to attach the policy to the IAM role.
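The ARN follows a predictable shape, so it can also be reconstructed from your account ID rather than copied from the command output. A sketch with a hypothetical account ID:

{% code overflow="wrap" %}

```bash
# Hypothetical account ID; substitute your own
AWS_ACCOUNT_ID="111122223333"
POLICY_NAME="apono-connector-s3-session-audit-policy"

# IAM policy ARNs are account-scoped and have no region segment
POLICY_ARN="arn:aws:iam::${AWS_ACCOUNT_ID}:policy/${POLICY_NAME}"
echo "$POLICY_ARN"
```

{% endcode %}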
4. Tag the connector IAM role so the bucket policy condition matches.

{% code overflow="wrap" %}

```shellscript
aws iam tag-role \
  --role-name "<CONNECTOR_ROLE_NAME>" \
  --tags "Key=apono-connector-s3-access,Value=true"
```

{% endcode %}

5. Attach the policy to the existing connector IAM role.

{% code overflow="wrap" %}

```shellscript
aws iam attach-role-policy \
  --role-name "<CONNECTOR_ROLE_NAME>" \
  --policy-arn "arn:aws:iam::<AWS_ACCOUNT_ID>:policy/apono-connector-s3-session-audit-policy"
```

{% endcode %}

#### Create an S3 bucket

Follow these steps:

1. [Create an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) with the following recommended configurations:
   * Public access blocked
   * Ownership controls set to `BucketOwnerEnforced`
   * Server-side encryption set to SSE-S3 (`AES256`) or SSE-KMS for customer-managed keys
   * Versioning enabled
   * Tagged with `apono-connector-s3-access: true`
2. If your bucket and Apono connector are not in the same AWS account, save the following bucket policy locally as **apono-session-audit-bucket-policy.json**.

{% code overflow="wrap" expandable="true" %}

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowConnectorAccessByTag",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::<BUCKET_NAME>/*",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalTag/apono-connector-s3-access": "true",
          "aws:PrincipalOrgID": "<PRINCIPAL_ORG_ID>"
        }
      }
    },
    {
      "Sid": "AllowConnectorListBucketByTag",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<BUCKET_NAME>",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalTag/apono-connector-s3-access": "true",
          "aws:PrincipalOrgID": "<PRINCIPAL_ORG_ID>"
        }
      }
    },
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::<BUCKET_NAME>",
        "arn:aws:s3:::<BUCKET_NAME>/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}
```

{% endcode %}
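Rather than editing the saved file by hand, the placeholders can be substituted with `sed`. A sketch with hypothetical values, shown against a one-statement excerpt so it is self-contained; run the same `sed` against your full policy file:

{% code overflow="wrap" %}

```bash
# Hypothetical values; substitute your bucket name and organization ID
BUCKET_NAME="your-org-apono-session-audit"
PRINCIPAL_ORG_ID="o-a1b2c3d4e5"

# One-statement excerpt with the same placeholders, kept inline for illustration
cat > bucket-policy.template.json <<'EOF'
{
  "Sid": "AllowConnectorAccessByTag",
  "Resource": "arn:aws:s3:::<BUCKET_NAME>/*",
  "Condition": {
    "StringEquals": { "aws:PrincipalOrgID": "<PRINCIPAL_ORG_ID>" }
  }
}
EOF

# Substitute every occurrence of both placeholders
sed -e "s|<BUCKET_NAME>|${BUCKET_NAME}|g" \
    -e "s|<PRINCIPAL_ORG_ID>|${PRINCIPAL_ORG_ID}|g" \
    bucket-policy.template.json > bucket-policy.rendered.json

cat bucket-policy.rendered.json
```

{% endcode %}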

3. Apply the bucket policy from the AWS account that owns the S3 bucket to grant the Apono connector permissions to the bucket.

{% code overflow="wrap" %}

```bash
aws s3api put-bucket-policy \
  --bucket <BUCKET_NAME> \
  --policy file://apono-session-audit-bucket-policy.json
```

{% endcode %}

4. Confirm that the bucket policy was applied.

{% code overflow="wrap" %}

```shellscript
aws s3api get-bucket-policy \
  --bucket <BUCKET_NAME>
```

{% endcode %}

:arrow\_up: Return to [EKS](#eks) or [ECS](#ecs).

***

### Enable Session Audit

You must enable Session Audit in both the Apono connector and the SSH integration.

Once the [connector configuration](#connector-configuration) is complete, **notify your Apono contact to enable the feature together.**

{% hint style="info" %}
After Session Audit has been enabled, you can review and download session information from the [**Session History**](/docs/audits-and-reports/session-audit/session-history.md) tab.
{% endhint %}

#### Connector enablement

<figure><img src="/files/PI27r3PT786gGqcjD14J" alt="" width="563"><figcaption><p>Edit the connector page</p></figcaption></figure>

Follow these steps to enable Session Audit for the connector:

1. On the [**Connectors**](https://app.apono.io/connectors) tab, in the row of the Apono connector associated with the integration, click **︙ > Edit**. The **Edit Connector** page appears.
2. Toggle **Audit sessions** to **ON**. The toggle will appear green when enabled.
3. Under **Session History Bucket ARN**, enter your S3 bucket ARN.
4. Enter the **Connector Endpoint**.
5. Click **Update Connector**.
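The bucket ARN entered in step 3 is simply the bucket name prefixed with the S3 ARN namespace; S3 bucket ARNs contain no region or account ID segment. A sketch with a hypothetical bucket name:

{% code overflow="wrap" %}

```bash
# Hypothetical bucket name; substitute your own
BUCKET_NAME="your-org-apono-session-audit"

# S3 bucket ARNs omit the region and account ID fields
BUCKET_ARN="arn:aws:s3:::${BUCKET_NAME}"
echo "$BUCKET_ARN"
```

{% endcode %}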

#### Integration enablement

<figure><img src="/files/HUEwFlIGdRf9dwdbTUHv" alt="" width="563"><figcaption><p>Audit sessions toggle</p></figcaption></figure>

Follow these steps to enable Session Audit for the integration:

1. On the [**Connected**](https://app.apono.io/catalog/connected) tab, in the row of your SSH integration, click **︙ > Edit**. The **Edit Integration** page appears.
2. Under **Get more with Apono**, toggle **Audit sessions** to **ON**. The toggle will appear green when enabled.

