
Documentation and Guides


Access Discovery

Discover unused permissions and enforce least privilege

Imagine an IAM role created for a staging service. Over time, it was granted administrator access to production. The staging service was later deprecated, and the role has gone unused for months. Yet its permissions remain active. Now multiply that scenario across hundreds or thousands of identities in your cloud environment. How do you find and fix this kind of unused, overly permissive access at scale?

Access Discovery helps you identify and remediate standing access across cloud environments. It combines access analytics, usage tracking, and policy-based recommendations to support least privilege for both human and machine identities.

At the core of Access Discovery is the concept of a principal, a digital identity with cloud access. This includes IAM users, roles, service accounts, and programmatic credentials. Each principal is assigned one or more policies, which define its permissions, or the specific actions it can perform.

Access Discovery helps you assess and reduce access risk by:

  • Categorizing permissions by privilege level, from low-risk LIST/READ to high-risk Admin/IAM controls

  • Tracking whether principals are active or dormant

  • Scoring each principal based on its permissions, resource sensitivity, and usage

  • Flagging overprivileged principals for targeted remediation

With these insights, you can focus on what matters most: removing unused admin access, quarantining inactive accounts, and right-sizing policies without disrupting legitimate workflows.
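As a toy illustration of this kind of scoring (illustrative only, not Apono's actual algorithm), combining privilege level with dormancy surfaces the riskiest principals first — the dormant admin role from the opening example floats to the top:

```shell
# Toy risk scoring for principals (not Apono's algorithm):
# higher privilege and longer dormancy both raise the score.
cat > principals.csv <<'EOF'
name,privilege,days_unused
staging-role,admin,180
ci-user,read,2
backup-svc,write,95
EOF

awk -F, 'NR > 1 {
  priv  = ($2 == "admin") ? 50 : (($2 == "write") ? 20 : 5)
  usage = ($3 > 90) ? 50 : (($3 > 30) ? 20 : 0)
  printf "%s %d\n", $1, priv + usage
}' principals.csv | sort -k2 -nr
```

Sorting descending by score puts the unused admin role first, which is exactly the kind of principal Access Discovery flags for targeted remediation.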

Glossary

Commonly used Apono terms

Term
Meaning

Access Flow

A dynamic flow to manage and control access. The Access Flow, set by the admin, determines the:

  • Requester (the user or group of users)

  • Resource or bundle of resources

  • Permission or permissions

  • Approval flow (automatic or by approver)

  • Access duration

Visit the Access Flows page to see how easily an Access Flow definition is created with step-by-step instructions.

Access Request

Users request access to resources controlled by Apono's Access Flows using Slack, Teams or CLI. This Access Request is either automatically approved or sent to the flow's approver, who must then either approve or reject it.

Every access request is fully logged and auditable.

Admin role

Admins are users in Apono who integrate Apono with their environment and create and manage Access Flows.

Approver

A user, group of users, manager or shift member who has been listed on a specific Access Flow as those who must approve or reject an access request.

Bundle

A bundle is a combination of resources and permissions, grouped together so that they can be easily requested and granted together.

Bundles are great for:

  • Dynamic management - Admins can create a bundle once and use it in different Access Flows with different requesters, approval flows, and access durations.

  • Ease of use - Requesters can request a bundle of access for the task or incident they are currently handling.

Security and Architecture

Apono helps you manage just-in-time access in a secure, least privilege way

Overview

Apono was built and designed with security in mind so that any company is able to use it in their environment.

We applied the same least privilege principles to our product that Apono unlocks for its users:

  • Ensure users receive just the right amount of permissions they need

  • Ensure users receive access only for the limited time they need it

Security

Apono's secure architecture

The Apono platform is built from two separate components:

  • The Web App

  • The Connector

The web app continuously receives basic data about users, resources and permissions from the connector.

The connector is fully deployed within the organization's environment and has a limited set of template functions that can be invoked, all fully under the organization's control.

This architecture ensures high reliability as well as segregation of environments, keeping any access to the environment within the environment.

The Web App security

Our web app is a portal for admins to create and manage integrations and Access Flows.

The portal:

  • Can only be accessed by admins of the system who have authenticated using the organizational identity provider.

  • Doesn't require access to the organization's environment resources. No roles, permissions, privileges, or actions are granted to the app.

  • Integrates with the organizational identity provider as the source of truth for the organizational identities.

The Connector security

Our connector is a component you install in your cloud environment (AWS, GCP, Azure, Kubernetes). It communicates with your cloud services and cloud apps using, but not caching or storing, your secrets.

The connector:

  • Is completely within the organization's control, as it is installed in your cloud provider.

  • Can be uninstalled or disconnected at any time without support from Apono.

  • Uses fully visible template functions, mutable by the organization’s environment owner. These functions limit the ability of the connector to only invoke specific actions that are predefined.

👍 The Apono Connector is highly available

No downtime, no outages, no problem!

Our round-robin method helps ensure uptime for your Apono integrations as users request access. Several connector instances will continue provisioning and deprovisioning access as needed.

Your data

When you integrate your cloud applications and IdP with Apono, Apono syncs metadata and configuration information continuously. We only sync basic information needed for access management: users, groups, resources and permissions.

Apono:

  • Does not read your data, like datasets, files, documents, code, etc.

  • Does not collect any personal data about your employees, Apono only requires a user's email address.

  • Does not store or cache secrets or credentials.

Secrets

Apono does not store or cache any of your secrets.

When a data sync is required, the connector gets the secret from your cloud's Secret Store to access the data it needs. After authenticating, the secret is not saved anywhere.
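This fetch-use-discard pattern can be sketched in shell (illustrative only, not Apono's implementation; `get_secret` stands in for your cloud's secret-manager CLI, e.g. a Secrets Manager call):

```shell
# Illustrative fetch-use-discard pattern (not Apono's implementation).
# get_secret stands in for your cloud's secret-store CLI call.
get_secret() {
  echo "s3cr3t-value"   # placeholder for a real secret-store lookup
}

DB_PASSWORD="$(get_secret)"   # fetched only when a sync or grant is needed
# ...authenticate with "$DB_PASSWORD" here...
unset DB_PASSWORD             # discarded after use; never written to disk
```

The key property is that the secret lives only in the memory of the process that needs it, for the duration of the operation, mirroring how the connector authenticates without caching.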

👍 Credentials rotation as often as you need

When granting access to users, Apono enforces password reset and credentials rotation out of the box to meet the strictest compliance and security standards.

Architecture

Apono and AWS (architecture diagram)

Apono and GCP (architecture diagram)

Apono and Azure (architecture diagram)

How Apono Works

Apono syncs with your apps' data, grants and revokes access

How Apono Works

Your Questions

  1. How does Apono securely integrate with your environment?

  2. How are Access Flows defined and managed?

  3. How do developers request and approve access?

  4. How do admins manage access logs and audit reports?

Great questions, let's get to it:

Integrate with Apono in 3 easy steps

Three easy steps are what it takes to create Just-In-Time and Just Enough permissions for everyone with access to your cloud assets and resources.

1. Install a Connector

Connectors are the components that mediate between Apono and your resources to sync data from cloud applications and grant and revoke access permissions.

The Connector does not read, cache or store any secrets, nor does Apono need an account with admin privileges to function. The Connector contacts your secret store or key vault when it needs to sync data or provision access.

Here's how Connectors work:

2. Integrate With Cloud Apps

After you've installed the Connector, integrate Apono with your cloud applications to sync data on users, groups, resources and permissions.

Apono currently has integrations for 35+ resource types in AWS, GCP, Azure and Kubernetes platforms, as well as development and CI/CD tools, databases, incident response tools, IdPs, ChatOps products, and more. Check the Integrations Catalog for details and to see the latest.

3. Create Access Flows

Create an Access Flow by answering five questions:

  • Who should get access?

  • What can they gain access to?

  • What actions will they be able to perform?

  • How long should they have the access?

  • Who must approve the request?

Fill in the blanks using information from drop-down lists, click Create, and you're done.

Apono is Self-Serve

Apono is completely self-serve! Curious? Try it for yourself (no demo needed)!

  • Connect and disconnect the Apono connector and cloud resources at will

  • Using Terraform? Edit your Terraform .tf file to add Apono access management to your resources

Add Apono to Your IaC Configurations

Whether you use open-source Terraform or the AWS ecosystem, Apono is a recognized provider for both.

Prepare Terraform configuration scripts by referring to the Terraform Installation guide. You will also need the Integrations Metadata to learn what to include in each Apono resource.

Apono's Terraform provider is great for creating and managing integrations, as well as Access Flows!
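As a sketch of what the .tf additions might look like (the provider source address and attribute names here are assumptions for illustration, not Apono's published schema; consult the Terraform Installation guide and Integrations Metadata for the real ones):

```shell
# Hypothetical sketch: declaring the Apono Terraform provider in a .tf file.
# The source address and attribute name below are illustrative assumptions.
cat > apono_example.tf <<'EOF'
terraform {
  required_providers {
    apono = {
      source = "apono-io/apono"   # assumed registry address
    }
  }
}

provider "apono" {
  personal_token = var.apono_token  # assumed attribute name
}
EOF
echo "wrote apono_example.tf"
```

From there, integrations and Access Flows would be declared as ordinary resources in the same configuration.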

Just Add Slack or Teams

Apono is built with DevX in mind. With Apono, developers can:

  • Request access directly in their favorite tool: Slack, Teams, or CLI

  • Gain automatic access without waiting for approval if the Access Flow allows it

  • Get access details directly in Slack, Teams or CLI and use them with ease

No more complex forms, legacy ticketing systems, proxies and clients to install, or hassling your IT department when you need to get work done.

That's why thousands of engineers use Apono for access requests every month!

Audit and Report on Access

Apono automates access logs and audit reports:

  • Every access request and action is fully logged

  • Query logs to get exactly what you need, even with our Public API!

  • Periodic reports and compliance needs? No problem! Create, save, download and schedule reports at will. We'll send them directly to your inbox.

Why Choose Apono

Apono is the best solution for just-in-time, temporary access to sensitive cloud resources

Apono lets you automate static access policies by turning them into declarative, dynamic Access Flows. Integrate your cloud environment, CI/CD stack, cloud infrastructure and databases with Apono. Create Access Flows with our declarative UI or in Terraform, and your developers can use Slack, Teams or CLI to request and approve access.

Protect what matters without breaking a sweat.

Who is Accessing Cloud Resources Right Now?

Connector

Connectors are very small apps added to a cloud service that allow secure data sync and access management functions to be run by Apono.

End-user/Grantee

The person who has been granted access to a resource or resources according to an Access Flow and will actually be using it.

Identities

Users in the organization, synced from your identity provider.

IdP

Identity Provider; a service that stores and manages digital identities. Companies use these services to allow their employees or users to connect with the resources they need. They provide a way to manage access, adding or removing privileges, while security remains tight. Read more here.

Integration

Your cloud applications must be connected with Apono to sync data on identities, resources and permissions and to manage access just-in-time. See the Apono catalog for a complete list of supported integrations.

Just In Time (JIT)

Just In Time refers to the part of the Access Flow that makes a resource available to a user only when they need it and only as long as it is needed. It also means that access isn't forgotten and left available past the time it is used.

You might also have heard the terms short-lived access, ephemeral access or temporary access.

Permission

The type of action users can perform on a resource. Actions are usually grouped into roles; for example, an Admin role usually contains all the possible actions, like read, write, delete, etc.

Some permissions are more powerful than others. For example, a write permission (which allows you to edit a resource) is more powerful than a read permission (which only allows you to view it).

Permissions are at the heart of the Least Privilege principle; permissions (especially strong ones and those that apply to sensitive or critical resources) should be kept to a minimum and be granted only upon need (just-in-time).

RBAC

Role-based access control (RBAC) systems assign access and actions according to a person's role within the system. Everyone who holds that role has the same set of rights. Those who hold different roles have different rights. Read more here.

Resource

A resource is a cloud service or other instance that a user can gain access to: for example, repositories, servers, machines, buckets and databases, but also accounts, projects, folders, clusters, etc. Every cloud application artifact can be a resource, and if it is integrated with Apono, users can request and be granted access to it.

The permission determines which actions the user can perform on the resources.

Resource Type

The resource type is the family the resource belongs to. For example, every S3 bucket instance has a name and path, but all S3 Buckets belong to the S3 Bucket family.


S3 Storage

Integrating Amazon S3 (Simple Storage Service) object storage with Apono enables granular permission provisioning.

This guide has been moved. Please visit this guide instead.

KMS-encrypted buckets

If your organization encrypts S3 Buckets with Customer Managed Keys (KMS keys), users need access to the key to be able to decrypt the data when they gain JIT access to a bucket.

Apono supports this use case by granting access to both the bucket and the key when users request access. If S3 Buckets have KMS keys in their metadata, when users request access to S3 Buckets, they also gain access to the KMS key without having to create an extra request.

Development Tools

Network Management

Manage just-in-time, just-enough access to servers, RDPs, internal apps, and more

  • Doesn't access your data or environment, and only communicates with the Apono connector.

  • Has no permissions to access the data itself.

  • Does not store any secrets.

    Do developers have admin/write access or read-only access to production?

    Can you answer that, or must you sort through your cloud resources to find out? Of course, by the time you get to the last one, you'll have to recheck the first because so much time has elapsed, and access changes constantly. While discussing it, how long would it take to revoke access to a production cloud resource in an emergency?

    With Apono, you have a single point of control for managing access without creating a single point of failure.

    Apono Access: Automated, Just-in-Time, Just-Enough

    Use Apono for on-demand access to critical resources. Grant an engineer permission to fix a production issue in an emergency. Grant a data scientist access to a data lake when needed. Just as important is to revoke access once it's no longer needed.

    Apono's permissions are just-in-time and also ephemeral. Access is automatically revoked when no longer needed. No more forgotten privileges or group memberships left open. Access begins and ends according to Access Flow definition.

    Access Management that Scales

    No need to manually change permissions for each resource on your cloud platform every time someone needs access to one of its resources. While access can be granted at a granular level, large-scale environments can be managed efficiently by creating Access Flows, for individuals and groups, to all cloud resources and assets.

    Your environment is always evolving, and so does Apono. Use hierarchies, tags and exclusions for dynamic access management.

    Apono Integrates with Terraform

    Are you using Terraform to manage your cloud platforms?

    That's great because Apono is a Terraform provider and can be provisioned to work alongside your resources by adding code blocks to integrate them into Apono. When you bring up a resource, it will immediately benefit from Apono access management.

    Apono lets you turn static access policies into dynamic Access Flows directly from Terraform. Reuse a simple build file to build the perfect workflows for your organization without ever leaving Terraform.

    Designed for DevX

    With Apono, you will work smarter with less effort to manage and gain access to your cloud resources. You will take control of your cloud resource inventory from one central location.

    Apono's Access Flows prepare you for contingencies, emergency access and regular maintenance. Onboarding becomes quick and easy with our dynamic Access Flows and access bundles. There's no need to write and maintain home-grown scripts and complex workflows.

    Your developers can request access bundles and get just the access they need exactly when they need it, no hassle.

    Deployed Via Slack and Teams

    Developers and engineers love ChatOps and CLI, so why should they have to use another interface?

    Apono integrates with Slack, Teams and CLI, so your R&D can use the tools they know to request & approve access, connect to the resources, and, after the access is automatically revoked, request the access again when they need it.

    Speaks Your (Declarative) Language

    Apono has developed a declarative, natural language format for defining access permissions. No need to edit config files. We call it Access Flow, and it looks like this:

    Select a resource and then add (a) who is allowed to gain access (b) what kind of access (roles or permissions) to grant, (c) which specific resources in the integration to allow access to, (d) how long the access should last, (e) should access be approved automatically or by someone in the organization.

    In fact, integrating with Apono and creating Access Flows has proven so intuitive that most Apono customers set up and deploy access control for their entire organizations within two weeks.

    Keeps Your CISO Happy

    Apono doesn't have access to any of your data. Ever.

    How does it work? Install our connector in your environment, direct it to your secret store and you're done! The connector manages the data syncs to our app and handles access provisioning and de-provisioning to your services, without storing or caching secrets.

    We call it SaaS with an on-premise level of security. And you can tell your customers that they can be confident that access to their data is protected.

    A Home Run With SOX IT Controls

    Apono's comprehensive access management covers your entire cloud, with Access Flows defined for every cloud service and resource type. Need to maintain least-privileges to production environments, financial data, PII, and other critical assets? Check!

    Access requests and granted access are all logged, so you have a reliable audit of the access to your data. As part of your IT compliance reporting to SOX, HIPAA, GDPR, PCI DSS, SOC 2 and others, use Apono's audit logs and reports. Send them to external auditors, internal GRC and security teams, and export logs directly to ITSM, SIEM and compliance tools.

    The Apono Access Management Life Cycle

    Getting started

    Get started with Apono in 10 minutes to get dynamic, centralized, just-in-time access management for your cloud!


    Get a taste of what Apono can do by signing up (it's free!) and then following our onboarding wizard.

    You will complete 3 steps to see how easy it is for Admins to manage access with dynamic Access Flows, and how intuitive it is for developers and other end users to request and use Apono access just-in-time.

    Try Apono in AWS, then unlock all of your cloud providers and applications for centralized, streamlined access management.

    Step 1

    Install the connector

    What's a connector? What makes it so secure?

    The Apono Connector is an on-prem connection that can be used to connect resources to Apono and separate the Apono app from the environment for maximal security.

    Read more here.

    If you're just getting started with Apono, we recommend using a local connector deployed with a Docker image.

    You can also install a connector in your cloud environment. Read more here.


    You should know:

    1. A local connector is only active as long as the container is running. This means you will have to rerun the command when the container is down.

    2. The local connector leverages your existing AWS Profiles. Make sure you have an AWS Profile with Admin permissions to an AWS account, like playground, staging, dev, etc.

    How to deploy the local connector

    Prerequisites

    • A configured AWS profile in your AWS CLI with these permissions: List and IAM access to the AWS account and resources you want to integrate.

    Necessary permissions policy - LIST

    Necessary permissions policy - IAM
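As a hedged sketch of what a minimal List-style policy document could look like (the action names below are illustrative assumptions, not Apono's published requirements; use the actual policies from the sections above):

```shell
# Illustrative List-style IAM policy document (action names are assumptions).
cat > apono-list-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:ListRoles",
        "iam:ListPolicies",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "*"
    }
  ]
}
EOF
# Validate that the document is well-formed JSON before attaching it.
python3 -m json.tool apono-list-policy.json > /dev/null && echo "policy OK"
```

Validating the JSON locally before attaching it to the profile avoids a failed policy update later.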

    Steps

    1. Go to the Apono app and sign up.

    2. In the catalog, pick AWS.

    3. Pick Account

    4. Install a new connector and pick "Local Installation"

    For Linux/macOS:

    1. Copy the command that appears in the Apono App and run it in your terminal: bash <(curl -s https://apono-public.s3.amazonaws.com/local-connector/install.sh) --apono-token <TOKEN> The <TOKEN> will appear in the one-liner the UI generates for you.

    2. Follow the interactive prompts and assign:

      1. AWS profile: Apono will leverage the permissions of the profile you pick. If you don't specify the profile, press enter and Apono will use the default profile.

    For Windows

    1. Copy the command that appears in the Apono App and run it in your terminal: iex ([System.Text.Encoding]::UTF8.GetString((Invoke-WebRequest -Uri "https://apono-public.s3.amazonaws.com/local-connector/install.ps1" -UseBasicParsing).Content))

    2. Follow the interactive prompts and assign:

      1. The <APONO TOKEN> that appears in the Apono App under the one-liner command.

    Integrate AWS with Apono

    1. Provide the AWS config:

      1. An integration name of your choosing

      2. The region of the account you'd like to integrate

    Step 2

    Create an Access Flow

    An Access Flow is a smart, dynamic access workflow or policy in human-readable language that determines who can request access to what, and what the access duration and approval flow should be. Read more about Access Flows here.

    1. Fill in the Access Flow form:

      1. Click Someone to pick who can request the access. You can pick yourself under Users.

      2. Click Select Target to pick the AWS Account you just connected and the cloud service you'd like to manage access to. Duplicate this line to include more cloud services in the Access Flow.

    2. Click Create Access Flow.

    3. In the next screen, click Request Access and continue to Step 3.

    Step 3

    Request access

    Developers and other end users in the organization will request access according to the Access Flows using Slack, Teams, CLI, or the Apono Web Portal.

    1. Fill in the request form:

      1. Pick the integration

      2. Pick the resource type

    2. Click Request

    Gain and use access

    1. The request will appear on the screen with the status Pending.

    2. Once the connector provisions the access successfully, the status of the request will change to Granted.

    3. Click View access details.

    4. The access details can be used to gain the access you just requested! Test it in AWS!

    5. Click Finish onboarding.


    All done!

    Check out the Apono Activity log to see how Apono reports and audits access requests.

    You can also Revoke the access you were just granted to see how Apono deprovisions access when the access time is up.

    AWS Overview

    Cloud computing has become an essential tool for businesses of all sizes. As a provider of many services and tools, Amazon Web Services (AWS) is a cloud environment supported by Apono.


    The articles in this section will help you connect Apono with your AWS-based resources so that you can effectively manage permissions to these resources.

    Installing a connector with Docker

    To manage access to on-prem resources with Apono, install a connector as a Docker Container

    Intro

    If you want the flexibility of installing the Apono connector on any machine, a Docker container is a great alternative.


    Apono Connector for Azure

    The Apono connector is a secure bridge between Apono's access management platform and your Azure cloud resources. It facilitates data synchronization and manages access permissions across your cloud infrastructure.

    The connector runs within your Azure environment via Azure Container Instances (ACI). This architecture ensures both complete operational control and maximum security.

    After installing the connector, you can integrate your resources with Apono and provide just-in-time access based on access flows.

    Key Features

    • Azure-Native Deployment: Runs as a container instance in your Azure environment using Azure Container Instances (ACI)

    • Complete Organizational Control: Fully managed within your Azure infrastructure

    • Security-First Design: No secret storage or caching

    • Flexible Installation: Can be uninstalled or disconnected at any time without Apono support

    • Limited Scope: Uses predefined template functions that restrict the connector to specific, authorized actions

    Next Step

    Choose your preferred installation method:

    • Install an Azure connector on ACI using Azure CLI

    • Install an Azure connector on ACI using PowerShell

    • Install an Azure connector on ACI using Terraform

    Azure Integrations

    If your organization uses Azure as a cloud platform, Apono can help you securely manage access to your Azure cloud-based services, subscriptions, and resource groups.

    By identifying and transforming existing privileges, Apono can shift your cloud management from broad permissions to on-demand access flows. Through our integrations, Apono enables you to perform the following access tasks:

    • Limit Access: Discover existing privileges in Azure and convert them to just-in-time Access Flows.

    • Enable Self-Service Access: Allow developers to request access to Azure services, buckets, and instances via Slack.

    • Automate Approval Workflows: Create automatic approval processes for sensitive Azure resources.

    • Restrict Third-Party Access: Grant third parties (customers or vendors) time-based access to specific services with MFA verification.

    • Review Access: Audit user cloud access, permissions granted, and reasons for access across Azure.

    AWS Integrations

    If your organization uses Amazon Web Services (AWS) as a cloud platform, Apono's AWS integrations can help you securely manage access to your AWS cloud-based services and databases.

    By identifying and transforming existing privileges, Apono can shift your cloud management from broad permissions to on-demand access flows. Through our AWS integrations, Apono enables you to perform the following access tasks:

    • Limit Access: Discover existing cloud privileges and convert them to just-in-time access flows.

    • Enable Self-Service Access: Allow developers to request access to AWS services, buckets, and instances via Slack.

    • Automate Approval Workflows: Create automatic approval processes for sensitive AWS resources.

    • Restrict Third-Party Access: Grant third parties (customers or vendors) time-based access to specific S3 buckets, RDS, or EC2 instances with MFA verification.

    • Review Access: Audit user cloud access, permissions granted, and reasons for access across AWS.

    Databases and Data Repositories

    Overview

    Is your Data Source a cloud service? If it is, you can use the specific cloud service integration instead.
    Additional notes for the Getting Started walkthrough:

    • If your organization requires MFA, SSO login, VPN login or other security policies, the local connector using your AWS profile will need them to work.

    • Results (Linux/macOS installation): If installed successfully, you will see this message: Installation complete. You can return to the Apono App. Go back to the Apono App and continue to integrate AWS. The local connector should appear on the screen with a green checkmark.

    • Results (Windows installation): If installed successfully, you will see the container ID that started running. Go back to the Apono App and continue to integrate AWS. The local connector should appear on the screen with a green checkmark.

    • Integrating AWS: Click Connect and wait for the integration to sync. This may take a few minutes. You should see a success message indicating that Apono has successfully integrated with AWS Test. Otherwise, go back and edit the integration to fix the errors that appear on the screen. Learn more here.

    • Access Flow form: Click Any to pick the specific resources in the Access Flow by name, by AWS tags, or by excluding specific resources. You can also leave it as Any. Click Permissions to pick the permissions users will be able to request. You can leave the access time as 1 Hour and the approval as Automatic or change them as you'd like.

    • Request form: Pick resources, pick permissions, and insert a justification.

    Step-by-step guide

    Prerequisites

    1. Docker installed on any machine

    2. An Apono token

      • Find Your Integration Token:

        1. Select any integration in the Apono app.

        2. Under the Connector section, select Add a New Connector from the drop-down list

        3. Copy the token displayed toward the bottom of the section. This token is unique per account.

    hashtag
    Guide

    1. In the following command, replace the variables:

      1. Replace APONO-TOKEN with the token you copied in the Prerequisites

      2. For CONNECTOR_ID, insert any name of your choosing

    2. Run the command in the terminal:

    1. That's it!

    hashtag
    Results

    1. If done correctly, you should see your Docker connector in the new integration dropdown list, or on the Connectors page

    Analyze an assessment

    Review access risk across principals using filters and tiered insights

    After an assessment is completed, you can assess your security posture on the View Assessment page in both visual and tabular formats:

    • Visual widgets highlight key insights from the assessment and also act as interactive filters.

    • The table below displays detailed data for each principal and can be filtered using the widgets or additional filter controls.

    View Assessment page

    hashtag
    Analyze assessment details

    Follow these steps to analyze the assessment:

    1. On the Access Discovery page, in the row of an assessment, click Explore. The View Assessment page opens.

    circle-info

    The top section of the assessment displays the last assessment date, selected integration, number of accounts, number of identities, number of principals, and the status of the assessment.

    1. Filter the assessment by clicking a widget and viewing the details in the table.

    circle-check

    Clicking a widget to filter the assessment also selects the corresponding criteria in the dropdown filter menus. You can also apply filters directly through the dropdown filter menus.

    Each widget and table column is explained in the following sections. After exploring the assessment, you can investigate and resolve overprivileged access.

    hashtag
    Widgets

    Widget
    Description

    hashtag
    Table (Principals)

    circle-check

    You can hover over a row and click Ignore to hide principals that you do not consider a threat.

    Column
    Description

    EC2 via Systems Manager Agent (SSM)

    The Apono AWS EC2 integration uses the AWS Systems Manager (SSM) Agent to provide JIT access management for AWS VMs.

    hashtag
    EC2 via Systems Manager Agent (SSM)

    circle-info

    Have you connected an AWS account?

    Make sure you integrated your AWS account to Apono. Follow this step-by-step guide.

    hashtag
    Intro

    This integration provides the ability to grant users permission to connect to EC2 instances over a secure connection through SSM.

    hashtag
    Prerequisites

    • An integration between Apono and the AWS Organization or Account where the EC2 instance resides.

    • An EC2 machine with the SSM Agent installed (installed by default on most EC2 instances)

    • End users must install the Session Manager plugin for the AWS CLI on their local computer.

    hashtag
    Step-by-step guide

    hashtag
    The EC2 instance role

    Follow the steps below to create an EC2 instance role with the AmazonSSMManagedInstanceCore managed policy. Read more in the AWS Systems Manager documentation.

    1. In AWS IAM, click Create new IAM Role:

      1. Click Create Role

      2. Choose the AWS Service option

    hashtag
    Integrating Apono with the EC2 instances

    1. In the Apono UI, edit an existing AWS Org or AWS Account integration or create a new one.

    2. Add the EC2 Connect resource type.

    3. Complete the integration and click Integrate.

    hashtag
    Results

    Apono should now discover EC2 machines! You can now create access flows to EC2 instances.
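    Once a request is approved, an end user with the Session Manager plugin installed can open a shell on the instance with the AWS CLI. A minimal sketch (the instance ID and region below are placeholders):

    ```shell
    # Connect to the granted instance through SSM Session Manager.
    # Replace the instance ID and region with the values from your request.
    aws ssm start-session \
        --target i-0123456789abcdef0 \
        --region us-east-1
    ```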

    Manage connectors

    Find, rename, and delete an existing Apono connector

    After creating a connector in your AWS, Azure, GCP, or Kubernetes environment, you can use the Apono UI to find, rename, and delete that connector.


    hashtag
    Find a connector

    You can search for a connector to view its related information.

    Connectors page

    Follow this step to locate a connector in the Apono UI:

    1. On the Connectors page, in the search bar, enter the name of the connector. All matching connectors appear.

    circle-info

    The Connectors tab displays context information related to each connector:

    • Name

    • Location


    hashtag
    Rename a connector

    triangle-exclamation

    If you change the name of a connector in the Apono UI, you must also change the connector_id param in the installed connector.

    Failure to update the connector_id will cause the integration to stop working.

    Follow these steps to rename a connector:

    1. On the Connectors page, in the search bar, enter the name of the connector. All matching connectors appear.

    2. In the row of the connector, click ⠇> Edit. The Edit the Connector page for the connector appears.

    3. Update the Connector Name.


    hashtag
    Delete a connector

    Follow these steps to delete a connector:

    1. Delete the connector within your cloud environment.

    2. On the Connectors page, in the search bar, enter the name of the connector. All matching connectors appear.

    3. In the row of the connector, click ⠇> Delete. A confirmation popup window appears.

    circle-info

    If the connector is associated with one or more integrations, a popup window will appear with a link to show the integrations:

    1. Click Show Integrations to see the list of associated integrations.

    2. For each integration, delete the integration.

    1. Click Yes.

    Disable Locks

    Understand how Apono handles Azure resource locks

    Azure resource locks protect important cloud resources from being changed or deleted.

    There are two types of locks:

    • CanNotDelete: Allows changes but prevents deletion

    • ReadOnly: Allows viewing but blocks changes and deletion

    If you have set up Azure resource locks, you should enable the Disable Locks setting when integrating Apono with Azure Subscriptions or Management Groups. The Disable Locks setting allows Apono to temporarily remove and later restore locks in order to complete grant or revoke operations on protected resources. To support this, the Apono connector must also be assigned the Tag Contributor role at the appropriate scope, allowing it to add a tag marker to locked resources.

    When Disable Locks is enabled, Apono performs the following operations during access provisioning or revocation:

    1. Checks the target resource and its parent scopes for existing locks.

    2. Adds a tag marker to the resource, if a lock exists.

    3. Removes the lock.

    If the connector fails after removing a lock but before reapplying it, the tag ensures the lock will be restored upon connector restart.
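    The sequence above can be illustrated manually with the Azure CLI. This is only a sketch of what Apono automates; the resource group, lock name, and tag key are placeholders, not Apono's actual internals:

    ```shell
    # Placeholders: substitute your own resource group and lock name.
    RG=my-resource-group
    LOCK=my-cannotdelete-lock
    RES_ID=$(az group show --name $RG --query id --output tsv)

    # 1. Check the resource and its scope for existing locks.
    az lock list --resource-group $RG --output table

    # 2. Add a tag marker so the lock can be restored after a failure.
    az tag update --resource-id $RES_ID --operation Merge \
        --tags removed_lock=$LOCK

    # 3. Remove the lock, perform the grant/revoke, then reapply it.
    az lock delete --name $LOCK --resource-group $RG
    az lock create --name $LOCK --lock-type CanNotDelete --resource-group $RG
    ```

    The Tag Contributor role mentioned above is what permits step 2 at the appropriate scope.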

    Kubernetes Integrations

    Learn how to integrate and manage access to your K8s cluster

    If your organization uses Kubernetes for development, Apono's Kubernetes integrations can help you securely manage access to your Kubernetes containers and databases.

    Kubernetes logo

    By identifying and transforming existing privileges, Apono can shift your management from broad permissions to on-demand access flows. Through our integrations, Apono enables you to perform the following access tasks:

    • Limit Access: Discover existing cluster privileges and convert them to just-in-time Access Flows.

    • Enable Self-Service Access: Allow developers to request access to K8s clusters and pods via Slack.

    • Automate Approval Workflows: Create automatic approval processes for sensitive K8s resources.

    • Restrict Third-Party Access: Grant third-parties (customers or vendors) time-based access to specific containers with MFA verification.

    • Review Access: Audit access, permissions granted, and reasons for access across K8s.

    High Availability for Connectors

    Deploy active-active HA instances of the same connector

    Active-active availability refers to a high availability (HA) architecture, where two or more systems are actively handling requests simultaneously.

    HA can provide the following benefits:

    • Provide redundancy by maintaining operations during downtime

    • Distribute requests across multiple active systems to improve load balancing

    Updating a connector in Azure

    Learn how to update a connector through the Azure CLI

    Periodically, you may need to update your Azure connector to help maintain functionality, performance, and security.

    This article explains how to update and redeploy a connector through the Azure CLI.


    hashtag
    Prerequisites

    Item

    GCP Integrations

    Learn how to integrate and manage access to your GCP cloud

    If your organization uses Google Cloud Platform (GCP), Apono's GCP integrations can help you securely manage access to your GCP cloud-based services and databases.

    By identifying and transforming existing privileges, Apono can shift your cloud management from broad permissions to on-demand access flows.

    Through our GCP integrations, Apono enables you to perform the following access tasks:

    • Limit Access: Discover existing privileges in GCP and convert them to just-in-time Access Flows.

    Integrate with Self-Managed Kubernetes

    hashtag
    Overview

    With a connector installed on your Kubernetes platform, the next step is setting permissions for Apono to manage access control.

    hashtag
    Prerequisites

    Updating a Kubernetes connector

    Learn how to update a connector through the Helm CLI

    Periodically, you may need to update your Kubernetes connector to help maintain functionality, performance, and security.

    This article explains how to update a connector through the Helm CLI.


    hashtag
    Prerequisites

    Item

    Installing the Apono HTTP Proxy

    This proxy is used by Elasticsearch, Web App and more.

    hashtag
    Step By Step to installing the HTTP proxy

    hashtag
    Deploy with Kubernetes

    {
        "Version": "VERSION",
        "Statement": [
            {
                "Sid": "SID",
                "Effect": "Allow",
                "Action": [
                    "iam:ListPolicies",
                    "ec2:DescribeInstances",
                    "lambda:ListFunctions",
                    "s3:ListAllMyBuckets",
                    "iam:ListRoles",
                    "ssm:GetParametersByPath",
                    "s3:ListBucket",
                    "ecr:DescribeRepositories",
                    "iam:ListGroups",
                    "secretsmanager:ListSecrets",
                    "tag:GetResources"
                ],
                "Resource": "*"
            }
        ]
    }
    {
        "Version": "VERSION",
        "Statement": [
            {
                "Sid": "SID",
                "Effect": "Allow",
                "Action": [
                    "iam:GetUser",
                    "iam:CreateUser",
                    "iam:GetRole",
                    "iam:CreateRole",
                    "iam:UpdateAssumeRolePolicy",
                    "iam:ListAccessKeys",
                    "iam:CreateAccessKey",
                    "iam:GetRolePolicy",
                    "iam:DeleteAccessKey",
                    "iam:PutRolePolicy",
                    "iam:ListRolePolicies"
                ],
                "Resource": "*"
            }
        ]
    }
    export APONO_TOKEN=[the token you copied from Apono Add Connector flow]
    export CONNECTOR_ID=apono-connector
    
    docker login registry.apono.io -u apono -p $APONO_TOKEN
    
    docker run -e APONO_CONNECTOR_ID=$CONNECTOR_ID -e APONO_TOKEN=$APONO_TOKEN -e APONO_URL=api.apono.io registry.apono.io/apono-connector:v1.7.6
    From the dropdown list, choose EC2
  • Choose EC2 Role for AWS System Manager. Click Next.

  • Verify that the AmazonSSMManagedInstanceCore policy is added. Click Next

  • Fill the Role name box (for example, ec2-ssm)

  • Click Create role

  • Go back to the Modify IAM Role page

    1. From the dropdown list, choose the new IAM role we created (ec2-ssm)

    2. Click Update IAM role

    3. Please note: it takes about 30 minutes for the AWS sync to finish.

  • AWS Integration
    docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent
    docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin
    create access flows
    Grants or revokes access.
    (Delete-locked resources cannot be granted access.)
  • Reapplies the lock.

  • Disable Locks setting

    Enable Self-Service Access: Allow developers to request access to GCP services, buckets, and instances via Slack.

  • Automate Approval Workflows: Create automatic approval processes for sensitive GCP resources.

  • Restrict Third-Party Access: Grant third-parties (customers or vendors) time-based access to specific services with MFA verification.

  • Review Access: Audit user cloud access, permissions granted, and reasons for access across GCP.


    Google Cloud logo

    Version

  • Status

  • This information is intended to help you quickly identify specific connectors.

    Click Update Connector.
    delete the integration
    Edit the Connector page
    Deleting a connector

    Tiers

    Calculation based on the Over Privilege percent, Risk Score, and Privilege Permissions percentage.

    Examples:

    • If the Privileged Permissions percentage is over 60% and the Risk Score is greater than 4, the Tier will be Critical.

    • If the Privileged Permissions percentage is over 30%, the

    Overprivilege

    Represents the percentage of permissions not used by a principal within the selected integration

    Overprivilege over time

    Displays the trend of overprivileged permissions over the last seven days, split between all permissions and privileged permissions (Admin, IAM)

    Dormant (Unused) Principals

    Number of principals who have been inactive within the last 90 days

    High risk Overprivileged

    Number of principals in the highest tier

    Principals by Resource Type

    Number of principals grouped by the following categories:

    • IAM Role

    • IAM User

    • IAM User Access Key

    • Secret

    Principals by Tier

    Number of principals grouped by the following tiers:

    • Critical

    • High

    • Medium

    • Low

    Each tier is calculated based on the Over Privilege percent, Risk Score, and Privilege Permissions percentage.

    Principal

    Name of the principal

    Account

    Account associated with the resource

    Risk Score

    Calculation based on the Principal Risk Level (maximum score of policy actions sensitivity) and the account risk level

    Identities

    Number of human and machine identities assigned to the resource

    Last used

    Number of days since an identity assigned to the resource used the permissions

    Over privilege

    Percentage of unused permissions for the principal


    Maximize resource use by employing standby systems

  • Reroute traffic through automatic failover to the remaining active system

  • Apono leverages HA to guarantee uptime to customers. Our on-premise connector can be deployed with several instances. If one instance is down, HA ensures that others are available to continue provisioning.


    hashtag
    Prerequisite

    Item
    Description

    Installed connector

    Active Apono connector

    The connector can be installed in any of the following environments:


    hashtag
    Deploy HA connector instances

    For HA, you can add instances to an existing connector using the same connector ID.

    circle-exclamation

    All connector instances must be the same version. Update any older versions to maintain functionality (AWS | Azure | GCP | Kubernetes).

    Follow these steps to add a connector instance for high availability:

    1. From the Connectors page, click Install Connector. The Install Connector page appears.

    2. Select Cloud Installation.

    3. Select a platform for the connector. The permission options appear.

    4. Select a permissions option.

    5. Select an installation method.

    circle-info

    The Apono UI auto-populates the token for the new connector instance.

    1. In the connector installation module, configure the connector ID parameter to share the same value as an existing connector ID in the environment. You can find the connector ID of an existing instance on the Connectors page.

    circle-info

    Depending on the environment, the connector ID parameter may appear as any of the following properties:

    • APONO_CONNECTOR_ID

    • apono.connectorId

    • connectorId

    1. Complete the installation of the connector in your environment (AWS | Azure | GCP | Kubernetes).

    Upon completion, you can integrate your HA connectors with your environment.
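    For example, in a Docker-based environment, a second HA instance reuses the same connector ID. The command below is adapted from the Docker installation command earlier in this documentation; the version tag may differ in your environment:

    ```shell
    # Same CONNECTOR_ID as the existing instance; only the host differs.
    export APONO_TOKEN=[the token auto-populated by the Apono UI]
    export CONNECTOR_ID=apono-connector

    docker login registry.apono.io -u apono -p $APONO_TOKEN

    # Run the additional instance with the shared connector ID.
    docker run -e APONO_CONNECTOR_ID=$CONNECTOR_ID -e APONO_TOKEN=$APONO_TOKEN \
        -e APONO_URL=api.apono.io registry.apono.io/apono-connector:v1.7.6
    ```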

    Description

    Apono Token

    Account-specific Apono authentication value. Use the following steps to obtain your token:

    1. On the Connectors page, click Install Connector. The Install Connector page appears.

    2. Click Azure > No, Just Install The Connector > CLI (Container Instance).

    Azure Command Line Interface (Azure CLI)

    Command-line tool that enables interacting with Azure services using your command-line shell

    Resource Group Name

    Name of the Azure resource group

    Subscription ID

    Identifier for the Azure subscription

    User Access Administrator Role

    Azure role that enables managing user access to Azure resources

    User Administrator Role

    Azure role that enables the following tasks:

    • Create and manage users and groups

    • Reset passwords for users, helpdesk administrators, and user administrators


    hashtag
    Update a connector

    To update an Apono connector for Azure, follow these steps in the shell environment with Azure CLI installed:

    1. Set the APONO_CONNECTOR_ID environment variable to your chosen connector ID.

    2. Set the APONO_TOKEN environment variable to your account token.

    3. Set the SUBSCRIPTION_ID environment variable to the Azure subscription ID.

    4. Set the RESOURCE_GROUP_NAME environment variable to the Azure resource group name.

    5. Set the REGION environment variable.

    6. Run the following command to deploy an updated version of the connector on the Azure Container Instance service.

    7. On the Connectors page, verify that the connector has been updated.
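    The steps above can be sketched as follows. The `az container create` invocation is a hedged illustration, not the exact command shown in the Apono UI; the image tag and environment variable layout are assumptions based on the Docker installation shown elsewhere in this documentation:

    ```shell
    # Steps 1-5: set the environment variables described above.
    export APONO_CONNECTOR_ID=<connector-id>
    export APONO_TOKEN=<apono-token>
    export SUBSCRIPTION_ID=<azure-subscription-id>
    export RESOURCE_GROUP_NAME=<azure-resource-group>
    export REGION=<azure-region>

    # Step 6: redeploy the connector on Azure Container Instances.
    az container create \
        --subscription $SUBSCRIPTION_ID \
        --resource-group $RESOURCE_GROUP_NAME \
        --location $REGION \
        --name $APONO_CONNECTOR_ID \
        --image registry.apono.io/apono-connector:v1.7.6 \
        --environment-variables APONO_CONNECTOR_ID=$APONO_CONNECTOR_ID APONO_URL=api.apono.io \
        --secure-environment-variables APONO_TOKEN=$APONO_TOKEN
    ```

    Use the exact command provided in the Apono UI when available; this sketch only shows where each variable fits.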

    • Cluster admin access to the cluster you'd like to integrate

    • Helm

    • An Apono Kubernetes connector

    circle-exclamation

    Please note: if you installed the Apono connector on the cluster, there is no need to provide the secret in the Add Integration form in the UI.

    The connector already handles the secret.

    hashtag
    Integrate Apono with Kubernetes

    hashtag
    Select a Connector

    1. Select Kubernetes from the Catalog.

    2. On the next page, select an existing connector from the drop-down list.

    3. Click Next to view the Kubernetes integration form.

    hashtag
    Integration Form

    1. Name the integration.

    2. Enter the following Kubernetes parameters, which can be found with kubectl:

    • Cluster Name

    1. Secret

      1. If you installed the Apono connector on the cluster, leave this empty. Otherwise:

    • With a GCP secret manager:

      • Project

      • Secret ID

    • With Kubernetes secret manager:

      • Namespace

      • Secret Name

    • With an Azure secret manager:

      • Vault URL

      • Secret Name
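    The Cluster Name and the Kubernetes secret parameters above can be found or created with kubectl. A sketch under assumptions (the namespace, secret name, and secret key below are illustrative, not values Apono requires):

    ```shell
    # Cluster name, as known to the current kubectl context:
    kubectl config view --minify -o jsonpath='{.clusters[0].name}'

    # If the connector is NOT installed on the cluster, store the cluster
    # credentials in a Kubernetes secret and reference its Namespace and
    # Secret Name in the integration form:
    kubectl create secret generic apono-cluster-secret \
        --namespace apono \
        --from-literal=token=<service-account-token>

    # Verify the secret exists before filling in the form:
    kubectl get secret apono-cluster-secret --namespace apono
    ```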

    hashtag
    Results

    Integration of Apono with self-managed Kubernetes is now complete.

    hashtag
    Next Steps

    1. Manage users and groups. If you have an IdP set up, for example Okta or Azure AD, you may want to integrate it with Apono in order to sync users and groups.

    2. You can now control access to this resource by defining Access Flows.

    3. Make it easy for your users to request access by integrating your Slack or Teams organization with Apono.

    Description

    Cluster admin access

    Cluster admin access to the cluster to integrate. This can be the built-in cluster-admin role or an equivalent permission level.

    Helm Command Line Interface (Helm CLI)

    Command-line tool used to manage Kubernetes applications


    hashtag
    Update a connector

    Use the following steps to update an Apono connector for Kubernetes:

    1. In the shell environment, run the following helm upgrade command to pull the most recent connector version.

      Shell

      Parameter
      Description

      apono.connectorId string

      ID for the connector

      apono.token string

      Token value obtained from the Apono UI

    2. On the Connectors page, verify that the connector has been updated.
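    The upgrade command can be sketched as follows. The chart repository URL appears elsewhere in this documentation; the chart name, release name, and namespace are assumptions, so substitute the names from your original installation:

    ```shell
    # Register (or refresh) the Apono Helm repository.
    helm repo add apono https://apono-io.github.io/apono-helm-charts
    helm repo update

    # Upgrade the release, passing the two parameters described above.
    # "apono-connector" (release and chart) and "apono" (namespace) are
    # illustrative names only.
    helm upgrade apono-connector apono/apono-connector \
        --namespace apono \
        --set-string apono.connectorId=<connector-id> \
        --set-string apono.token=<token>
    ```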

    Set the following env vars:

    KMS_KEY_ID value should be the OidcSignerKey created by Apono-Connector CloudFormation stack:

    • Install envoy proxy with helm:

    K8S_NAMESPACE=
    KMS_KEY_ID=
    # Set tls=true if the destination is HTTPS.
    # Set accessLogs=true if you want access logs to be written.
    helm install envoy-proxy https://apono-io.github.io/apono-helm-charts/envoy-proxy/envoy-1.0.4.tgz \
        --namespace $K8S_NAMESPACE \
        --set-string jwks=`aws kms get-public-key --key-id $KMS_KEY_ID --output text --query PublicKey | awk '{print "-----BEGIN PUBLIC KEY-----\n"$1"\n-----END PUBLIC KEY-----"}' | docker run -i danedmunds/pem-to-jwk:latest | awk '{print "{\"keys\":["$1"]}"}' | openssl base64 -A` \
        --set-string tls=true \
        --set-string accessLogs=true \
        --create-namespace

    Installing a connector on ECS using CloudFormation to manage EKS clusters

    Install the Apono connector on Amazon ECS to manage your EKS clusters in an AWS Organization

    Apono integrates seamlessly with your AWS Organization, using CloudFormation to automate the deployment of all the necessary configurations:

    • Cross-account IAM role with read permissions

    • Amazon SNS topic for event notifications

    • Apono connector, which runs on AWS Elastic Container Service (ECS)

    Once installed, the connector syncs data from cloud applications and enables you to manage access to your Elastic Kubernetes Service (EKS) clusters.


    hashtag
    Prerequisites

    Item
    Description

    hashtag
    Install the connector

    Follow these steps to install the connector:

    1. Start the AWS integration (steps 1-4).

    2. From the Select Connector dropdown menu, click + Add new connector. The Select connector installation strategy section appears.

    circle-check

    If you choose an existing connector, you must update the connector in CloudFormation.

    1. Click Cloud installation > CloudFormation (ECS).

    2. Under Follow these steps to install connector, click Open Cloud Formation. AWS CloudFormation opens. The Create stack page appears with one of Apono's stack templates.

    circle-info

    If you are not already signed in, AWS will prompt you to log in to your AWS Management account.

    1. From the settings dropdown at the top of the page, select your Region.

    2. Enter the Stack name.

    3. Define the following Parameters:

    Manage integrations

    Find, edit, delete, and more for an integration

    After creating an integration, you can use the Apono UI to find, edit, delete, and perform additional actions on that integration.


    hashtag
    Find an integration

    You can search for an integration to view its related information.

    Integrations page

    Follow these steps to locate an integration in the Apono UI:

    1. On the Connected tab, in the search bar, enter the name of the integration. All matching integrations appear.

    2. (Optional) Apply one or more filters.

    After searching and applying filters, only integrations matching criteria appear on the Connected tab.

    circle-info

    The Connected tab displays context information related to each integration:

    • Name

    • Connector

    hashtag
    Apply filters

    Follow these steps to apply filters:

    1. Click the Filters dropdown menu. The filter options appear.

    2. From the Where dropdown menu, select an option.

    3. From the is dropdown menu, select a value.


    hashtag
    Edit an integration

    Follow these steps to edit an integration:

    1. Find the integration.

    2. In the row of the integration, click ⠇> Edit. The Edit Integration page for the integration appears.

    3. Update the integration information.

    The integration will re-sync. If the updates are valid, you will get a success message and see synced resources. Otherwise, error messages will be displayed.


    hashtag
    Delete an integration

    Follow these steps to delete an integration:

    1. Find the integration.

    2. In the row of the integration, click ⠇> Delete. A confirmation popup window appears.

    circle-info

    Be mindful of the following:

    • If your integration is associated with one or more access flows, a popup window will appear listing the access flows. For each access flow, click the link and delete the access flow.

    • If your integration has active access requests, a popup window will appear listing the request IDs. For each request, click the link and revoke the access request.

    1. Click Yes.


    hashtag
    Additional integration actions

    In addition to finding, editing, or deleting integrations, you can perform other tasks to manage integrations from the Apono UI.

    hashtag
    View associated integration resources

    Follow these steps to view the associated integration resources:

    1. Find the integration.

    2. In the row of the integration, click ⠇> Resources. A page of the integration's resources appears.

    hashtag
    Refresh an integration

    Follow these steps to refresh an integration:

    1. Find the integration.

    2. In the row of the integration, click ⠇> Refresh. Apono syncs the integration.

    Install an Azure connector on ACI using Terraform

    Learn how to deploy a connector in an Azure environment

    Azure Container Instances (ACI) is a managed, serverless compute platform for running containerized applications. This guide explains how to install and configure an Apono connector on ACI in your Azure environment using Terraform.


    hashtag
    Prerequisites

    Item
    Description

    hashtag
    Install a new connector

    circle-info

    The connector requires the following roles:

    1. Directory Readers - to validate users in Azure

    2. User Access Administrator - to provision and deprovision access in the Management Group

    Follow these steps to set up a new connector:

    1. At the shell prompt, set the Apono environment variables to your account token.

    1. In a new or existing Terraform (.tf) file, add the following provider and module information to create a connector:

    Enables installing the connector in the cloud environment and managing access to resources

    Enables installing the connector in the cloud environment but managing access to non-Azure resources, such as self-hosted databases

    1. At the Terraform CLI, download and install the provider plugin and module.

    1. Apply the Terraform changes. The proposed changes and a confirmation prompt will be listed.

    1. Enter yes to confirm deploying the changes to your Azure account.

    2. On the Connectors page, verify that the connector has been deployed.

    You can now integrate the connector with your Azure environment.
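    The download-and-apply steps above map onto the standard Terraform workflow. A minimal sketch:

    ```shell
    # Download and install the provider plugin and module referenced
    # in the .tf file.
    terraform init

    # Optional: preview the proposed changes before applying them.
    terraform plan

    # Apply the changes; Terraform lists the proposed changes and
    # prompts for confirmation. Enter "yes" to deploy to Azure.
    terraform apply
    ```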

    Updating a connector in AWS

    Learn how to update a connector through the AWS CLI

    Periodically, you may need to update your AWS connector to help maintain functionality, performance, and security.

    This article explains how to update a connector through the AWS CLI and redeploy the CloudFormation stack with the latest connector template.


    hashtag
    Prerequisites

    Item
    Description

    hashtag
    Update a connector

    circle-exclamation

    If you're updating an Organization-level connector, follow these steps for connectors installed in the Management account.

    If updating a connector with , reach out to your Apono Customer Success representative.

    Follow these steps to update a connector:

    1. Copy the following Account level or Organization level AWS update script. Be sure to replace AWS_STACK_NAME with your AWS stack name.

    circle-info

    If you have not defined a default region and profile, you must specify the region and profile in the script:

    Be sure to replace AWS_PROFILE and AWS_SERVER_REGION with your profile and region values.

    1. At your AWS CLI prompt, enter the updated script from the previous step to initiate the update. The AWS CLI will return an object containing the StackId.

    2. In CloudFormation, on the Stack Info tab, confirm that the update has completed:

      1. Go to the CloudFormation Stacks page. A list of the stacks in the account is displayed.
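    As a hedged sketch of what the update script looks like with explicit region and profile values (the template URL below is a placeholder, not Apono's actual template location; use the script provided by Apono):

    ```shell
    # AWS_STACK_NAME, AWS_PROFILE, and AWS_SERVER_REGION are the
    # placeholders referenced above; replace them with your values.
    aws cloudformation update-stack \
        --stack-name AWS_STACK_NAME \
        --template-url https://<apono-template-bucket>/connector-template.yaml \
        --capabilities CAPABILITY_NAMED_IAM \
        --region AWS_SERVER_REGION \
        --profile AWS_PROFILE
    ```

    On success, the CLI returns a JSON object containing the StackId, matching step 1 above.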


    hashtag
    Troubleshooting

    This section details common errors that can occur during the updating process. If an error occurs that is not listed below, please contact your Apono representative.

    An error occurred (ValidationError) when calling the UpdateStack operation: Stack [stack name] does not exist.

    This occurs when the incorrect stack name has been included in the update script.

    Use the following steps to correct this error:

    1. Locate and copy the stack name under the Stack name column of the CloudFormation Stacks page.

    Auto Discover Azure SQL Databases

    Automatically identify Azure SQL database instances in a Subscription or Management Group for JIT access management

    Apono’s Auto Discovery feature identifies tagged Azure SQL database instances, including MySQL and PostgreSQL. Rather than integrating each instance individually, you can integrate selected databases and their resources at once during your Azure Subscription or Azure Management Group setup.

    circle-exclamation

    This capability requires network access to each discoverable database. If your databases are in different Azure networks, make sure to create an Azure connector for each network.

    Since Auto Discovery uses Azure Resource Graph, direct database access is not required for the initial discovery.


    hashtag
    Prerequisites

    Item
    Description

    hashtag
    Enable Auto Discovery

    Follow these steps to enable Auto Discovery:

    1. In your Azure SQL database, create a user for the Apono connector. As part of this step, you will also create a secret.

    Key
    Value or Description
    1. In the Apono UI, on the Catalog tab, click Azure. The Connect Integrations Group page appears.

    2. Under Discovery, click Azure Management Group or Azure Subscription.

    3. Under Connect Sub Integration, select Database, Table, and Role to control the granularity of discovery in each discovered instance.

    After connecting your Azure Management or Azure Subscription to Apono, you will be redirected to the Connected tab to view your integrations. The new Azure integration, along with sub-integrations for each database instance, initialize during the first data fetch. The integration becomes Active once the process completes.

    Now that you have completed this integration, you can create Access Flows that grant permission to your Azure SQL database resources.


    hashtag
    Troubleshooting

    If SQL database instances appear with errors on your Integrations page, follow these steps:

    1. Check Tags: Verify all required tags are present and correctly formatted.

    2. Connector Permissions: Ensure the Apono connector has necessary permissions to read tags and access secrets.

    3. Network connectivity: Ensure each SQL database instance is accessible by an Apono connector within the same network.

    circle-check

    For any questions about the discovery process, please contact Apono Support.

    Connector IP Allowlist

    Configure outbound access to ensure communication with Apono

    If your organization restricts outbound network access by IP address or port, you must configure your IP allowlist to enable uninterrupted communication between Apono connectors and the Apono cloud infrastructure.

    An IP allowlist defines which destination IP addresses your network permits outbound traffic to reach.

    circle-info

    Configuring an IP allowlist is not required if either of the following use cases applies to your organization:

    • Uses domain-based allowlists, with entries such as api.apono.io and registry.apono.io

    • Allows unrestricted outbound HTTPS traffic


    hashtag
    Network Access Requirements

    To ensure consistent and reliable connector performance, the following endpoints and ports must be accessible from your environment.

    triangle-exclamation

    All network configurations must comply with these requirements by 31 October 2025 to prevent disruption in connector functionality.

    Domain
    Connection Details
    Destination IP Addresses

    hashtag
    Support

    For implementation support or questions regarding these requirements, contact [email protected].

    Azure VM SSH Servers

    How to integrate with your Azure VM SSH Servers with Apono for JIT access

    hashtag
    Overview

    If users need to debug, develop or troubleshoot Azure VM SSH servers, they can request Just-in-Time access to them in Apono!

    Admins can create Access Flows with specific VM SSH servers and build approval and access duration flows for different users, groups, and shifts.

    Upon an approved request, Apono creates a certificate that grants access to the server and makes the requester a member of the group(s) representing the access they need. Apono may also use the user's default Linux group.

    hashtag
    How it works

    hashtag
    Prerequisites

    • Installed Apono connector with network access to the Azure VM SSH Servers

      • Minimal Apono connector version: 1.4.0 (visit the Connectors page and update the connector if needed)

    • A user with a key pair authentication for Apono to your SSH servers with sudo permissions. Add this line to the sudoers file:

    circle-info

    What's a connector? What makes it so secure?

    The Apono Connector is an on-prem connection that can be used to connect resources to Apono and separate the Apono web app from the environment for maximal security.

    Read more about the recommended Azure Installation Architecture.

    hashtag
    Step-by-step guide

    1. In the Apono app, navigate to the Catalog

    2. Pick the Azure VM SSH integration:

    3. Pick an existing connector or create a new one (see connector prerequisites)

    circle-check

    Apono supports default access to SSH servers, even if no user groups were provided.

    This means users can always log in with their default Linux group.

    hashtag
    Results

    • You will be redirected to the Connected Integrations tab.

    • Make sure you see the Azure VM SSH integration as Active. The # of discovered SSH servers will appear in the table under Resources.

    • You can now create Access Flows for Azure VM SSH Servers!

    Create an assessment

    Evaluate access usage across your cloud environments

    Before you can begin identifying and remediating overprivileged access, you must first run an Access Discovery assessment.

    An assessment scans your cloud environments and analyzes how principals use their permissions. This enables Apono to surface unused, excessive, or high-risk access across your infrastructure.


    hashtag
    Prerequisites

    Integrating with Apono

    How Apono integrations work and what to expect

    hashtag
    Integrating with Apono

    hashtag
    Intro

    In order to manage just-in-time access, Apono needs to integrate with your cloud applications. Our integration:

    Installing a connector on AWS ECS using Terraform (AWS Organization)

    Integrate Apono with your AWS Organization for complete cloud discovery and JIT access management to AWS resources

    hashtag
    Intro

    Apono connects with the AWS Organization to discover all accounts and their respective cloud resources and services and manage just-in-time, just-enough access to them.

    This guide lets you integrate to the AWS Organization with Terraform.

    Updating a connector in Google Cloud

    Learn how to update a connector through the Helm CLI

    Periodically, you may need to update your Google Cloud connector to help maintain functionality, performance, and security.

    This article explains how to update a connector through the Helm CLI.


    hashtag
    Prerequisites

    Item

    AWS EC2 SSH Servers

    How to integrate with your EC2 SSH Servers with Apono for JIT access

    hashtag
    Overview

    If users need to debug, develop or troubleshoot AWS EC2 SSH servers, they can request Just-in-Time access to them in Apono!

    Admins can create Access Flows with specific EC2 SSH servers and build approval and access duration flows for different users, groups, and shifts.

    Upon an approved request, Apono creates a certificate that grants access to the server and makes the requester a member of the group(s) representing the access they need. Apono may also use the user's default Linux group.

    module "connector" {
        source = "github.com/apono-io/terraform-modules/azure/connector-with-permissions/stacks/apono-connector"
        aponoToken = $APONO_TOKEN
        connectorId = $EXISTING_CONNECTOR_ID
        resourceGroup = $AZURE_RESOURCE_GROUP
        ipAddressType = // "Private" or "None"
        subnetIds = [$SUBNET_ID]
    }
    export APONO_CONNECTOR_ID=apono-connector
    export APONO_TOKEN=abcd1234-e5f6-7g8h-90123i45678
    helm upgrade apono-connector apono-connector --repo https://apono-io.github.io/apono-helm-charts \
        --set-string apono.token=$APONO_TOKEN \
        --set-string apono.connectorId=$APONO_CONNECTOR_ID \
        --set serviceAccount.manageClusterRoles=true \
        --namespace apono-connector \
        --create-namespace
    If the Over Privilege percentage is over 80% and the Risk Score is greater than 4, the Tier will also be Critical.
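    The rule above can be expressed as a small illustrative function (a sketch only: the non-critical label is a placeholder, not one of Apono's actual tier names):

    ```shell
    # Hypothetical helper mirroring the tiering rule described above
    tier() {
      over_privilege=$1
      risk_score=$2
      # Both thresholds must be exceeded for the Critical tier
      if [ "$over_privilege" -gt 80 ] && [ "$risk_score" -gt 4 ]; then
        echo "Critical"
      else
        echo "Not critical"
      fi
    }

    tier 85 5   # prints "Critical": both thresholds exceeded
    tier 85 4   # prints "Not critical": Risk Score must be greater than 4
    ```

    Note that the rule is conjunctive: a high Over Privilege percentage alone does not make the tier Critical.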

    api.apono.io

    • Protocol: HTTPS

    • Port: 443

    • Source Port Range: 32768–60999

    • 54.157.3.253

    • 34.225.239.246

    • 44.205.140.99

    • 98.90.221.221 (New)

    • 98.89.52.211 (New)

    • 34.192.189.115 (New)

    registry.apono.io

    • Protocol: HTTPS

    • Port: 443

    • Source Port Range: 32768–60999

    • 107.22.56.232

    • 98.83.51.175

    • 3.221.81.104

    • 35.172.125.116 (New)

    • 54.90.163.79 (New)

    • 107.21.204.50 (New)

    Enter the AponoConnectorId. This can be any alphanumeric name to identify the connector.

  • Enter your OrganizationId.

  • Enter your OrganizationUnitId.

  • From the Permissions dropdown menu, select Full-Access (Manage IAM).

  • Select one or more SubnetIDs.

  • Select one or more VpcId parameters.

  • Under Capabilities, select I acknowledge that AWS CloudFormation might create IAM resources with custom names.

  • Click Create stack.

  • On the Connectorsarrow-up-right page, verify that the connector has been deployed.

  • Complete the integration (steps 6-10).

  • AWS IAM Role

    IAM role with permissions to manage EKS resources in your AWS Organization

    We recommend AdministratorAccessarrow-up-right for connector deployment, but this policy is not required. Apono supports Amazon’s EKS permission modelsarrow-up-right.

    Full AWS access is not granted to Apono.

    OrganizationID

    Unique identifier of the Organization that will be connected via the integration (ex. o-k012345a67)

    Follow these steps to find your OrganizationID:

    1. In your AWS console settings, click Organization. The AWS accounts page appears.

    2. In the left navigation, click Settings. The Settings page appears.

    3. Under Organization details, copy your OrganizationID.

    OrganizationUnitID

    Root ID for the AWS Organization Unit that will be connected via the integration (ex. r-1a2b)

    Follow these steps to obtain your OrganizationUnitID:

    1. In your IAM Identity Center, expand Multi-account permissions.

    2. Click AWS accounts. The AWS accounts page appears.

    3. In the Organizational structure section, copy the ID from the Root folder. This is the parent organizational unit for all accounts in your organization.

    VPC

    Virtual Private Cloud (VPC) with outbound connectivity

    Subnet

    One or more Subnet IDs within the selected VPC where the connector resources will run

    Permission

    Full access (Manage IAM) permissions to enable the connector to create and manage the required IAM resources during deployment

    integrating your AWS Organization
    updating the connector
    GCParrow-up-right
  • Kubernetesarrow-up-right

  • AWSarrow-up-right
    Azurearrow-up-right

    serviceAccount.manageClusterRoles boolean

    Configures whether the connector also manages access to the cluster on which it is deployed. Set serviceAccount.manageClusterRoles based on whether the installation has been set up to manage the cluster roles.

    Connectorsarrow-up-right
    cluster-adminarrow-up-right
    Command-line interfacearrow-up-right
    Tag your database instancearrow-up-right based on the authentication method you selected in the previous step. In the table below, the values shown in italics are the exact text you should enter when adding these tags.
  • Complete the Azure Management or Azure Subscription integration (steps 3-10).

  • Apono Connector

    One or more Apono connectors for Azure with network access to your Azure SQL databases

    Minimum Required Version: 1.3.6

    Follow these steps to update an existing connector.

    Azure Permissions

    Permissions to complete the following tasks in your Azure instance:

    • Create and manage Azure Key Vault secrets

    • Tag Azure resources

    • Access to your Azure Subscription or Azure Management Group instance

    vault-url

    URL of the Azure Key Vault containing the secret

    Example: https://mystore.vault.azure.net/

    secret-name

    Name of the secret in Azure Key Vault

    Example: db-credentials
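    Assuming the Azure CLI, a secret matching the table above could be created along these lines (the vault name, secret name, and secret value here are placeholders; the command is printed for review rather than executed, since it requires an authenticated az session):

    ```shell
    VAULT_NAME=mystore           # placeholder Key Vault name (yields https://mystore.vault.azure.net/)
    SECRET_NAME=db-credentials   # placeholder secret name
    SECRET_VALUE='<database credentials payload>'   # placeholder value

    # Printed for review; drop the echo to run it against your vault
    CMD="az keyvault secret set --vault-name $VAULT_NAME --name $SECRET_NAME --value '$SECRET_VALUE'"
    echo "$CMD"
    ```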

    Azure MySQL
    Azure PostgreSQL
    Catalogarrow-up-right
    create access flows
    Azure SQL instances under Connect Sub Integration

    Resource Types

  • Sync Summary

  • Status

  • This information is intended to help you quickly identify specific integrations.

    (Optional) Click + Add new filter and repeat steps 2-3 to add more filters.
  • Click Apply.

  • Click Update.
    Connectedarrow-up-right
    filters
    Find an integration
    Find an integration
    delete the access flow
    Find an integration
    Find an integration
    Editing an integration
    Deleting an integration
    revoke the access
    Copy the token listed on the page in step 1.
    Connectorsarrow-up-right
    Connectorsarrow-up-right
    Open-source toolarrow-up-right
    resource grouparrow-up-right
    Azure subscriptionarrow-up-right
    Azure subscription rolearrow-up-right
    Microsoft Entra ID rolearrow-up-right
    Read more about these Microsoft Entra ID roles herearrow-up-right.

    Apono Token

    Account-specific Apono authentication value

    Use the following steps to obtain your token:

    1. On the Connectorsarrow-up-right page, click Install Connector. The Install Connector page appears.

    2. Click Cloud installation > Azure > Install and Connect Azure Account > Terraform (Container Instance).

    3. Copy the token listed on the page in step 1.

    Terraform Command Line Interface (Terraform CLI)

    Toolarrow-up-right that enables interacting with Azure services using your command-line shell

    Azure Cloud Information

    Information for your Azure Cloud instance:

    • Resource group namearrow-up-right

    • Subnet IDsarrow-up-right

    Owner Role (Azure RBAC)

    Azure rolearrow-up-right with the following permissions:

    • Grants full access to manage all resources

    • Assigns roles in Azure RBAC

    Global Administrator

    Microsoft Entra rolearrow-up-right with the following permission:

    • Manages all aspects of Microsoft Entra ID and Microsoft services that use Microsoft Entra identities

    ❗Apono does not require Global Administrator access. This role is required only for the admin following this guide.❗

    with permissions
    without permissions
    Connectorsarrow-up-right
    Azure Management Group or Azure Subscription
    export SUBSCRIPTION_ID=abcdef01-23456789-0abc-def012345678
    export RESOURCE_GROUP_NAME=myResourceGroup0816
    export REGION=$(az group show --name $RESOURCE_GROUP_NAME --query location --output tsv)
    az container create --subscription $SUBSCRIPTION_ID --resource-group $RESOURCE_GROUP_NAME \
        --name $APONO_CONNECTOR_ID --ports 80 --os-type linux \
        --image registry.apono.io/apono-connector:<<connectorVersion>> \
        --environment-variables APONO_CONNECTOR_ID=$APONO_CONNECTOR_ID APONO_TOKEN=$APONO_TOKEN APONO_URL=api.apono.io \
            CONNECTOR_METADATA='{"cloud_provider":"AZURE","subscription_id":"'"$SUBSCRIPTION_ID"'","resource_group":"'"$RESOURCE_GROUP_NAME"'","region":"'"$REGION"'","is_azure_admin":true}' \
        --cpu 1 --memory 1.5 \
        --registry-login-server registry.apono.io --registry-username apono --registry-password $APONO_TOKEN \
        --location $REGION --assign-identity --query identity.principalId --output tsv
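    The inline CONNECTOR_METADATA JSON in the az command is easy to mis-quote. One way to sanity-check it locally before running the command (all values here are placeholders):

    ```shell
    SUBSCRIPTION_ID=abcdef01-2345-6789-0abc-def012345678   # placeholder
    RESOURCE_GROUP_NAME=myResourceGroup0816                # placeholder
    REGION=eastus                                          # placeholder

    # Build the metadata string exactly as the az command interpolates it
    CONNECTOR_METADATA='{"cloud_provider":"AZURE","subscription_id":"'"$SUBSCRIPTION_ID"'","resource_group":"'"$RESOURCE_GROUP_NAME"'","region":"'"$REGION"'","is_azure_admin":true}'

    # Confirm it is well-formed JSON before passing it to az container create
    echo "$CONNECTOR_METADATA" | python3 -m json.tool > /dev/null && echo "metadata ok"
    ```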
    export APONO_TOKEN=<APONO_TOKEN>
    export RESOURCE_GROUP_NAME=<AZURE_RESOURCE_GROUP_NAME>
    export SUBNET_ID=[<SUBNET_ID>]
    module "connector" {
        source = "github.com/apono-io/terraform-modules/azure/connector-with-permissions/stacks/apono-connector"
        aponoToken = $APONO_TOKEN
        resourceGroup = $AZURE_RESOURCE_GROUP
        ipAddressType = // "Private" or "None"
        subnetIds = [$SUBNET_ID]
    }
    module "connector" {
        source = "github.com/apono-io/terraform-modules/azure/connector-without-permissions/stacks/apono-connector"
        aponoToken = $APONO_TOKEN
        resourceGroup = $AZURE_RESOURCE_GROUP
        ipAddressType = // "Private" or "None"
        subnetIds = [$SUBNET_ID]
    }
    
    terraform init
    terraform apply

    Under the Stack name column, click the stack name.

  • On the Stack info tab, check the Status.

  • Repeat the update process.

    AWS Stack Name

    In AWS CloudFormation, name of a collection of AWS resources managed as a single unit

    Use the following steps to retrieve the stack name:

    1. Go to the Stacksarrow-up-right page.

    2. Under the Stack name column, copy the stack name.

    AWS Command Line Interface (AWS CLI)

    Open-source toolarrow-up-right enabling interaction with AWS services using your command-line shell

    AWS Permissions

    Permissionsarrow-up-right enabling the ability to update the stack via AWS CLI

    assumable permissions to the Management accountarrow-up-right
    default profilearrow-up-right
    Stacksarrow-up-right
    Stacksarrow-up-right
    aws cloudformation update-stack --stack-name AWS_STACK_NAME \
        --template-url https://apono-public.s3.amazonaws.com/cloudformation/aws_integration_with_connector_template.yml \
        --parameters ParameterKey=AponoConnectorId,UsePreviousValue=true \
                     ParameterKey=AponoConnectorToken,UsePreviousValue=true \
                     ParameterKey=ExternalID,UsePreviousValue=true \
                     ParameterKey=SubnetIDs,UsePreviousValue=true \
                     ParameterKey=VpcId,UsePreviousValue=true \
                     ParameterKey=S3AWSLogsScanning,ParameterValue=Enabled \
        --capabilities CAPABILITY_NAMED_IAM
    aws cloudformation update-stack --stack-name AWS_STACK_NAME --template-url https://apono-public.s3.amazonaws.com/cloudformation/aws_organization_integration_template.yml \
        --parameters ParameterKey=AponoConnectorId,UsePreviousValue=true \
                     ParameterKey=AponoConnectorToken,UsePreviousValue=true \
                     ParameterKey=AssignPublicIp,UsePreviousValue=true \
                     ParameterKey=ExternalID,UsePreviousValue=true \
                     ParameterKey=OrganizationalUnitId,UsePreviousValue=true \
                     ParameterKey=SubnetIDs,UsePreviousValue=true \
                     ParameterKey=VpcId,UsePreviousValue=true \
                     ParameterKey=S3AWSLogsScanning,ParameterValue=Enabled \
        --capabilities CAPABILITY_NAMED_IAM

    apono ALL=(ALL) NOPASSWD:ALL

  • Optional: User groups representing access to the servers. The default value is "Default", representing access to the server with the user's default Linux group.

  • In the secret store of your choice, create a secret for Apono with the following params:

    1. Key: base64_private_key

    2. Value: the SSH server private key in base64 format (see SSH key prerequisites). To find the private key in base64 format, run this command: cat /PATH-TO-KEY/key.pem | base64

  • Fill the config:

    1. Integration name: Give the integration a name of your choice

    2. User: set the name of the user you created in the prerequisites for the Apono connector.

    3. User groups (Optional): The names of groups in the server representing the sudoer role (from a local server, puppet/chef, LDAP server, etc., depending on your network setup)

    4. Secret: according to the Secret Store of your choice, insert the secret you created in step 4.

    5. Region (Optional): Select a specific Azure region to integrate. If you pick nothing, all regions will be synced.
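    The base64 encoding used for the secret's value in step 4 can be verified locally with a quick round trip; the key file generated here is a stand-in for your real /PATH-TO-KEY/key.pem:

    ```shell
    # Stand-in key file; substitute your real private key
    printf '%s\n' '-----BEGIN OPENSSH PRIVATE KEY-----' 'example-key-material' '-----END OPENSSH PRIVATE KEY-----' > key.pem

    # Encode as a single base64 line, as expected for the secret value
    B64=$(base64 < key.pem | tr -d '\n')

    # Decoding must reproduce the original file byte for byte
    echo "$B64" | base64 -d | cmp -s - key.pem && echo "roundtrip ok"
    ```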

  • Connectors Pagearrow-up-right
    security
    Azure Installation Architecture
    Catalogarrow-up-right
    Connected Integrationsarrow-up-right
    Item
    Description

    CloudTrails

    Record of AWS activities that is delivered and stored in an Amazon S3 bucket

    When enabling CloudTrail log scanning, the following are required:

    • Trails enabled for all regions and desired accounts to scan

    • Full Management events and Data events enabled

    NOTE: If the trail bucket is located in a different account from the trail itself, add this tag to the trail so Apono can locate it:

    Apono connector & Cloud integration

    On-prem connection serving as a bridge between a cloud environment and at least one cloud integration with Apono

    Minimum Required Version: 1.7.3

    Set up the Apono connector and cloud organization integration

    circle-check

    If you choose to use an existing connector, be sure to complete the following:

    • Set all the parameters in step 9 below.

    • Update the connector to version 1.7.3 or greater.

    • Complete step 12 below to finish the cloud organization integration.

    Follow these steps to set up an Apono connector:

    1. On the Catalogarrow-up-right tab, click AWS. The Connect Integrations Group page appears.

    2. Under Discovery, click Amazon Organization.

    3. Select the Permission Boundary resource to allow Apono to temporarily restrict overprivileged access.

    4. Click one or more additional resource types to sync with Apono.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage access flows to these resources.

    5. Click Next. The Apono connector section expands.

    6. From the Select Connector dropdown menu, click + Add new connector. The Select connector installation strategy section appears.

    7. Select Cloud installation > CloudFormation (ECS).

    8. Under Follow these steps to install connector, click Open Cloud Formation. AWS CloudFormation opens. The Create stack page appears with one of Apono's AWS Account stack templates associated.

    circle-info

    If you are not already signed in, AWS will prompt you to sign in to your AWS user account. Be sure to sign in with your Root user account.

    9. Define the following Parameters:

      1. (Optional) Update the AponoConnectorId with a descriptive name.

      2. From the Permissions dropdown menu, select Full Access (Manage IAM).

      3. From the S3AWSLogsScanning dropdown menu, select Enabled to allow Apono to read CloudTrail logs.

      4. Select one or more SubnetIDs.

      5. Select a VpcId.

    10. Under Capabilities, select I acknowledge that AWS CloudFormation might create IAM resources with custom names.

    11. Click Create stack.

    12. Complete steps 6-10 of the AWS Organization integration.

    You can now create your first assessment.

    Item
    Description

    Apono connector

    On-prem connection serving as a bridge between a Google cloud instance and Apono

    Minimum Required Version: 1.7.3. Update the connector to version 1.7.3 or greater if needed.

    GCP Organization integration

    IMPORTANT: In the Integration Config settings, enter your Google customer ID in the Customer ID (optional) field.

    Your Customer ID is located on the Google Admin console Account settings page in the Profile section.

    BigQuery sink filter with audit activity

    BigQuery sink with audit activity and a filter that includes (or does not exclude) the following query: protoPayload.@type="type.googleapis.com/google.cloud.audit.AuditLog"

    This log type enables Apono to generate assessments.

    For more information, see Google's documentation on Cloud Audit Logs.
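    Assuming gcloud, a sink matching the filter above could be created along these lines (the project, dataset, and sink name are placeholders; the command is printed for review rather than executed, since it requires an authenticated gcloud session):

    ```shell
    PROJECT_ID=my-project        # placeholder project
    DATASET=apono_audit_logs     # placeholder BigQuery dataset
    FILTER='protoPayload.@type="type.googleapis.com/google.cloud.audit.AuditLog"'

    # Printed for review; drop the echo wrapper to run it against your project
    CMD="gcloud logging sinks create apono-audit-sink bigquery.googleapis.com/projects/$PROJECT_ID/datasets/$DATASET --log-filter='$FILTER'"
    echo "$CMD"
    ```

    Whatever sink name you use, the important part is that the filter includes (and nothing excludes) the AuditLog entries.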

    Groups Reader role

    Role allowing a principal to view group metadata and membership, assigned to the service account

    For more information, see Google's documentation on assigning the Groups Reader role.

    Configure BigQuery Permissions for Apono

    Tag your BigQuery datasets and assign the required IAM roles to allow Apono to access them for discovery and auditing.

    Tag BigQuery datasets

    Follow these steps to tag your datasets:

    1. In your Google Cloud environment, tag your BigQuery datasets with the following values:

      1. Key: apono_access_discovery_audit_log_sink

      2. Value: true

    Associate BigQuery dataset permissions

    Follow these steps to associate permissions to the service account:

    1. In your shell environment, log in to Google Cloud and enable the API.

    2. Set the environment variables.

    3. Assign predefined roles to the connector service account.
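    As a sketch, the role-assignment step might look like this with gcloud (the project, service account e-mail, and role list are placeholders; confirm the exact predefined roles in Apono's GCP guide, and note the commands are printed for review rather than executed):

    ```shell
    PROJECT_ID=my-project                                          # placeholder project
    SA_EMAIL="apono-connector@$PROJECT_ID.iam.gserviceaccount.com" # placeholder service account

    # Example predefined BigQuery roles; substitute the roles your guide specifies
    CMDS=""
    for ROLE in roles/bigquery.dataViewer roles/bigquery.jobUser; do
      CMDS="$CMDS gcloud projects add-iam-policy-binding $PROJECT_ID --member=serviceAccount:$SA_EMAIL --role=$ROLE\n"
    done
    printf "%b" "$CMDS"   # run each printed line with an authenticated gcloud session
    ```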

    You can now create an assessment.


    hashtag
    Create an assessment

    Follow these steps to assess an integration:

    1. On the Access Discoveryarrow-up-right page, click New Assessment. The Create Access Discovery Assessment page appears.

    2. Under Select Cloud Provider, select an environment.

    3. Under Select Integration, select one integration from the list.

    4. Click Assess to evaluate permissions and usage.

    Once configured, assessments will run nightly and present data from the last 7 days.

    After the assessment is completed, click Explore to analyze the assessment and remediate overprivileged access.


    hashtag
    Reassess an assessment

    After an assessment has been created, you can always run a new assessment between the nightly runs.

    Follow these steps to reassess an assessment:

    1. On the Access Discoveryarrow-up-right page, in the row of an assessment, click Explore. The View Assessment page opens.

    2. Click Reassess.

    After the assessment is completed, click Explore to analyze the assessment and remediate overprivileged access.

    1. Syncs data on users, resources and permissions

    2. Automates granting and revoking of users' access to cloud resources

    Each integration requires:

    1. An installed connector in your cloud environment

    2. A specific configuration, which may include:

      1. A role created for Apono

      2. Metadata like proxy address, hostname, port, region, clusters, secret store, etc. To learn more about each integration's required config, visit that integration's guide.

    circle-info

    Apono's unique architecture makes the integration extra secure.

    hashtag
    How it works

    1. Install a connector

      1. A connector can be installed on AWS (using CloudFormation [ECS], Terraform [EKS], CLI [EKS]), GCP (using CLI [GKE]), Azure (using Terraform or CLI), or Kubernetes (using Terraform or Helm).

      2. Follow this guide. NOTE: If you have installed a connector in the past, you may use it for more than one integration.

    2. Follow the integration guide. Per each integration's requirements, supply Apono with:

      1. The role or permission needed to manage access

      2. The metadata to complete the integration. NOTE: During this process, you may be required to leave Apono and complete some steps in the source application portal.

    3. Give the integration a name

      1. The integration name is used when creating Access Flows

      2. This name will be displayed to end-users when creating access requests

    4. Wait for the first sync to complete

      1. Follow the status on the Integrations page's Connected tab. A healthy integration looks like this:

      2. In case of an error, follow our troubleshooting guide.

    5. All set! Create Access Flows with your new integration

    This is what a healthy AWS Account integration process looks like when using an existing connector:

    hashtag
    Integration types

    Apono currently supports 3 types of integrations:

    1. Resources - these integrations sync data on resources and permissions. Apono then manages JIT access to these resources by granting and revoking users' access based on the Access Flows.

      1. Cloud infrastructure

      2. Databases

      3. CI/CD and development tools

      4. Network and VPN

      5. IdP groups

    2. User information - these integrations sync data on your users and their attributes, like manager, shift, groups, etc.

      1. Identity providers (IdP)

      2. Incident response/on-call tools

    3. Communications (chat-ops)

    Browse our integrations catalogarrow-up-right in the Apono app.

    hashtag
    Integrating cloud environments

    hashtag
    Overview

    Whether you manage your cloud environment in AWS, GCP or Azure, Apono lets you integrate all your cloud services at once!

    This means you can manage your entire environment with Apono in a single integration: Apono integrates multiple cloud services from the same AWS Account, GCP Project or Azure Subscription.

    In AWS, simply install the connector and secret on any Account you'd like to manage, provide the region, and we will do the rest: we'll sync all your resource types, like EC2, RDS, S3 buckets, IAM roles & policies, ECR, EKS, and more, all at once.

    In GCP, simply install the connector and secret on any Project you'd like to manage and we will do the rest: we'll sync all your resource types, like BigQuery tables, Spanner, Storage, and more all at once.

    In Azure, simply install the connector and secret on any Subscription you'd like to manage, and we will do the rest: we'll sync all your resource types, like Storage, MySQL, PostgreSQL, and more all at once.

    hashtag
    How it works

    1. Go to the Apono Integrations page and click the Catalog tab.

    2. Pick your cloud provider: AWS, GCP or Azure

    3. Pick the level you'd like to integrate on:

      1. AWS:

        1. Pick Organization to manage access to the SSO Identity Center

        2. Pick Account to sync and manage access to a specific Account and multiple services it contains

      2. GCP

        1. Pick Organization to manage access to the Organization or Folder roles.

        2. Pick Project to sync and manage access to a specific Project and multiple services it contains

      3. Azure

        1. Pick Subscription to sync and manage access to a specific Subscription and multiple services it contains

    4. Provide Apono with the required configuration, and you're done! We'll sync all the services for you.

    5. You'll be redirected to the Connected tab, where you can see your integrations and all the services or resource types that were synced for it. This is also the place to see and troubleshoot integration errors and create new Access Flows.

    hashtag
    Prerequisites
    • Terraform

    • AWS Profile mgmt-account with Admin privileges in the Organization's Management Account

    • AWS Profile member-account with Admin privileges in one of the Organization's Member Accounts

    • Activate the CloudFormation StackSet service in your management account

    hashtag
    Step by step guide

    1. Go to Integrations catalog, and select AWS integration

    2. Choose Amazon Organization, and in the "Select an Apono Connector", choose "Add new connector"

    3. Copy the token shown in the UI

    4. Run the following Terraform template:

    circle-info

    The Terraform template does the following:

    • Installs Apono Connector in a Member Account of the organization

    • Installs a CloudFormation Stack in the Management Account of the organization that creates an IAM Role with policies that allow managing access in IAM Identity Center

      • Installs a CloudFormation StackSet that creates an IAM Role in all member accounts of an Organizational Unit, with policies that allow listing AWS resources

    5. After the installation finishes, copy and save the Management Account Role ARN from the output

    6. Go back to the Amazon Organization integrationarrow-up-right

    7. Choose the connector from the dropdown list

    8. Choose the resource types you want to connect, and click Next

    9. Under Name, enter a name for the integration (e.g. AWS Organization)

    10. Under Region, select a single region of the AWS resources you want to integrate.

    11. Under AWS SSO Region, enter the region where the IAM Identity Center is configured

    12. Under SSO Portal, enter your SSO Start URL

    13. In Management Account Role ARN, enter the ARN you copied in step 5

    14. Click Connect

    hashtag
    Results

    The initial connection should now be in progress! After a few minutes, you should see the AWS Org integration as Active on the Integrations page.

    Now, start creating Access Flows for the discovered resources.

    Description

    Apono Token

    Account-specific Apono authentication value

    Use the following steps to obtain your token:

    1. On the Connectors page, click Install Connector. The Install Connector page appears.

    2. Click GCP > Install and Connect GCP Project > CLI (GKE).

    Helm Command Line Interface (Helm CLI)

    Tool used to manage Kubernetes applications

    Owner Role

    Role that provides full access to most Google Cloud resources

    Project ID

    Identifier for the Google Cloud project


    hashtag
    Update a connector

    To update an Apono connector for Google Cloud, follow these steps in the shell environment:

    1. Set the APONO_CONNECTOR_ID environment variable to your chosen connector ID value.

    2. Set the APONO_TOKEN environment variable to your account token.

    3. Set the PROJECT_ID environment variable to the Google Project ID.

    4. Set the GCP_SERVICE_ACCOUNT_EMAIL environment variable.

    5. Set the NAMESPACE to the namespace where the connector is installed.

    6. Run the following helm upgrade command to pull the most recent connector version.

    7. On the Connectors page, verify that the connector has been updated.

    hashtag
    How it works

    hashtag
    Prerequisites

    • Installed Apono connector with network access to the AWS EC2 SSH Servers

      • Minimal Apono connector version: 1.4.0 (visit the Connectors Pagearrow-up-right and update the connector if needed)

    • A user with a key pair authentication for Apono to your SSH servers with sudo permissions. Add this line to the sudoers file:

      • apono ALL=(ALL) NOPASSWD:ALL

    • Optional: User groups representing access to the servers. The default value is "Default", representing access to the server with the user's default Linux group.

    circle-info

    What's a connector? What makes it so secure?

    The Apono Connector is an on-prem connection that can be used to connect resources to Apono and separate the Apono web app from the environment for maximal securityarrow-up-right.

    Read more about the recommended Azure Installation Architecturearrow-up-right.

    hashtag
    Step-by-step guide

    1. In the Apono app, navigate to the Catalogarrow-up-right

    2. Pick the AWS EC2 SSH integration:

    3. Pick an existing connector or create a new one (see connector prerequisites)

    4. In the secret store of your choice, create a secret for Apono with the following params:

      1. Key: base64_private_key

      2. Value: the SSH Server private key in base64 format (see SSH key prerequisites). To find the private key in base64 format, run this command: cat /PATH-TO-KEY/key.pem | base64

    5. Fill the config:

      1. Integration name: Give the integration a name of your choice

      2. User: set the name of the user you created in the prerequisites for the Apono connector.
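The base64 encoding from step 4 can be sketched as follows. The key contents and the `key.pem` path are throwaway placeholders; stripping newlines with `tr` is an assumption for secret stores that expect a single-line value.

```shell
# Throwaway placeholder key -- use your real private key file instead.
printf '%s\n' '-----BEGIN RSA PRIVATE KEY-----' \
              'MIIEpAIBAAKCAQEA...' \
              '-----END RSA PRIVATE KEY-----' > key.pem

# Encode to base64; tr strips line wraps so the secret value is one line.
base64 < key.pem | tr -d '\n' > key.b64
cat key.b64
```

Decoding the result with `base64 -d` should reproduce the original key byte for byte, which is a quick way to verify the value before pasting it into the secret.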

    circle-check

    Apono supports default access to SSH servers, even if no user groups were provided.

    This means users can always log in with their default Linux group.

    hashtag
    Results

    • You will be redirected to the Connected Integrationsarrow-up-right tab.

    • Make sure the AWS EC2 SSH integration appears as Active. The number of discovered SSH servers will appear in the table under Resources.

    • You can now create Access Flows for AWS EC2 SSH Servers!

    Google Cloud Functions

    Google Cloud Functions enables you to build and connect cloud services by writing single-purpose functions that are attached to events emitted from your cloud infrastructure and services.

    Its serverless architecture frees you to write, test, and deploy functions quickly without having to manage infrastructure setup.

    With this integration, you can connect your internal applications to Cloud Functions and manage access to those applications with Apono.

    triangle-exclamation

    Apono currently supports the original version of Google Cloud Functions, 1st Gen.


    hashtag
    Prerequisites

    Item
    Description

    hashtag
    Integrate a Google Cloud Function

    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 8, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalog tab, click Cloud Function Custom Integration. The Connect Integration page appears.

    2. Under Discovery, click Next. The Apono connector section expands.

    3. From the dropdown menu, select a connector.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a .

    1. Click Next. The Integration Config section expands.

    2. Define the Integration Config settings.

      Setting
      Description
    chevron-right💡Are you integrating with Apono using Terraform?hashtag

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    Now that you have completed this integration, you can create access flows that grant permission to your internal application.

    Installing a connector on AWS ECS using Terraform

    Create a connector on Amazon Elastic Container Service

    Connectors are secure on-prem components that link Apono and your resources:

    • No secrets are read, cached, or stored.

    • No account admin privileges need to be granted to Apono.

    • The connector contacts your secret store or key vault to sync data or provision access.

    Once set up, this connector will enable you to sync data from cloud applications and grant and revoke access permissions through Amazon Elastic Container Service (ECS).


    hashtag
    Prerequisites

    Item
    Description

    hashtag
    Install a connector

    Use the following steps to install an Apono connector for AWS on ECS:

    1. At the shell prompt, define an environment variable named TF_VAR_APONO_TOKEN with your Apono token value.

    1. In a new or existing Terraform (.tf) file, add the following provider and module information to create a connector or .

    circle-exclamation

    When using the following snippets, be sure to use the correct value for assignPublicIp:

    • true: Set when a subnet has an Internet Gateway

    Enables installing the connector in the cloud environment and managing access to resources, such as Amazon RDS, S3 buckets, EC2 machines, and self-hosted databases

    Enables installing the connector in the cloud environment but managing access only to non-AWS resources, such as self-hosted databases

    1. At the Terraform CLI, download and install the provider plugin and module.

    1. Apply the Terraform changes. The proposed changes and a confirmation prompt will be listed.

    1. Enter yes to confirm deploying the changes to your AWS account.

    2. On the Connectors page, verify that the connector has been deployed.


    hashtag
    FAQ

    chevron-rightCan the Apono Terraform module be pinned to a version?hashtag

    Yes. You can append the version number to the source location with the ?ref=vX.X.X query string.

    The following example pins the version to 1.0.0 for a connector without permissions.

    Auto Discover AWS RDS Instances

    Automatically identify AWS RDS instances in an Account or Organization for JIT access management

    Apono’s Auto Discovery feature identifies tagged AWS RDS instances, including MySQL and PostgreSQL. Rather than integrating each instance individually, you can integrate selected databases and their resources at once during your AWS Account or Organization setup.

    circle-exclamation

    This capability requires network access to each discoverable database. If your databases are in different AWS networks, make sure to create an AWS connector for each network.


    hashtag
    Prerequisites

    Item
    Description

    hashtag
    Enable Auto Discovery

    Follow these steps to enable Auto Discovery:

    1. In your AWS RDS database instance, create a user for the Apono connector. As part of this step, you will also create a secret.

    chevron-rightIAM Authenticationhashtag
    Tag Key
    Value or Description
    chevron-rightPassword Authenticationhashtag
    Tag Key
    Value or Description
    1. In the Apono UI, on the Catalog tab, click AWS. The Connect Integrations Group page appears.

    2. Under Discovery, click Amazon Account or Amazon Organization.

    3. Under Connect Sub Integration, select Database, Table, and Role to control the granularity of discovery in each discovered instance.

    After connecting your AWS Account or AWS Organization to Apono, you will be redirected to the Connected tab to view your integrations. The new AWS integration, along with sub-integrations for each RDS instance, initializes during the first data fetch. The integration becomes Active once the process completes.

    Now that you have completed this integration, you can create access flows that grant permission to your AWS RDS resources.


    hashtag
    Troubleshooting

    If RDS instances appear with errors on your Integrations page, follow these steps:

    1. Check Tags: Verify all required tags are present and correctly formatted.

    2. Connector Permissions: Ensure the Apono connector has necessary permissions to read tags and access secrets.

    3. Network connectivity: Ensure each RDS instance is accessible by an Apono connector within the same network.
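The tag check in step 1 can be sketched as a simple script over the CLI output. The `tags.json` file below simulates the output of `aws rds list-tags-for-resource` with placeholder values; in practice you would pipe the real CLI output into the same check. The required keys shown are the ones listed in the authentication tables above.

```shell
# Simulated `aws rds list-tags-for-resource` output (placeholder values).
cat > tags.json <<'EOF'
{"TagList":[{"Key":"auth_type","Value":"iam-auth"},{"Key":"apono-connector-id","Value":"my-connector"}]}
EOF

# Verify that every required tag key is present on the instance:
for key in auth_type apono-connector-id; do
  grep -q "\"$key\"" tags.json && echo "found tag: $key" || echo "MISSING tag: $key"
done
```

Password-authenticated instances would additionally need the `apono-secret` tag; add it to the loop if that is your setup.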

    circle-check

    For any questions about the discovery process, please contact Apono Support.

    Manage EKS clusters through an AWS Organization

    Create an integration to manage access to EKS resources

    Elastic Kubernetes Service (EKS) simplifies the management complexities of running Kubernetes on AWS.

    This integration allows Apono to securely manage access to your AWS Elastic Kubernetes cluster by connecting to your AWS Organization using ECS.

    circle-check

    You can also integrate directly with EKS to manage cluster access without an AWS Organization integration.


    Investigate and resolve overprivileged access

    Use insights to quarantine, delete, or right-size permissions

    After running an Access Discovery assessment and reviewing the results, you can investigate and remediate unused or excessive permissions identified across your environment.

    Using the Recommendations tab, you can review the top overprivileged issues for each principal. Access Discovery provides guided remediation options such as quarantine, deletion, or right-sizing to help reduce unnecessary access.


    hashtag
    Remediate overprivileged access

    Elastic Cloud

    Streamline just-in-time access to Elastic Cloud resources via Apono

    Elastic Cloud is a fully managed Elasticsearch service that allows organizations to deploy, search, and analyze data in real time. Integrating Elastic Cloud with Apono enables automated just-in-time access to Elastic Cloud resources based on request workflows and time-bound policies. This approach ensures secure access provisioning while enforcing least-privilege principles.

    This guide explains how to integrate Elastic Cloud with Apono using the Apono UI.


    hashtag
    Prerequisites

    RabbitMQ

    Create an integration to manage access to a RabbitMQ instance

    RabbitMQ is a message broker used to facilitate asynchronous communication between services in distributed systems.

    Through this integration, Apono helps you discover your RabbitMQ resources and securely manage access to them with just-in-time permissions.


    hashtag
    Prerequisite

    Item

    Update a GCP connector in Cloud Run with CLI

    Deploy the latest Docker image of the Apono connector to your Cloud Run service

    Periodically, you may need to update your Google Cloud connector to help maintain functionality, performance, and security.

    This article explains how to update an existing connector deployed on Google Cloud Run using the CLI.


    hashtag
    Prerequisites

    Item
    --profile AWS_PROFILE --region AWS_SERVER_REGION
    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "5.39.1"
        }
      }
    }
    
    provider "aws" {
      alias      = "member_account"
      region     = var.member_account_region
      profile    = "member-account"
    }
    
    provider "aws" {
      alias      = "mgmt_account"
      region     = var.mgmt_identity_center_region
      profile    = "mgmt-account"
    }
    
    
    module "apono-connector" {
      providers = {
        aws = aws.member_account
      }
      source         = "github.com/apono-io/terraform-modules/aws/connector-with-permissions/stacks/apono-connector"
      connectorId    = var.connector_id
      aponoToken     = var.apono_token_connector
      vpcId          = var.member_account_vpc_id
      subnetIds      = var.member_account_subnet_ids
      assignPublicIp = true # change to false if the subnets are configured with NAT gateway
    }
    
    resource "aws_cloudformation_stack" "connector_roles" {
      provider = aws.mgmt_account
    
      name = "apono-organization-integration"
    
      parameters = {
        AponoConnectorId     = var.connector_id
        ConnectorRoleArn     = module.apono-connector.connector_role_arn
        OrganizationalUnitId = var.org_unit_id
      }
    
      capabilities = ["CAPABILITY_NAMED_IAM"]
    
      template_url = "https://apono-public.s3.amazonaws.com/cloudformation/aws_organization_roles_only_integration_template.yml"
    }
    
    output "mgmt_account_role_arn" {
      value       = aws_cloudformation_stack.connector_roles.outputs.ManagementAccountRoleArnOutput
      description = "The Management Account Role Arn parameter for the Apono AWS Organization integration"
    }
    variable "connector_id" {
  description = "A unique name that identifies the Connector."
      type        = string
      default     = "apono-organization-connector"
    }
    
    variable "apono_token_connector" {
      description = "Connector Token that you copied from the Apono App"
      type        = string
    }
    
    variable "member_account_region" {
      description = "The region where the Apono connector will be deployed"
      type        = string
    }
    
    variable "member_account_vpc_id" {
      description = "The VPC ID where the Apono connector will be deployed (example value: vpc-000000000)"
      type        = string
    }
    
    variable "member_account_subnet_ids" {
      description = "List of subnet IDs for the Apono connector (example value: [\"subnet-00000000000\"])"
      type        = list(string)
    }
    
    variable "mgmt_identity_center_region" {
      description = "The region where the IAM Identity Center is configured"
      type        = string
    }
    
    variable "org_unit_id" {
  description = "The Organizational Unit of the accounts to be discoverable by Apono (use the Root Organizational Unit to include all the accounts in the organization)"
      type        = string
    }
    
    export APONO_CONNECTOR_ID=apono-google-integration
    export APONO_TOKEN=abcd1234-e5f6-7g8h-90123i45678
    export PROJECT_ID=my-project-12345
    https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-activate-trusted-access.htmlarrow-up-right
    https://mycompany.awsapps.com/start/#/arrow-up-right
    Apply the tagarrow-up-right from the previous step to all BigQuery datasets you want Apono to discover.

    Key: apono-bucket-account-id Value: [ACCOUNTID]

    Upgrade your connector
    AWS organization integration
    create a tagarrow-up-right
    create your first assessment
    CloudTrail trailsarrow-up-right
    cloud instance and Apono
    Upgrade your connector
    Cloud integration with Apono
    Account settingsarrow-up-right
    configuring log sinks and filtersarrow-up-right
    Assign a role to a service accountarrow-up-right
Copy the token listed on the page in step 1.
    Connectorsarrow-up-right
    Command-line interfacearrow-up-right
    Google Cloud rolearrow-up-right
    Google projectarrow-up-right

    false: Set when a subnet has a NAT Gateway

    AdministratorAccess Role

    AWS rolearrow-up-right that provides full access to AWS services and resources

    Apono Token

    Account-specific Apono authentication value. Use the following steps to obtain your token:

    1. On the Connectorsarrow-up-right page, click Install Connector. The Install Connector page appears.

    2. Click AWS > Install and Connect AWS Account > Terraform (ECS).

    3. Copy the token listed on the page in step 1.

    Virtual Private Cloud (VPC) ID

    Unique identifier for a virtual networkarrow-up-right dedicated to an AWS account

    Subnet IDs

    Unique identifier for a specific subnetarrow-up-right within a VPC

    Terraform CLI

    HashiCorp's toolarrow-up-right for provisioning and managing infrastructure

    with permissions
    without permissions
    Connectorsarrow-up-right
    gcloud auth login
    gcloud services enable cloudresourcemanager.googleapis.com
    gcloud services enable iam.googleapis.com
    gcloud services enable cloudidentity.googleapis.com
    gcloud services enable admin.googleapis.com
    export GCP_ORGANIZATION_ID=<GOOGLE_ORGANIZATION_ID>
    export GCP_PROJECT_ID=<GOOGLE_PROJECT_ID>
    export SERVICE_ACCOUNT_NAME=<SERVICE_ACCOUNT_NAME>
gcloud organizations add-iam-policy-binding $GCP_ORGANIZATION_ID \
      --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
      --role="roles/iam.securityAuditor"
    
    gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
      --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
      --role="roles/bigquery.user"
    
    gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
      --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
      --role="roles/bigquery.dataViewer"
    
    export GCP_SERVICE_ACCOUNT_EMAIL=apono-connector-iam-sa@$PROJECT_ID.iam.gserviceaccount.com
    helm upgrade apono-connector apono-connector --repo https://apono-io.github.io/apono-helm-charts \
        --set-string apono.token=$APONO_TOKEN \
        --set-string apono.connectorId=$APONO_CONNECTOR_ID \
        --set-string serviceAccount.gcpServiceAccountEmail=$GCP_SERVICE_ACCOUNT_EMAIL \
        --namespace $NAMESPACE \
        --create-namespace
    export TF_VAR_APONO_TOKEN="<APONO_TOKEN>"
    export TF_VAR_REGION="<AWS_REGION>"
    export TF_VAR_CONNECTOR_ID="<APONO_CONNECTOR_NAME>"
    export TF_VAR_VPC_ID="<AWS_VPC_ID>"
export TF_VAR_SUBNET_IDS='<["SUBNET_ID1","SUBNET_ID2"]>'
export TF_VAR_TAGS='<{tag1="value1"}>'
    Terraform
provider "aws" {
    region = var.REGION
}

module "apono-connector" {
    source = "github.com/apono-io/terraform-modules//aws/connector-with-permissions/stacks/apono-connector"
    connectorId = var.CONNECTOR_ID
    aponoToken = var.APONO_TOKEN
    vpcId = var.VPC_ID
    subnetIds = var.SUBNET_IDS
    assignPublicIp = true
    tags = var.TAGS
}
    Terraform
provider "aws" {
    region = var.REGION
}

module "apono-connector" {
    source = "github.com/apono-io/terraform-modules//aws/connector-without-permissions/stacks/apono-connector"
    connectorId = var.CONNECTOR_ID
    aponoToken = var.APONO_TOKEN
    vpcId = var.VPC_ID
    subnetIds = var.SUBNET_IDS
    assignPublicIp = true
    tags = var.TAGS
}
    terraform init
    terraform apply
    Terraform
provider "aws" {
    region = var.REGION
}

module "apono-connector" {
    source = "github.com/apono-io/terraform-modules//aws/connector-without-permissions/stacks/apono-connector?ref=v1.0.0"
    connectorId = var.CONNECTOR_ID
    aponoToken = var.APONO_TOKEN
    vpcId = var.VPC_ID
    subnetIds = var.SUBNET_IDS
    assignPublicIp = true
    tags = var.TAGS
}
    IT service management (ITSM) tools
    Metadata for Integration Configarrow-up-right
    troubleshoot guide
    Create Access Flows
    User groups (Optional): The names of groups in the server representing the sudoer role (from a local server, puppet/chef, LDAP server, etc., depending on your network setup)
  • Secret: according to the Secret Store of your choice, insert the secret you created in step 4.

  • secret store
    prerequisitesarrow-up-right

    Instructions for accessing this integration's resources

    Custom Parameters

    Key-value pairs to send to the Google Cloud Function. For example, you can provide a Google Function with a redirect URL that is used for internal provisioning access and passed as part of the action requests.

    Project ID

    ID of the project associated with the Cloud Function

    Region

    Location of the Google Cloud Function instance

    Function Name

    Name of the Google Cloud Function

  • Click Next. The Get more with Apono section expands.

  • Define the Get more with Apono settings.

    Setting
    Description

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the .

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about .

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

  • Click Confirm.

  • Click to copy the code.
  • Make any additional edits.

  • Deploy the code in your Terraform.

  • Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

    Apono Connector

    On-prem connection serving as a bridge between your Google Function and Apono, deployed with a GCP service account. Minimum Required Version: 1.5.3. Use the following steps if you need to update an existing connector.

    Cloud Function (1st gen)

    Named function set up within Cloud Functionsarrow-up-right. To allow the Apono connector to call the Cloud Function, add the Cloud Functions Invoker and Cloud Functions Viewer roles to the apono-connector service account apono-connector-iam-sa for that Cloud Function.

    Integration Name

    Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    Catalogarrow-up-right
    GCP connector
    create access flows

    Access Details

    Tag your database instancearrow-up-right based on the authentication method you selected in the previous step. In the tables below, the values shown in italics are the exact text you should enter when adding these tags.

    AWS region where the secret is stored

    AWS RDS MySQL under Connect Sub Integration
  • Complete the Amazon Account or Amazon Organization integration (steps 3-10).

  • Apono Connector

    One or more Apono connectors for AWS with network access to your AWS RDS databases

    Minimum Required Version: 1.5.3

    Follow these steps to update an existing connector.

    AWS Permissions

    Permissions to complete the following tasks in your AWS instance:

    • Create and manage AWS Secrets Store secrets

    • Tag RDS instances

    auth_type

    iam-auth

    apono-connector-id

    ID of the Apono connector in the same AWS Account or AWS Organization as the database

    auth_type

    user-password

    apono-connector-id

    ID of the Apono connector in the same AWS Account or AWS Organization as the database

    apono-secret

    ARN of the secret containing the database credentials

    RDS PostgreSQL
    AWS RDS MySQL
    Catalogarrow-up-right
    create access flows

    region

    hashtag
    Prerequisites
    Item
    Description

    Apono Connector

    The Apono connector serves as the bridge between AWS and Apono. Learn how to .

    EKS Access Entries

    Connection between EKS permissions and an IAM identity

    EKS access entries must be enabled for Apono to discover and manage EKS clusters. Access entries define how IAM principals are granted access to Kubernetes resources.

    Learn how to .


    hashtag
    Integrate an AWS Organization with EKS resources

    Integrating an AWS Organization
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 10, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalogarrow-up-right tab, click AWS. The Connect Integrations Group page appears.

    2. Under Discovery, click Amazon Organization.

    3. Select the EKS Cluster resource type to sync with Apono. You can select other resource types as well.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage access flows to these resources.

    1. Click Next. The Apono connector section expands.

    2. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions to install the connector to manage EKS clusters.

    1. Click Next. The Integration Config section expands.

    2. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    3. Click Next. The Get more with Apono section expands.

    4. Define the Get more with Apono settings.

      Setting
      Description
    5. Click Confirm.

    chevron-right💡Are you integrating with Apono using Terraform?hashtag

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

    Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

    Now that you have completed this integration, you can create access flows that grant permission to your AWS Organization’s EKS clusters.

    integrate directly with EKS

    Follow these steps to remediate overprivileged access:

    1. On the Access Discoveryarrow-up-right page, in the row of an assessment, click Explore. The View Assessment page opens.

    2. Filter the assessment by defining the filters or clicking a widget and viewing details in the table.

    3. In the table, click the row of a principal. The Principal Details panel opens and displays information about the principal.

    Field
    Description

    Account

    Account where the principal is stored

    Risk Score

    Calculation based on the Principal Risk Level (maximum score of policy actions sensitivity) and the account risk level

    ARN

    Amazon resource name of the principal

    Identities

    Number of human and machine identities

    Last Used

    Last use date of the principal

    Over Privilege

    Overall percentage of overprivileged permissions

    Beside this value in parentheses is the overprivilege percentage for high-risk permissions (Admin, IAM).

    1. On the Recommendations tab, expand a recommendation category to view the suggested summary:

      • Dormant Principal Detected: Principals that have not been used within the past 90 days

      • Unused Privileged Permissions Detected: High-risk actions assigned to but not used by a principal

      • Overprivileged Policy Detected: Policy assigned to a principal that includes unused actions

    circle-info

    The Recommendations tab displays the top three overprivileged issues.

    To support further investigation, you can explore the additional tabs:

    • Used By shows the identities that have used the principal.

    • Used For shows the permissions associated with each policy, including used and unused actions by privilege level.

    As you resolve the initial recommendations, additional issues will appear in the Recommendations tab until all are addressed.

    1. Click How to Fix. A pop-up window appears.

    2. Complete the fix based on the type of recommendation.

    chevron-rightDormant Principal Detectedhashtag

    Quarantine Principal

    This approach uses an Automatic Access Flow to restrict a principal's access using an AWS Permission Boundary until it can be reviewed or safely deleted.

    Follow this step to block unused permissions:

    1. On the Quarantine Principal tab, click Remediate to limit access within the dedicated access flow.

    Apono will add the principal to a Permission Boundary that remains active until the admin disables the Access Flow or deletes the principal.


    Delete Principal

    This approach removes the principal from your AWS environment.

    Follow these steps to delete the principal:

    1. On the Delete Principal tab, copy the code.

    2. Run the code in your AWS CLI to remove the principal from your AWS account.

    chevron-rightUnused Privileged Permissions Detectedhashtag

    This approach uses an Automatic Access Flow to restrict a principal's access using an AWS Permission Boundary until it can be reviewed or safely deleted.

    Follow this step to block unused permissions:

    1. On the Custom Quarantine tab, click Remediate to limit access within the dedicated access flow.

    Apono will add the principal to a Permission Boundary that remains active until the admin disables the Access Flow or deletes the principal.

    chevron-rightOverprivileged Policy Detectedhashtag

    Custom Quarantine

    This approach temporarily restricts sensitive actions until the policy is reviewed or replaced.

    Follow these steps to block unused actions:

    1. On the Custom Quarantine tab, click Remediate to deny actions within the dedicated access flow.

    2. Copy the deny rule JSON provided by Apono.

    3. In your AWS environment, create a deny rule using the Apono-provided JSON. This rule will prevent the principal from using the unused actions detected in its policy.
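To illustrate the shape of such a deny rule, the sketch below writes a hypothetical policy document. The action names (`iam:CreateUser`, `s3:DeleteBucket`), the file name, and the commented attach command are placeholders; in practice you would use the exact JSON that Apono provides for the detected unused actions.

```shell
# Hypothetical deny rule for unused actions (placeholder action names).
cat > deny-unused-actions.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": ["iam:CreateUser", "s3:DeleteBucket"],
      "Resource": "*"
    }
  ]
}
EOF

# Attach it to the overprivileged principal, e.g. as an inline role policy:
# aws iam put-role-policy --role-name MyRole --policy-name apono-quarantine \
#   --policy-document file://deny-unused-actions.json
```

Because an explicit Deny always overrides any Allow in IAM evaluation, this blocks the listed actions even while the principal's original policy remains attached.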


    Right-size

    This approach updates the policy.

    Follow these steps to update the policy:

    1. On the Right Size Policy tab, copy the code.

    2. In AWS, replace the existing policy definition with the new, least-privilege policy definition that contains only used permissions.

    hashtag
    Used By

    The Used By tab displays the human and machine identities that have used the principal.

    Machine Identities and Human Identities sections of the Used By tab

    This view helps you trace usage and validate whether access is still needed. You can expand the row of an identity to view the details of the Last 5 logins:

    • User Agent

    • Source IP

    • Date

    hashtag
    Used For

    The Used For tab displays the policies associated with the selected principal. Each policy summarizes the number of used and unused permissions, organized by privilege level.

    Analysis tab

    Unused access at higher privilege levels (such as Admin or IAM) represents increased risk and should be prioritized for review.

    Follow these steps to remediate a policy:

    1. On the Used For tab, expand a policy.

    circle-info

    The Analysis tab shows all privilege levels and the number of used and unused permissions based on observed activity within the last 90 days.

    The Current policy tab shows the policy JSON.

    1. Click Right-size. A pop-up window appears.

    2. Quarantine or right-size the policy to reduce unnecessary access.

    chevron-rightCustom Quarantinehashtag

    This approach temporarily restricts sensitive actions until the policy is reviewed or replaced.

    Follow this step to block unused actions:

    1. On the Custom Quarantine tab, click Remediate to deny actions within the dedicated access flow.

    chevron-rightRight-sizehashtag

    This approach updates the policy.

    Follow these steps to update the policy:

    1. On the Right Size Policy tab, copy the code.

    2. In AWS, replace the existing policy definition with the new, least-privilege policy definition.

    running an Access Discovery assessment
    Principal details panel
    Item
    Description

    Elastic Cloud API key

    Unique key generated in Elastic Cloud to authenticate connection with Apono

    Learn how to with Elastic Cloud.

    NOTE: For the key to authenticate an integration with Apono, you must provision it with the .

    Elastic organization ID

    Unique identifier for your Elastic Cloud organization

    Apono connector

    On-prem connection serving as a bridge between your Elastic Cloud instance and Apono

    Apono secret

    Value generated with the credentials of the user you create based on your Elastic Cloud API account key and user key:

    • "api_key": <ELASTIC_API_KEY>

    Apono does not store credentials. The Apono Connector uses the secret to communicate with services in your environment and separate the Apono web app from the environment for maximal security.
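As a sketch, the secret value is a small JSON object carrying the API key from the `"api_key"` field listed above. The file name `elastic-secret.json` and the bracketed key value are placeholders; substitute the key you generated in Elastic Cloud.

```shell
# Illustrative Apono secret payload for Elastic Cloud (placeholder value).
cat > elastic-secret.json <<'EOF'
{ "api_key": "<ELASTIC_API_KEY>" }
EOF
cat elastic-secret.json
```

Store this value in your secret manager of choice (or the Apono secret manager, per the integration steps below); the connector reads it at provisioning time rather than Apono storing the credential.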


    hashtag
    Integrate Elastic Cloud

    Elastic Cloud resource tile
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 10, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalogarrow-up-right tab, click Elastic Cloud. The Connect Integration page appears.

    2. Under Discovery, click Next. The Apono connector section expands.

    3. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.


    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, Kubernetes).

    4. Click Next. The Integration Config section expands.

    5. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    6. Click Next. The Secret Store section expands.

    7. Associate the secret or credentials.

    If you select the Apono secret manager, enter the value of your Elastic Cloud API Key.

    8. Click Next. The Get more with Apono section expands.

    9. Define the Get more with Apono settings.

      Setting
      Description

      Custom Access Details

      (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    10. Click Confirm.

    💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

    Refer to Integration Config Metadata for more details about the schema definition.

    Usage

    Now that the integration is complete, you can add Elastic Cloud to define the resources in an access flow. This allows requesters to access Elastic Cloud resources securely based on your approval and provisioning rules.

    Follow the guidance in these articles to define the resource using Elastic Cloud:

    • Define the resource (Self Serve Access Flows)

    • Define the resource (Automatic Access Flows)

    Item
    Description

    RabbitMQ Admin Access

    User account with admin permissions to create a new user


    Create a dedicated Apono user

    Follow these steps to create a dedicated user for Apono:

    1. In the RabbitMQ Management portal, on the Admin tab, under Add a user, enter a Username such as apono_connector.

    2. Set a strong Password. Be sure to save this password to create a secret later.

    3. For Tags, click Admin to assign administrative privileges to the user.

    4. Click Add user.

    5. Copy the URL of the page without the path for use during the integration.

    6. Create a secret with the credentials from steps 1-2. Use the following key-value pair structure when generating the secret. Be sure to replace #PASSWORD with the actual value. If you used a different name for the user, replace apono_connector with the name you assigned to the user.
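    The secret body uses this key-value structure (the username from step 1 is shown; replace #PASSWORD with the actual password):

```json
{
  "username": "apono_connector",
  "password": "#PASSWORD"
}
```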

    You can now integrate RabbitMQ.


    Integrate RabbitMQ

    RabbitMQ tile

    You can also use the steps below to integrate with Apono using Terraform. In the final step, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalog tab, click RabbitMQ. The Connect Integration page appears.

    2. Under Discovery, click one or more resource types to sync with Apono.


    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage Access Flows to these resources.

    3. Click Next. The Apono connector section expands.

    4. From the dropdown menu, select a connector.


    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, Kubernetes).

    5. Click Next. The Integration Config section expands.

    6. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify the integration when constructing an access flow

    7. Click Next. The Secret Store section expands.

    8. Associate the secret or credentials.

    9. Click Next. The Get more with Apono section expands.

    10. Define the Get more with Apono settings.

      Setting
      Description

    11. Click Confirm.

    💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

    Refer to Integration Config Metadata for more details about the schema definition.

    Now that you have completed this integration, you can create access flows that grant permission to your RabbitMQ instance.

    Item
    Description

    Apono token

    Account-specific Apono authentication value

    Follow these steps to obtain your token:

    1. On the Connectors page, click Install Connector. The Install Connector page appears.

    2. Click GCP > Install and Connect GCP Project > CLI (Cloud Run).

    Google Cloud CLI

    Command-line interface used to manage Google Cloud resources

    Google Cloud roles

    Google Cloud role that provides Owner permissions for the project or organization

    Project implementation role:

    • Owner

    Organization implementation roles:

    • Owner

    • Organization Administrator

    Google Cloud information

    Information for your Google Cloud instance

    Google-defined values:

    • Organization ID (GCP_ORGANIZATION_ID): (For Organization connectors only) Unique identifier of your GCP organization

    • Project ID (GCP_PROJECT_ID): (For Organization


    Update a connector

    To update an Apono connector on Google Cloud Run, follow these steps in your shell environment:

    1. Log in to Google Cloud.

    2. Set the environment variables.


    The GCP_ORGANIZATION_ID is only required for Organization connectors.

    3. Authenticate with the Apono Docker registry.

    4. Pull and tag the latest connector image.

    5. Configure Docker for your GCP region.

    6. Push the image to GCP Artifact Registry.

    7. Deploy the updated image to Cloud Run.

    8. On the Connectors page in the Apono UI, verify that the connector is updated.

    Amazon Redshift

    Integrate with Apono to view existing permissions and create Access Flows to Amazon Redshift clusters

    Amazon Redshift is a fast, scalable, and secure fully managed data warehouse service in the cloud, serving as a primary data store for vast datasets and analytic workloads. Amazon Web Services (AWS) enables businesses to analyze their data using standard SQL and existing business intelligence tools, promoting insightful decision-making and integration with various AWS services.

    Through this integration, Apono helps you securely manage access to your Amazon Redshift instance.


    Prerequisites

    Item
    Description

    Integrate Amazon Redshift


    You can also use the steps below to integrate with Apono using Terraform.

    In step 10, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalog tab, click Amazon Redshift. The Connect Integration page appears.

    2. Under Discovery, click Next. The Apono connector section expands.

    3. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.


    If the desired connector is not listed, click + Add new connector and follow the instructions for creating an AWS connector.

    4. Click Next. The Integration Config section expands.

    5. Define the Integration Config settings.

      Setting
      Description

    💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    Now that you have completed this integration, you can create access flows that grant permission to your Amazon Redshift instance.


    Troubleshooting

    Refer to for information about errors that may occur.

    Apono Connector for Kubernetes

    How to install a Connector on a Kubernetes cluster to integrate Kubernetes with Apono

    Overview

    To integrate with Kubernetes and start managing JIT access to Kubernetes resources, you must first install a connector in your Kubernetes cluster.

    This can be done using one of the following methods:

    1. Helm

    2. Terraform


    What's a connector? What makes it so secure?

    The Apono Connector is an on-prem connection that can be used to connect resources to Apono and separate the Apono web app from the environment for maximal security.

    With Helm

    An Apono connector is installed in the cloud platform managing your Kubernetes resource. The installation is performed by running a Helm command with the necessary parameters.

    Prerequisites

    • An existing Kubernetes project on one of the following platforms:

      • Google Kubernetes Engine (GKE)

      • Elastic Kubernetes Service (EKS)

      • Azure Kubernetes Engine (AKS)

      • Kubernetes (self-managed)

    • Helm

    • kubectl

    Step-by-step guide

    Find Your Integration Token

    1. Select any Kubernetes integration in the Catalog.


    You can install a new connector from any Kubernetes New Integration form. Pick the one relevant to your network.

    Connectors for EKS, GKE, AKS and self-managed Kubernetes work in the same way.

    2. From the drop-down list on the next page, select Add a New Connector, and then select Helm.

    3. Copy the token displayed toward the bottom of the page.

    Install the Connector

    Run the following Helm command in a terminal:

    Without permissions

    • If you would like to install the connector in Kubernetes, but not grant Apono access to read or manage access to Kubernetes resources, use this code:

    With permissions

    • If you would like to install the connector in Kubernetes and grant Apono access to read and manage access to Kubernetes resources, use this code:

    Where:

    • [APONO_TOKEN] is the token copied from the integration page in the previous step.

    • [CONNECTOR_NAME] is any name you choose to give the connector.
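    For orientation, the command has the shape sketched below (the chart and repository come from the Apono Helm charts referenced in this guide). The exact value keys for the token and connector name, and the flag that toggles resource permissions, are assumptions here — copy the precise command from the integration page rather than this sketch:

```shell
helm install apono-connector apono-connector \
  --repo https://apono-io.github.io/apono-helm-charts \
  --namespace apono-connector --create-namespace \
  --set-string apono.token=[APONO_TOKEN] \
  --set-string apono.connectorName=[CONNECTOR_NAME]
```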

    Helm will finish with a message that the apono-connector has been installed.


    Interested in HA for the connector?

    Add this variable to the Helm chart to create one or more replicas of the Apono connector instance:

    --set-string replicaCount=<number_of_replicas>

    hashtag
    Results and next steps

    The Kubernetes Connector is now installed.

    1. Return to the Add new integration form from step 1 for EKS, GKE, AKS or self-managed Kubernetes.

    2. The Connector is detected by the form and marked with a green checkmark.


    You can now integrate Apono with your Kubernetes instance.

    Complete the integration with EKS, GKE, AKS, or self-managed Kubernetes.

    Troubleshooting

    • If you are managing more than one Kubernetes cluster, ensure that the current context points to the cluster where the Apono connector will be installed.

      • Get the current context with kubectl config current-context

      • Set the current context with kubectl config use-context [clustername]

    With Terraform

    An Apono connector is installed in the cloud platform managing your Kubernetes resource. The installation is performed by adding an Apono module to your Terraform configuration.

    Prerequisites

    • A Kubernetes project on one of the following platforms:

      • Google Kubernetes Engine (GKE)

      • Elastic Kubernetes Service (EKS)

      • Azure Kubernetes Engine (AKS)

      • Kubernetes (self-managed)

    • Terraform with the following providers:

      • Helm

      • Kubernetes

      • AWS

    Step-by-step guide

    Find Your Integration Token

    1. Select any Kubernetes integration in the Catalog.


    You can install a new connector from any Kubernetes New Integration form. Pick the one relevant to your network.

    Connectors for EKS, GKE, AKS and self-managed Kubernetes work in the same way.

    2. From the drop-down list on the next page, select Add a New Connector, and then select Terraform.

    3. Copy the token displayed toward the bottom of the page.

    Edit the Terraform Configuration

    1. Add the following to your Terraform module.

    Without permissions

    • If you would like to install the connector in Kubernetes, but not grant Apono access to read or manage access to Kubernetes resources, use this code:

    With permissions

    • If you would like to install the connector in Kubernetes and grant Apono access to read and manage access to Kubernetes resources, use this code:

    Where:

    • [APONO_TOKEN] is the token copied from the integration page in the previous step.

    • [CONNECTOR_NAME] is any name you choose to give the connector.
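    As a sketch only — the module source and input names here are assumptions, so copy the exact block from the Apono integration page — the module takes the same two values described above:

```hcl
module "apono_connector" {
  # The source is shown on the Apono integration page; placeholder here.
  source = "<apono-connector module source>"

  apono_token    = "[APONO_TOKEN]"
  connector_name = "[CONNECTOR_NAME]"
}
```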

    2. Run terraform init. It will finish with the message: "Terraform has been successfully initialized!"

    3. Run terraform apply. It will finish with the message: "Apply complete! Resources: N added."

    Results and next steps

    The Kubernetes Connector is now installed.

    1. Return to the Add new integration form from step 1 for EKS, GKE, AKS or self-managed Kubernetes.

    2. The Connector is detected by the form and marked with a green checkmark.


    You can now integrate Apono with your Kubernetes instance.

    Complete the integration with EKS, GKE, AKS, or self-managed Kubernetes.

    Next Steps

    Return to the Catalog, and select one of the following Kubernetes integrations:

    Installing a connector on EKS using CloudFormation

    Installing a connector on Amazon Elastic Kubernetes Service (EKS) for AWS Account or Organization Management

    Apono integrates seamlessly with AWS, using AWS CloudFormation to automate the deployment of all the necessary configurations:

    • Cross-account IAM role with read permissions

    • Amazon SNS topic for event notifications

    • Apono connector, which runs on AWS EKS

    Once installed, the connector syncs data from cloud applications and enables you to manage access permissions through access flows within Amazon EKS.


    Prerequisite

    Item
    Description

    Install a connector

    Follow these steps to install the connector:

    1. On the Catalog tab, click AWS. The Connect Integrations Group page appears.

    2. Under Discovery, click Amazon Account or Amazon Organization.

    3. Click one or more resource types to sync with Apono.


    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage Access Flows to these resources.

    4. Click Next. The Apono connector section expands.

    5. From the Select Connector dropdown menu, click + Add new connector. The Select connector installation strategy section appears.

    6. Click Cloud installation > CloudFormation + Helm (EKS).


    If you are not already signed in, AWS will prompt you to enter your AWS user account.

    7. Define the following Parameters:

      • EKSClusterName: Name of the EKS cluster where the Apono connector will be deployed.

      • EKSIamMode: Authentication mode used by the EKS connector. Possible values include IRSA (IAM Roles for Service Accounts) or Pod Identity.

    Now that you have installed the connector for your Account, you must deploy the connector.


    Deploy the EKS connector

    After installation, the connector must be deployed on your EKS cluster using the Apono Helm chart. You can choose between IRSA or Pod Identity authentication modes, depending on the value you defined for EKSIamMode when installing the connector.

    If using IRSA (IAM Roles for Service Accounts), use the Helm chart below to deploy the connector.

    Parameter
    Description

    After deployment, you can now manage access to your AWS Account from Apono.


    If you choose to integrate with the AWS organization, continue to Deploy Organization roles using CloudFormation to allow an AWS Account to assume IAM role permissions to manage access across all AWS Organization accounts.


    Deploy Organization roles using CloudFormation

    Using IAM role permissions, you can enable the Apono connector to manage an entire AWS Organization. Deploying Organization roles is optional.

    Follow these steps to deploy your Organization roles:

    1. Log in to the management account for your AWS Organization.

    2. Open the IAM Identity Center in your AWS organization.

    3. Select the relevant AWS account on the left menu.

    After installation, you can now manage access across your AWS Organization from Apono.

    Install an Azure connector on ACI using Azure CLI

    Learn how to deploy a connector in an Azure environment

    Azure Container Instances (ACI) is a managed, serverless compute platform for running containerized applications. This guide explains how to install and configure an Apono connector on ACI in your Azure environment using Azure CLI.


    Prerequisites

    Item
    Description

    Install a new connector

    You can install a connector for an Azure Management Group or Subscription.


    The connector requires the following roles:

    1. Directory Readers - to validate users in Azure

    2. User Access Administrator - to provision and de-provision access in the Management Group

    Follow these steps to install a new connector:

    1. At the shell prompt, set the environment variables.

    2. Log in to your Azure account.

    Harmony SASE

    Empower just-in-time group membership for Harmony SASE via Apono

    Harmony Secure Access Service Edge (SASE) provides cloud-delivered network security, allowing organizations to streamline secure remote access with precision.

    Integrating Harmony SASE with Apono allows you to automate just-in-time access by temporarily adding users to specific Harmony SASE groups. This ensures the right users get access only when needed and are automatically removed when access expires, which is ideal for short-term projects or incident response. This approach strengthens security, reduces operational overhead, and supports least-privilege access practices.


    Prerequisites

    Item
    Description

    Integrate Harmony SASE


    You can also use the steps below to integrate with Apono using Terraform.

    In the final step, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalog tab, click Harmony. The Connect Integration page appears.

    2. Under Discovery, click Next. The Apono connector section expands.

    3. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.


    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, Kubernetes).

    4. Click Next. The Integration Config page appears.

    5. Enter a unique, alphanumeric, user-friendly Integration Name to identify the integration when constructing an access flow.

    6. From the dropdown menu, select a Region for the config’s activity.


    If you select the Apono secret manager, enter the value of your .

    7. Click Next. The Get more with Apono section expands.

    8. Define the Get more with Apono settings.

    Setting
    Description

    9. Click Confirm.

    💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    Usage

    Now that the integration is complete, you can add Harmony SASE to define the grantees or resources in an access flow. This allows only the correct requesters to securely access your Harmony-synced groups, based on the access flow’s approval and provisioning rules.

    Follow the guidance in these articles to define the resource using Harmony SASE.

    Access flow type
    Resources

    Azure MySQL

    Create an integration to manage access to Azure-managed MySQL databases

    MySQL is a reliable and secure open-source relational database system. It serves as the main data store for various applications, websites, and products. This includes mission-critical applications and dynamic websites.

    Microsoft enables developers to create cloud-hosted MySQL databases.

    Through this integration, Apono helps you securely manage access to your Azure MySQL databases.


    Prerequisites

    MySQL

    Create an integration to manage access to a MySQL instance

    The MySQL integration enables you to securely manage just-in-time (JIT) access to your MySQL instance.


    Prerequisites

    Item
    Description

    Microsoft SQL Server

    Create an integration to manage access to a Microsoft SQL Server database

    Microsoft SQL Server is a reliable and secure relational database management system. It can be used as the main data store for various applications, websites, and products.

    Microsoft enables developers to create cloud-hosted SQL Server databases.

    Through this integration, Apono helps you securely manage access to your Microsoft SQL Server database.


    Prerequisites

    MongoDB

    Create an integration to manage access to a MongoDB instance

    The MongoDB integration helps you to securely discover and manage your MongoDB resources through Apono.

    After integrating MongoDB with Apono, you'll be able to:

    • Automate resource discovery and mapping across your MongoDB infrastructure

    • Enable administrators to implement just-in-time, least-privilege access policies and securely manage permissions

    SSH Servers

    Create an integration to manage access to SSH servers

    SSH servers are secure, remote access points that allow users to connect to and manage systems over encrypted connections.

    Through this integration, Apono enables managing secure Just-in-Time (JIT) access to SSH servers. Admins can create access flows for specific SSH servers and define approval processes and access durations for different users, groups, and shifts.

    When a user's access request is approved, Apono creates a certificate that grants access to the server and assigns the requester to the appropriate access group(s). Apono may also use the user's default Linux group.



    Redis Cloud (Redislabs)

    Create an integration to manage access to a Redis Cloud instance

    Redis Cloud is a fully managed, in-memory data store that functions as a database, cache, and message broker. With features such as data persistence, replication, and clustering, Redis Cloud provides high availability and fault tolerance, seamless scalability, and automated maintenance for optimal performance and reliability.

    Through this integration, Apono helps you securely manage access to your Redis Cloud instance.


    Prerequisite

    Installing a GCP connector on GKE using CLI (Helm)

    Deploy the Apono connector with Helm

    Integrating a cloud account with Apono allows you to sync and manage your resources:

    • Discover existing privileges and identities

    • Manage employee and application provisioning to cloud assets and data repositories with delegated approval workflows

    Elasticsearch

    Create an integration to manage access to an Elasticsearch instance

    Elasticsearch is a distributed, RESTful search and analytics engine used to store, index, and analyze large volumes of data in real time. By integrating Elasticsearch with Apono, you can enable temporary access to Elasticsearch for developers, data engineers, and operations teams without compromising security.

    This integration allows Apono to manage just-in-time access to your Elasticsearch indices by authenticating through a connector user with scoped privileges.


    Prerequisites

    F5 Network

    Create an integration to manage access to an F5 instance

    F5 Network provides an application delivery and security platform that optimizes performance, ensures availability, and protects applications across on-premises and cloud environments.

    Through this integration, Apono enables you to dynamically manage access to F5 resources by automating permissions and policies within your F5 Access Policy Manager (APM) instance.


    Prerequisites

    OpenSearch

    Create an integration to manage access to an OpenSearch instance

    OpenSearch is an open-source search and analytics suite, maintained by Amazon Web Services (AWS).

    Through this integration, Apono helps you discover your OpenSearch resources and securely manage access to indices and roles through your OpenSearch instance.


    Prerequisites

    Integrate with GKE

    Create an integration to manage access to Kubernetes clusters on Google Cloud

    With a Kubernetes cluster in GKE on Google Cloud, GKE handles the complexities of Kubernetes management, and Google Cloud provides a reliable, scalable managed service.

    Through this integration, Apono helps you securely manage access to your Google Cloud Kubernetes cluster.


    Prerequisites

    Item
    "username": "apono_connector",
    "password": "#PASSWORD"
    gcloud auth login
    export GCP_ORGANIZATION_ID=<GOOGLE_ORGANIZATION_ID>
    export GCP_PROJECT_ID=<GOOGLE_PROJECT_ID>
    export SERVICE_ACCOUNT_NAME=<SERVICE_ACCOUNT_NAME>
    export GCP_ARTIFACT_REPOSITORY_NAME=<ARTIFACT_REPOSITORY_NAME>
    export GCP_CLOUDRUN_SERVICE_NAME=<CLOUDRUN_SERVICE_NAME>
    export GCP_LOCATION=<GCP_LOCATION>
    export APONO_TOKEN=<APONO_TOKEN>
    export APONO_CONNECTOR_ID=<APONO_CONNECTOR_ID>
    docker login registry.apono.io -u apono --password $APONO_TOKEN
    docker pull --platform linux/amd64 registry.apono.io/apono-connector:v1.7.6
    
    export IMAGE_PATH=$GCP_LOCATION-docker.pkg.dev/$GCP_PROJECT_ID/$GCP_ARTIFACT_REPOSITORY_NAME/registry.apono.io/apono-connector:v1.7.6
    
    echo $IMAGE_PATH
    
    docker image tag registry.apono.io/apono-connector:v1.7.6 $IMAGE_PATH
    gcloud auth configure-docker $GCP_LOCATION-docker.pkg.dev
    docker push $IMAGE_PATH
    gcloud run deploy "$GCP_CLOUDRUN_SERVICE_NAME" \
      --image "$IMAGE_PATH" \
      --region="$GCP_LOCATION" \
      --allow-unauthenticated \
      --max-instances=1 \
      --min-instances=1 \
      --cpu=1 \
      --memory=2Gi \
      --no-cpu-throttling \
      --service-account "$SERVICE_ACCOUNT_NAME" \
      --update-env-vars \
      APONO_CONNECTOR_ID="$APONO_CONNECTOR_ID",APONO_TOKEN="$APONO_TOKEN",APONO_URL=api.apono.io

    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

    Credentials Rotation Policy
    Periodic User Cleanup & Deletion

    Copy the token listed on the page in step 1.

    NOTE: This value must be the same token that was used in the existing connector. You can also find the APONO_TOKEN on the YAML tab of your Cloud Run service in the GCP console.

    Organization Administrator

    and
    Project connectors) Unique identifier of your GCP project where the Cloud Run service is running
  • Location (GCP_LOCATION): Region where your Cloud Run service and artifact repository are located

  • Customer-defined values:

    • Service account name (SERVICE_ACCOUNT_NAME): Name of the GCP service account used by the connector

    • Artifact repository name (GCP_ARTIFACT_REPOSITORY_NAME): Name of your Docker-format GCP Artifact Registry

    • Cloud Run service name (GCP_CLOUDRUN_SERVICE_NAME): Name of the Cloud Run service where the connector is deployed

    Apono-defined values:

    • Apono connector ID (APONO_CONNECTOR_ID): Unique identifier used when the connector was originally installed

    NOTE: You can find all parameters above on the YAML tab of your Cloud Run service in the GCP console.


    Tiers

    Calculation based on the Over Privilege percent, Risk Score, and Privilege Permissions percentage.

    Used For

    Kubernetes

    Learn how to update an existing AWS, Azure, GCP, or Kubernetes connector.

    Organization ID

    Unique identifier for your Elastic Cloud organization

    Integration Owner

    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.


    Url

    URL for the RabbitMQ Management Console, excluding the path. You may optionally include the protocol (https:// or http://).

    Example: https://b-1a2b3c4d-5e6f-7g8h-9i0j-1k2l3m4n5o6p.mq.us-east-1.amazonaws.com

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

(Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

(Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

    secret
    Associate the secret or credentials
    Integration Config Metadataarrow-up-right
Azure Kubernetes Service (AKS)
  • Kubernetes (self-managed)

  • Helm

  • kubectl

  • Read more here.
Azure Kubernetes Service (AKS)
  • Kubernetes (self-managed)

  • Terraform with the following providers:

    • Helm

    • Kubernetes

    • AWS

  • Kubernetes (self-managed)arrow-up-right
    security
    EKS
    GKE
    AKS
    self-managed Kubernetesarrow-up-right
    Without permissions
    With permissions
    EKSarrow-up-right
    GKEarrow-up-right
    AKSarrow-up-right
    self-managed Kubernetesarrow-up-right
    Catalogarrow-up-right
    Google Kubernetes Engine (GKE)arrow-up-right
    Elastic Kubernetes Service (EKS)arrow-up-right
Azure Kubernetes Service (AKS)arrow-up-right
    helm install apono-connector apono-connector --repo https://apono-io.github.io/apono-helm-charts \
        --set-string apono.token=[APONO_TOKEN] \
        --set-string apono.connectorId=[CONNECTOR_NAME] \
        --set serviceAccount.manageClusterRoles=false \
        --namespace apono-connector \
        --create-namespace
    helm install apono-connector apono-connector --repo https://apono-io.github.io/apono-helm-charts \
        --set-string apono.token=[APONO_TOKEN] \
        --set-string apono.connectorId=[CONNECTOR_NAME] \
        --set serviceAccount.manageClusterRoles=true \
        --namespace apono-connector \
        --create-namespace
    module "connector" {
        source = "github.com/apono-io/terraform-modules/k8s/connector-without-permissions/stacks/apono-connector"
        aponoToken = [APONO_TOKEN]
        connectorId = [CONNECTOR_NAME] // choose connector name
    }
    module "connector" {  
        source = "github.com/apono-io/terraform-modules/k8s/connector-with-permissions/stacks/apono-connector"  
        aponoToken = [APONO_TOKEN]  
        connectorId = [CONNECTOR_NAME] // choose connector name  
    }

    Hostname of the Amazon Redshift instance to connect

    Port

Port value for the instance. By default, Apono sets this value to 5439.

    Database Name

    Name of the database

  • Click Next. The Secret Store section expands.

  • Associate the secret or credentials.

  • Click Next. The Get more with Apono section expands.

  • Define the Get more with Apono settings.

    Setting
    Description

    Credential Rotation

(Optional) Number of days after which the database credentials must be rotated. Learn more about the credential rotation process.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

Learn more about user cleanup.

    Custom Access Details

(Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

  • Click Confirm.

  • Click to copy the code.
  • Make any additional edits.

  • Deploy the code in your Terraform.

  • Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

    Apono Connector

On-prem connection serving as a bridge between an Amazon Redshift instance and Apono. Minimum Required Version: 1.3.2. Use the following steps to update an existing connector.

    Secret

    Value generated through AWS or Kubernetes

    Apono does not store credentials. The Apono Connector uses the secret to communicate with services in your environment and separate the Apono web app from the environment for maximal securityarrow-up-right.

    User

    Redshift user for Apono with the CREATEUSER permission

    Amazon Redshift Info

    Information for the Amazon Redshift instance to be integrated:

    • Hostname

    • Port Number

    Integration Name

    Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow
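The Redshift user prerequisite above can be created with a statement along these lines (user name and password are placeholders; CREATEUSER is the attribute this guide requires):

```sql
-- Create a connector user that is allowed to create other users.
-- Name and password are placeholders; pick your own strong password.
CREATE USER apono_connector PASSWORD 'Str0ngPassw0rd!' CREATEUSER;
```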

    Catalogarrow-up-right
    Apono connector
    create access flows
    Troubleshooting Errors
    Amazon Redshift tile

    Hostname

    Open the CloudFormationarrow-up-right stack. The Create stack page appears.

    (Optional) EKSNamespace: Kubernetes namespace in your EKS cluster where the Apono connector service account resides. Defaults to apono-connector if not specified.

  • (Optional) EKSServiceAccountName: Name of the Kubernetes service account associated with the Apono connector in the EKS cluster. Defaults to apono-connector if not specified.

  • Under Capabilities, select I acknowledge that AWS CloudFormation might create IAM resources with custom names.

  • Click Create stack.

  • On the Outputs tab, copy the Value for the ConnectorRoleArnOutput. This value will be used to deploy the connector.

  • On the Connectorsarrow-up-right page, verify that the connector has been deployed.

  • If using Pod Identity, use the Helm chart below to deploy the connector.
    Parameters
    Description

    apono.token string

Unique token provided by Apono used to authenticate the connector with the Apono platform. Learn how to create a token.

    apono.connectorId string

    Unique identifier associated with your Apono account

    This ID links the Helm deployment to the configured connector.

    serviceAccount.manageClusterRoles boolean

    True/false flag that determines whether the Helm chart should create the necessary Kubernetes cluster roles and role bindings automatically

    Copy the organizational ID.
  • In CloudFormationarrow-up-right, open the Quick create stack page.

  • Under Parameters, enter values for the following fields:

    1. AponoConnectorId: Copied from the Helm installation.

    2. ConnectorRoleArn: Value copied from step 11 of Install a connector.

    3. OrganizationalUnitId: Organizational Unit ID obtained in step 4.

  • Click Create stack.

  • On the Connectorsarrow-up-right page, verify that the connector has been deployed.

• AdministratorAccess Role

    AWS role with AdministratorAccessarrow-up-right providing full access to AWS services and resources, required for installing the connector

    Full AWS access is not granted to Apono.

    apono.token string

Unique token provided by Apono used to authenticate the connector with the Apono platform. Learn how to create a token.

    apono.connectorId string

    Unique identifier associated with your Apono account

    This ID links the Helm deployment to the configured connector.

    serviceAccount.manageClusterRoles boolean

    A true/false flag that determines whether the connector is allowed to manage access for the Kubernetes cluster.

    serviceAccount.awsRoleArn string

    ARN of the IAM role created through CloudFormation, which the connector’s service account uses to access AWS resources securely
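Assembled from the parameter table above and the chart invocation shown earlier in this guide, a deployment that passes the IAM role ARN might look like the following. This is a sketch, not the verbatim command from the product docs; [CONNECTOR_ROLE_ARN] stands for the ConnectorRoleArnOutput value copied from the CloudFormation stack:

```
helm install apono-connector apono-connector --repo https://apono-io.github.io/apono-helm-charts \
    --set-string apono.token=[APONO_TOKEN] \
    --set-string apono.connectorId=[CONNECTOR_NAME] \
    --set serviceAccount.manageClusterRoles=true \
    --set-string serviceAccount.awsRoleArn=[CONNECTOR_ROLE_ARN] \
    --namespace apono-connector \
    --create-namespace
```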

    Catalogarrow-up-right
    deploy
    installing the connector
    Deploy Organization roles using CloudFormation
    Read more about these Microsoft Entra ID roles herearrow-up-right.
    Set the REGION environment variable.
    1. Run the following command to deploy the connector on your ACI.

    1. Add the User Access Administrator role to the connector in the management group scope.

    1. If your Azure resources have resource locksarrow-up-right applied, assign the Tag Contributor role to the connector at the management scope. This allows Apono to add a tag marker during the grant or revoke process.

    1. For Azure AD, add the Directory Readers role to the connector. For Azure AD Groups, add the Groups Administrator and Privileged Role Administrator roles.

    az rest --method POST --uri 'https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments'
    # First role assignment
    az rest --method POST --uri
    
    1. On the Connectorsarrow-up-right page, verify that the connector has been updated.

    You can now integrate with an Azure Management Group or Azure Subscription.

    Follow these steps to install a new connector:

    1. At the shell prompt, set the environment variables.

    1. Log in to your Azure account.

    1. Set the REGION environment variable.

    1. Run the following command to deploy the connector on your ACI.

    1. Add the User Access Administrator role to the connector in the subscription scope.

1. If your Azure resources have resource locks applied, assign the Tag Contributor role to the connector at the subscription scope. This allows Apono to add a tag marker during the grant or revoke process.

1. For Azure AD, add the Directory Readers role to the connector. For Azure AD Groups, add the Groups Administrator and Privileged Role Administrator roles.

1. On the Connectors page, verify that the connector has been updated.

You can now integrate with an Azure Subscription.

    Apono Token

    Account-specific Apono authentication value

    Use the following steps to obtain your token:

    1. On the Connectorsarrow-up-right page, click Install Connector. The Install Connector page appears.

    2. Click Cloud installation > Azure > Install and Connect Azure Account > CLI (Container Instance).

    3. Copy the token listed on the page in step 1.

    Azure Cloud Command Line Interface (AZ CLI)

    Toolarrow-up-right that enables interacting with Azure services using your command-line shell

    Azure Cloud Information

    Information for your Azure Cloud instance:

    • Subscription IDarrow-up-right

    • Management Group Namearrow-up-right

    • Resource group namearrow-up-right

    Owner Role (Azure RBAC)

    Azure rolearrow-up-right with the following permissions:

    • Grants full access to manage all resources

    • Assigns roles in Azure RBAC

    Global Administrator

The user following this guide should have a Microsoft Entra rolearrow-up-right with the following permission:

    • Manages all aspects of Microsoft Entra ID and Microsoft services that use Microsoft Entra identities

❗ Apono does not require Global Administrator access. The role is required only for the admin following this guide. ❗

    export APONO_CONNECTOR_ID=<A_UNIQUE_CONNECTOR_NAME>
    export APONO_TOKEN=<APONO_TOKEN>
    export SUBSCRIPTION_ID=<AZURE_SUBSCRIPTION_ID>
    export RESOURCE_GROUP_NAME=<AZURE_RESOURCE_GROUP_NAME>
    export MANAGEMENT_GROUP_NAME=<AZURE_MANAGEMENT_GROUP_NAME>
    az login
export REGION=$(az group show --name $RESOURCE_GROUP_NAME --query location --output tsv)
    export PRINCIPAL_ID=$(az container create --subscription 
    az role assignment create --assignee-object-id $PRINCIPAL_ID --assignee-principal-type
    az role assignment create --assignee-object-id $PRINCIPAL_ID --assignee-principal-type

    Click Next. The Secret Store section expands.

  • Associate the secret or credentials.

  • Click to copy the code.
  • Make any additional edits.

  • Deploy the code in your Terraform.

  • Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

    Harmony account access

    Harmony SASE user account hosted by Perimeter 81 or Check Point

    Apono connector

    On-prem connection serving as a bridge between an SSH server and Apono:

    • AWS

    • Azure

    • GCP

Learn how to update an existing AWS, Azure, GCP, or Kubernetes connector.

    Harmony API key

Unique key generated in the Harmony platform to authenticate the connection with Apono. Learn how to generate an API keyarrow-up-right with Harmony. NOTE: When creating the key, select all Key Permissions under the following categories for Apono to access your Harmony instance:

    • Members

    • Groups

    Apono secret

    Value generated with the credentials of the user you create

    Create your secret based on your Redis Cloud API account key and user key:

    • "api_key": <ACCOUNT_KEY>

    • "secret_key": <USER_KEY>

Apono does not store credentials. The Apono Connector uses the secret to communicate with services in your environment and separate the Apono web app from the environment for maximal security.

    Custom Access Details

(Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

(Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

(Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

    Self Serve

    • Define permitted requesters

    • Define the resource

    Automatic

    • Define permitted requesters

    • Define the resource

    Catalogarrow-up-right
    AWS
    Azure
    GCP
    Kubernetes
    Harmony SASE workspace API Key
    Harmony tile
    Before starting this integration, create the items listed in the following table.
    Item
    Description

    Apono Connector

On-prem connection serving as a bridge between an Azure MySQL database instance and Apono. Minimum Required Version: 1.3.0

    MySQL Info

    Information for the database instance to be integrated:

    • Hostname

    • Port Number


    Create a MySQL user

    You must create a user in your MySQL instance for the Apono connector and grant that user permissions to your databases.

    Use the following steps to create a user and grant it permissions:

    1. In your preferred client tool, create a new user. Be sure to set a strong password for the user.

    2. Expose databases to the user. This allows Apono to view database names without accessing the contents of each database.

    3. Grant the user database permissions. The following commands grant Apono the following permissions:

      • Creating users

      • Updating user information and privileges

• Monitoring and troubleshooting processes running on the database

    4. Grant the user only one of the following sets of permissions. The chosen set defines the highest level of permissions to provision with Apono. Expand each of the following options to reveal the SQL commands:

    1. (MySQL 8.0+) Grant the service account the authority to manage other roles. This enables Apono to create, alter, and drop roles. However, this role does not inherently grant specific database access permissions.

    1. Using the credentials from step 1, create a secret for the database instance and associate it to the Azure connector.
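The user-creation and grant steps above might look like the following. This is a sketch with placeholder names and an assumed grant list (the exact statements live in the elided commands of this guide); the ROLE_ADMIN grant corresponds to the MySQL 8.0+ role-management step:

```sql
-- Step 1: create the connector user (password is a placeholder)
CREATE USER 'apono_connector'@'%' IDENTIFIED BY '#PASSWORD';

-- Step 2: let the user see database names without reading their contents
GRANT SHOW DATABASES ON *.* TO 'apono_connector'@'%';

-- Step 3: manage users, update privileges, and monitor processes
GRANT CREATE USER, PROCESS ON *.* TO 'apono_connector'@'%';
GRANT UPDATE ON mysql.* TO 'apono_connector'@'%';

-- MySQL 8.0+: allow Apono to create, alter, and drop roles
GRANT ROLE_ADMIN ON *.* TO 'apono_connector'@'%';
```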

    You can now integrate Azure MySQL.


    Integrate Azure MySQL

    Azure MySQL
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 11, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalogarrow-up-right tab, click Azure MySQL. The Connect Integration page appears.

    2. Under Discovery, click one or more resource types and cloud services to sync with Apono.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage Access Flows to these resources.

    1. Click Next. The Apono connector section appears.

    2. From the dropdown menu, select a connector.

    circle-info

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating an Azure connector and associate the secret with the connector.

    1. Click Next. The Integration Config section expands.

    2. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    3. Click Next. The Secret Store section expands.

4. Associate the secret or credentials.

    5. Click Next. The Custom Access Details section expands.

    6. Define the Get more with Apono settings.

      Setting
      Description
    7. Click Confirm.

💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

Refer to Integration Config Metadata for more details about the schema definition.

    Now that you have completed this integration, you can create access flows that grant permission to your Azure MySQL database instance.

    On-prem connection serving as a bridge between a MySQL instance and Apono:

    MySQL Information

    Information for the database instance to be integrated:

    • Hostname

    • Port


    Create MySQL user

    You must create a user in your MySQL instance for the Apono connector and grant that user permissions to your databases.

    Follow these steps to create a user and grant it database permissions:

    1. In your MySQL client tool, create a new user. Use apono_connector or another name of your choosing for the username. Be sure to set a strong password for the user.

    1. Grant the following access to the user. These permissions allow the connector to list databases, manage users, update internal tables, monitor sessions, reload privileges, and handle connection-related operations.

    circle-exclamation

    If the Apono integration needs to manage MySQL users who have the SYSTEM_USER privilege, you must also grant SYSTEM_USER to the Apono connector user.

    Without this permission, operations such as granting roles or modifying such users will fail with an Access denied error.

    1. Grant the user only one of the following sets of permissions. The chosen set defines the highest level of permissions to provision with Apono. Click on each tab to reveal the SQL commands.

    Allows Apono to read data from databases

    Allows Apono to read and modify data

Allows Apono administrative-level access, including the ability to create and drop tables

    1. (MySQL 8.0+) Grant the user the authority to manage other roles. This enables Apono to create, alter, and drop roles. However, this role does not inherently grant specific database access permissions.

    1. Create a secret with the credentials from step 1. Use the following key-value pair structure when generating the secret. Be sure to replace #PASSWORD with the actual value. If you used a different name for the user, replace apono-connector with the name you assigned to the user.

    circle-check

    You can also input the user credentials directly into the Apono UI during the integration process.
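As an illustration of the three alternative permission sets in step 3 (the exact statements sit behind tabs in the product UI; these are assumed shapes, granted WITH GRANT OPTION so the connector can provision the same level onward — remember to grant exactly one set):

```sql
-- Read Only: Apono can read data and provision read access
GRANT SELECT ON *.* TO 'apono_connector'@'%' WITH GRANT OPTION;

-- Read Write: read and modify data
GRANT SELECT, INSERT, UPDATE, DELETE ON *.* TO 'apono_connector'@'%' WITH GRANT OPTION;

-- Admin: administrative-level access
GRANT ALL PRIVILEGES ON *.* TO 'apono_connector'@'%' WITH GRANT OPTION;
```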

    You can now integrate your MySQL database.


    Integrate MySQL

    MySQL tile
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 11, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalogarrow-up-right tab, click MySQL. The Connect Integration page appears.

    2. Under Discovery, click one or more resource types to sync with Apono.

    circle-info

    Apono automatically discovers and syncs all the resources in the environment. After syncing, you can manage access flows to these resources.

    1. Click Next. The Apono connector section appears.

    2. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, Kubernetes).

    1. Click Next. The Integration Config section expands.

    2. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify the integration when constructing an access flow

    3. Click Next. The Secret Store section expands.

4. Associate the secret or credentials.

    5. Click Next. The Get more with Apono section expands.

    6. Define the Get more with Apono settings.

      Setting
      Description
    7. Click Confirm.

💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

Refer to Integration Config Metadata for more details about the schema definition.

    Now that you have completed this integration, you can create access flows that grant permission to your MySQL database.

    Apono Connector

    Item
    Description

    Apono Connector

    On-prem connection serving as a bridge between a Microsoft SQL Server database instance and Apono:

    Microsoft SQL Server Info

    Information for the database instance to be integrated:

    • Hostname

    • Port number


    Create a Microsoft SQL Server user

    You must create a user in your Microsoft SQL Server instance for the Apono connector.

    Use the following steps to create a user and grant it permissions to your databases:

    1. In your preferred client tool, create a new user. Use apono_connector or another name of your choosing for the username. Be sure to set a strong password for the user.

    circle-exclamation

    The password must be a minimum of 8 characters and include characters from at least three of these four categories:

    • Uppercase letters

    • Lowercase letters

    • Digits (0-9)

    • Symbols

    1. Grant the following access to the user. These permissions allow Apono to view database names, modify login information, grant administrative-level access, manage server-level roles, and perform instance-level configuration tasks.

    circle-info

    While these permissions are elevated, they are required for Apono to securely and reliably manage access provisioning across your SQL Server environment.

    1. Using the credentials from step 1, create a secret for the database instance.
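The steps above might look like the following T-SQL. This is a sketch: the login name is a placeholder, #PASSWORD must satisfy the policy described earlier, and the specific permissions and server role shown are illustrative rather than the exact elided command list:

```sql
-- Step 1: create the login for the Apono connector
CREATE LOGIN apono_connector WITH PASSWORD = '#PASSWORD';

-- Step 2: illustrative server-level permissions for viewing databases
-- and managing logins and roles
GRANT VIEW ANY DATABASE TO apono_connector;
GRANT ALTER ANY LOGIN TO apono_connector;
ALTER SERVER ROLE securityadmin ADD MEMBER apono_connector;
```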

    You can now integrate Microsoft SQL Server.


    Integrate Microsoft SQL Server

    Microsoft SQL Server tile
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 11, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalogarrow-up-right tab, click Microsoft SQL Server. The Connect Integration page appears.

    2. Under Discovery, click one or more resource types to sync with Apono.

    circle-info

Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage access flows to these resources.

    1. Click Next. The Apono connector section expands.

    2. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, Kubernetes).

    1. Click Next. The Integration Config section expands.

    2. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    3. Click Next. The Secret Store section expands.

4. Associate the secret or credentials.

    5. Click Next. The Get more with Apono section expands.

    6. Define the Get more with Apono settings.

      Setting
      Description
    7. Click Confirm.

💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

Refer to Integration Config Metadata for more details about the schema definition.

    Now that you have completed this integration, you can create access flows that grant permission to your Microsoft SQL Server database.

    Allow users to request temporary access to specific clusters, roles, databases, and collections

    Review the following prerequisites and implementation steps to complete this integration.


    Prerequisites

    Item
    Description

    Apono Connector

    On-prem connection serving as a bridge between a MongoDB instance and Apono:

    MongoDB Information

    Information for the database instance to be integrated:

    • Hostname

    • Port

    This information can be obtained from a .


    Create a user

    You must create a MongoDB user for the Apono connector.

    Follow these steps to create a user:

    1. In your MongoDB instance, switch to the admin database.

    1. Create a user (user) and password (pwd) for the Apono connector.

    circle-info

    For more information on creating a user, refer to MongoDB's Create a User on Self-Managed Deploymentsarrow-up-right.

    1. Create a secret with the credentials from step 2. Use the following key-value pair structure when generating the secret. Be sure to replace #PASSWORD with the actual value. If you used a different name for the user, replace apono-connector with the name you assigned to the user.

    circle-check

    You can also input the user credentials directly into the Apono UI during the integration process.
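Steps 1 through 3 above might look like the following. The user name, the role granted, and the secret's key names are assumptions for illustration (grant whatever roles your deployment actually requires, and replace #PASSWORD):

```javascript
// mongosh: create the connector user in the admin database.
// The role shown is an assumption, not Apono's required role list.
db.getSiblingDB("admin").createUser({
  user: "apono-connector",
  pwd: "#PASSWORD",
  roles: [{ role: "userAdminAnyDatabase", db: "admin" }]
});
```

A secret built from those credentials could then take a key-value shape like:

```json
{
  "username": "apono-connector",
  "password": "#PASSWORD"
}
```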


    Integrate MongoDB

    MongoDB tile
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 11, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalogarrow-up-right tab, click MongoDB. The Connect Integration page appears.

    2. Under Discovery, select one or multiple resource types.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage access flows to these resources.

    1. Click Next. The Apono connector section expands.

    2. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, Kubernetes).

    1. Click Next. The Integration Config section expands.

    2. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    3. Click Next. The Secret Store section expands.

4. Associate the secret or credentials.

    5. Click Next. The Get more with Apono section expands.

    6. Define the Get more with Apono settings.

      Setting
      Description
    7. Click Confirm.

💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

Refer to Integration Config Metadata for more details about the schema definition.

    Now that you have completed this integration, you can create access flows that grant permission to your MongoDB instance.

    Prerequisites
    Item
    Description

    Apono Connector

    On-prem connection serving as a bridge between an SSH server and Apono:

    Apono Secret

    Value generated with the credentials of the SSH server user

Create your secret based on your SSH server private key in base64 format.

    To find the private key in base64 format, run the following command.

    Apono does not store credentials. The Apono connector uses the secret to communicate with services in your environment and separates the Apono web app from the environment for security.

    User with Key Pair Authentication

    Dedicated SSH server user account that authenticates with SSH key pairs

    In the sudoers file, add the following line to allow Apono to execute commands with sudo privileges without a password prompt.

    JSON List of Servers

    Structured list of SSH servers to which Apono will connect

    The following information should be provided for each server:

    • name: Unique identifier for the server

    • host: IP address or hostname of the server

    User Groups

    (Optional) User groups representing access to the SSH servers

    Default: Default

    The default represents access to the server with the user's default Linux group.
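For the Apono Secret prerequisite above, the private key can be emitted as a single base64 line with a command like this (the helper name and the key path are ours; point it at your actual private key file):

```shell
#!/bin/sh
# Emit a file's contents as one base64 line (no wrapping).
key_to_base64() {
    base64 < "$1" | tr -d '\n'
}

# Demonstration with a throwaway file; in practice run e.g.:
#   key_to_base64 ~/.ssh/id_ed25519
printf 'example' > /tmp/demo_key
key_to_base64 /tmp/demo_key
```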


    Integrate SSH servers

    SSH tile
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 10, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalogarrow-up-right tab, click SSH. The Connect Integration page appears.

    2. Under Discovery, click Next. The Apono connector section expands.

    3. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    circle-info

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, Kubernetes).

4. Click Next. The Integration Config page appears.

5. Define the Integration Config settings.

    Setting
    Description

    Integration Name

    Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    Servers

    Minified JSON list of servers

    User Groups

    (Optional) Names of groups in the server representing the sudoer role

    User's Login Shell

    (Optional) Command-line interface program used to log in to an account via SSH

    User Key Name

    (Optional) Filename of the SSH key pair used for authentication
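The Servers setting above expects the JSON list as a single minified line. One way to minify it, assuming python3 is available (the server entry is a placeholder):

```shell
# Minify a JSON server list to one line; the sample entry is a placeholder
echo '[{"name": "prod-web-1", "host": "10.0.1.15"}]' \
  | python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin), separators=(",", ":")))'
# -> [{"name":"prod-web-1","host":"10.0.1.15"}]
```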

6. Click Next. The Secret Store section expands.

7. Associate the secret or credentials.

8. Click Next. The Get more with Apono section expands.

9. Define the Get more with Apono settings.

    Setting
    Description

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated

Learn more about credential rotation.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

Learn more about user cleanup.

    Custom Access Details

(Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

(Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

(Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

NOTE: When this setting is defined, an Integration Owner must also be defined.

10. Click Confirm.

    chevron-right💡Are you integrating with Apono using Terraform?hashtag

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

Refer to the Apono Terraform provider documentation for more details about the schema definition.

    Now that you have completed this integration, you can create access flows that grant permission to your SSH instance.

    Item
    Description

    Redis Cloud API

    REST API for managing Redis Cloud programmatically for your account.

    Redis API credentials

    Credentials used to authenticate a Redis REST API request:

    These credentials are required for creating the Apono Secret in the next row.

    Apono Secret

    Value generated with the credentials of the user you create based on your Redis Cloud API account key and user key:

"api_key": <ACCOUNT_KEY>
"secret_key": <USER_KEY>

Apono does not store credentials. The Apono Connector uses the secret to communicate with services in your environment and separates the Apono web app from the environment for maximal security.

    Apono Connector

    On-prem connection serving as a bridge between a Redis Cloud instance and Apono:


    hashtag
    Integrate Redis Cloud (Redislabs)

    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 11, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

1. On the Catalog tab, click Redis Cloud (Redislabs). The Connect Integration page appears.

    2. Under Discovery, select one or multiple resource types for Apono to discover in all instances of the environment.

    3. Click Next. The Apono connector section expands.

    4. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, Kubernetes).

5. Click Next. The Integration Config section expands.

6. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

7. Click Next. The Secret Store section expands.

8. Associate the secret or credentials.

9. Click Next. The Get more with Apono section expands.

10. Define the Get more with Apono settings.

      Setting
      Description
11. Click Confirm.

    chevron-right💡Are you integrating with Apono using Terraform?hashtag

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

Refer to the Apono Terraform provider documentation for more details about the schema definition.

    Now that you have completed this integration, you can create access flows that grant permission to your Redis Cloud instance.


    This article explains how to set up an Apono connector for Google Cloud with Helm.


    hashtag
    Prerequisites

    Item
    Description

    Apono Token

Account-specific Apono authentication value. Use the following steps to obtain your token:

1. On the Connectors page, click Install Connector. The Install Connector page appears.

    2. Click Cloud installation.

    Kubernetes Command Line Tool (kubectl)

Command-line tool used for communicating with a Kubernetes cluster's control plane

    Google Cloud Command Line Interface (Google Cloud CLI)

Command-line tool used to manage Google Cloud resources

    Google Cloud Information

    Information for your Google Cloud instance:

    • (Organization)

    • GKE Cluster Namespace

    Owner Role

Role that provides Owner permissions for the project or organization


    hashtag
    Create an IAM service account

    Use the following sections to create an IAM service account user for either your Google Project or Google Organization.

    hashtag
    Project

    Follow these steps to create a service account for a Google Project:

    1. Set the environment variables.

    2. In your shell environment, log in to Google Cloud and enable the API.

    3. Create the service account.

    4. Assign the following roles to the service account.

      Role
      Permissions Granted

    hashtag
    Organization

    Follow these steps to create a service account for a Google Organization:

    1. In your shell environment, log in to Google Cloud and enable the API.

    2. Set the environment variables.

    3. Create the service account.

    4. Assign the following roles to the service account.

      Role
      Permissions Granted

    hashtag
    Deploy the connector

    Follow these steps to deploy the Apono connector:

1. Create a new GKE cluster or use an existing one.

2. Connect the GKE cluster.

3. Verify that the GKE cluster is selected as the default cluster. The default cluster is denoted with an asterisk (*).

4. Bind the IAM service account to the GKE service account.

5. Deploy the Apono connector on your GKE cluster using the Helm chart.

    Item
    Description

    Elasticsearch role

Role created for the Apono connector with the required privileges

    Elasticsearch user

User created for the Apono connector and assigned the role above

    Elasticsearch endpoint

    Unique URL for your Elasticsearch deployment

Learn how to find your Elasticsearch endpoint.

    NOTE: For Elastic Cloud users, the endpoint can be found in the Deployments tab of your Elastic Cloud console.

    Apono connector

On-prem connection serving as a bridge between an Elasticsearch instance and Apono:

    Apono HTTP proxy

Proxy used to manage Elasticsearch. The default Elasticsearch capabilities do not include authorization controls, and therefore neither does the API. When integrating with Apono using the HTTP proxy, you can manage access to Elasticsearch using Apono Access Flows.


    hashtag
    Integrate Elasticsearch

    Elasticsearch resource tile
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 11, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

1. On the Catalog tab, click Elasticsearch. The Connect Integration page appears.

    2. Under Discovery, select one or more resources to connect to Apono.

    3. Click Next. The Apono connector section expands.

    4. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, Kubernetes).

5. Click Next. The Integration Config section expands.

6. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

7. Click Next. The Secret Store section expands.

8. Associate the secret or credentials.

    circle-info

    If you select the Apono secret manager, enter the value of the username and password for the apono-connector user.

9. Click Next. The Get more with Apono section expands.

10. Define the Get more with Apono settings.

      Setting
      Description

      Credential Rotation

      (Optional) Number of days after which the database credentials must be rotated

Learn more about credential rotation.

11. Click Confirm.

    chevron-right💡Are you integrating with Apono using Terraform?hashtag

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

Refer to the Apono Terraform provider documentation for more details about the schema definition.

    hashtag
    Usage

    Now that the integration is complete, you can add Elasticsearch to define the resources in an access flow. This allows requesters to access Elasticsearch indices securely based on your approval and provisioning rules.

    Follow the guidance in these articles to define the resource using Elastic Cloud:

    • Define the resource (Self Serve Access Flows)

    • Define the resource (Automatic Access Flows)

    Item
    Description

    F5 Admin Access

    User account with admin permissions to create a new user account

    F5 Network info

    Information for integrating Apono and F5:

    • F5 Hostname

    • Access Profile Id

    • Resource Assign Id

    Apono connector

On-prem connection serving as a bridge between your F5 instance and Apono:


    hashtag
    Create a dedicated Apono user

    Follow these steps to create a dedicated user for Apono:

1. In F5, create a new admin user account with a user-friendly name, such as apono-connector.

2. Create a secret for the dedicated user to use during the Apono integration setup. Use the values from step 1 to generate the secret.

    circle-check

    You can also input the user credentials directly into the Apono UI during the integration process.

    You can now integrate F5 Network.


    hashtag
    Integrate F5 Network

    F5 Network tile
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 10, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

1. On the Catalog tab, click F5 Network. The Connect Integration page appears.

    2. Under Discovery, click Next. The Apono connector section expands.

    3. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, Kubernetes).

4. Click Next. The Integration Config section expands.

5. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

6. Click Next. The Secret Store section expands.

7. Associate the secret or credentials.

    circle-info

    If you select the Apono secret manager, enter the following values:

    • Username: Enter the F5 Apono admin account username.

    • Password: Enter the F5 Apono admin account user password.

8. Click Next. The Get more with Apono section expands.

9. Define the Get more with Apono settings.

      Setting
      Description

      Custom Access Details

(Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

10. Click Confirm.

    chevron-right💡 Are you integrating with Apono using Terraform?hashtag

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

Refer to the Apono Terraform provider documentation for more details about the schema definition.

    hashtag
    Usage

    Now that the integration is complete, you can use F5 Network to define the resources in an access flow. This allows requesters to access F5 Network resources securely based on your approval and provisioning rules.

    Follow the guidance in these articles to define the resource using F5 Network:

    • Define the resource (Self Serve Access Flows)

    • Define the resource (Automatic Access Flows)

    Item
    Description

    Apono Connector

On-prem connection serving as a bridge between an OpenSearch instance and Apono:

    OpenSearch Integration Account Access

    OpenSearch Integration account with admin privileges

    OpenSearch Integration

User for Apono's connector (user/password) with the assigned roles


    hashtag
    Create an OpenSearch Integration user

You must create a user in your OpenSearch instance for the Apono connector and grant that user access to your resources.

    Follow these steps to create a service account for OpenSearch Integration in your Cloud Environment:

1. Create a user for Apono's connector.

2. Assign roles: for AWS OpenSearch, security_manager; for open-source OpenSearch, all_access. To enable managing these roles through the REST API, add them to the plugins.security.restapi.roles_enabled setting.

3. Create a new role and provide the required permissions.
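The roles_enabled setting lives in opensearch.yml; a sketch using the role names from the steps above (check the OpenSearch Security plugin documentation for the exact list your version expects):

```yaml
# opensearch.yml -- allow these roles to use the Security REST API
plugins.security.restapi.roles_enabled: ["all_access", "security_manager"]
```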


    hashtag
    Integrate OpenSearch Integration

    Follow these steps to complete the integration:

1. On the Catalog tab, click OpenSearch Integration. The Connect Integration page appears.

2. Under Discovery, choose Index and/or Role, and click Next. The Apono connector section expands.

    3. From the dropdown menu, select a connector.

    circle-info

If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, Kubernetes).

4. Click Next. The Integration Config page appears.

5. Define the Integration Config settings.

    Setting
    Description

    Integration Name

    Unique, alphanumeric, user-friendly name used to identify the integration when constructing an access flow

    Url

URL of the OpenSearch instance

6. Click Next. The Secret Store section expands.

    circle-info

If you select the Apono secret manager, enter the following values:

• Username: Enter the OpenSearch username you created.

• Password: Enter the password for the OpenSearch user.

7. Associate the secret or credentials.

8. Click Next. The Get more with Apono section expands.

9. Define the Get more with Apono settings.

    Setting
    Description

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated

Learn more about credential rotation.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

Learn more about user cleanup.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources

    Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters.

    To view the message as it appears to end users, click Preview.

    Integration Owner

(Optional) Fallback approver if no resource owner is found

    Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource

Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP, this change is also reflected in Apono.

10. Click Confirm.

Now that you have completed this integration, you can create access flows that grant permission to your OpenSearch instance.

Item
    Description

    Apono Connector

On-prem connection installed on the GKE cluster that serves as a bridge between a Kubernetes cluster and Apono

    Kubernetes Engine Cluster Role

Cluster role that grants the Apono connector's service account access to retrieve and list GKE clusters. Apono does not require admin permissions to the Kubernetes environment.


    hashtag
    Integrate with Google Kubernetes Engine (GKE)

    Google Kubernetes Engine (GKE) tile
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 11, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

1. On the Catalog tab, click Google Kubernetes Engine (GKE). The Connect Integration page appears.

    2. Under Discovery, click one or more resource types and cloud services to sync with Apono.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage access flows to these resources.

3. Click Next. The Apono connector section expands.

4. From the dropdown menu, select a connector.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a GCP connector.

5. Click Next. The Integration Config section expands.

6. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

7. Click Next. The Secret Store section expands.

8. (User/Password only) Associate the secret or credentials.

    circle-info

    When the Apono connector is installed on the GKE cluster, you do not need to enter values for the optional fields or to provide a secret.

9. Click Next. The Get more with Apono section expands.

10. Define the Get more with Apono settings.

      Setting
      Description

      Credential Rotation

(Optional) Number of days after which the database credentials must be rotated. Learn more about credential rotation.

11. Click Confirm.

    chevron-right💡Are you integrating with Apono using Terraform?hashtag

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

Refer to the Apono Terraform provider documentation for more details about the schema definition.

    Now that you have completed this integration, you can create access flows that grant permission to your Google Cloud Kubernetes cluster.

    Region

    Region in which the organization runs

    AWS SSO Region

    Region for which your single sign-on is configured

    SSO Portal

Single sign-on URL. This is required for Apono to generate a sign-in link for end users to use their granted access.

    Management Account Role ARN

    (Optional) ARN (step 5) of the role to assume in the management account

    Exclude Organization Unit IDs

    (Optional) Comma-separated list of organizational unit IDs to exclude Example: ou-aaa1-1111,ou-bbb2-2222

    Session Duration (in hours)

    (Optional) Length of time in hours that Apono’s assumed AWS role remains authenticated and authorized to access your EKS resources before the session expires

    Exclude Account IDs

    (Optional) Comma-separated list of account IDs to exclude Example: 7665544332211,7665544332222,766554433333333

    Custom Access Details

(Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

(Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

(Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.


    Installing a GCP connector on GKE using Terraform

    Create a connector on Google Kubernetes Engine

    Connectors are secure on-premises components that link Apono to your resources:

    • No secrets are read, cached, or stored

    • No account admin privileges need to be granted to Apono

    • The connector contacts your secret store or key vault to sync data or provision access

    Once set up, this connector will enable you to sync data from cloud applications and grant and revoke access permissions through Google Kubernetes Engine (GKE).


    hashtag
    Prerequisites

    Item
    Description

    hashtag
    Install a connector

Use the following sections to install a connector for either your Google Project or Google Organization.

    hashtag
    Project

    Follow these steps to install an Apono connector for a Google Project:

1. Set the environment variables.

2. (Optional) Set the following optional environment variables.

3. In your shell environment, log in to Google Cloud and enable the API.

4. In a new or existing Terraform (.tf) file, add the following provider and module information to create a connector.

5. At the Terraform CLI, download and install the provider plugin and module.

6. Apply the Terraform changes. The proposed changes and a confirmation prompt will be listed.

7. Enter yes to confirm deploying the changes to your Google Project instance.

8. On the Connectors page, verify that the connector has been deployed.

    hashtag
    Organization

    Follow these steps to install an Apono connector for a Google Organization:

1. In your shell environment, log in to Google Cloud and enable the API.

2. Set the environment variables.

3. (Optional) Set the following optional environment variables.

4. In a new or existing Terraform (.tf) file, add the following provider and module information to create a connector.

5. At the Terraform CLI, download and install the provider plugin and module.

6. Apply the Terraform changes. The proposed changes and a confirmation prompt will be listed.

7. Enter yes to confirm deploying the changes to your Google Organization instance.

8. On the Connectors page, verify that the connector has been deployed.


    hashtag
    FAQ

    chevron-rightCan the Apono Terraform module be pinned to a version?hashtag

    Yes. You can append the version number to the source location with the ?ref=vX.X.X query string.

    The following examples pin the version to 1.0.0 for a connector without permissions.
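For example, a pinned module block might look like the following sketch (the source address is a placeholder; keep whatever source your configuration already uses and append the ref):

```hcl
module "apono_connector" {
  # "<MODULE_SOURCE>" is a placeholder -- keep your existing source address
  # and append ?ref=v1.0.0 to pin the module to that release
  source = "<MODULE_SOURCE>?ref=v1.0.0"
}
```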

    Azure PostgreSQL

    Create an integration to manage access to Azure-managed PostgreSQL databases

PostgreSQL is an open-source relational database management system emphasizing extensibility and SQL compliance. Microsoft Azure enables developers to create cloud-hosted PostgreSQL databases.

    Through this integration, Apono helps you securely manage access to your Azure PostgreSQL instances.

    To enable Apono to manage Azure PostgreSQL user access, you must create a user and then configure the integration within the Apono UI.


    hashtag
    Prerequisites

    Item
    Description

    hashtag
    Create a PostgreSQL user

    You must create a user in your PostgreSQL instance for the Apono connector and grant that user permissions to your databases.

    triangle-exclamation

    You must use the admin account and password to connect to your database.

    Use the following steps to create a user and grant it permissions:

    1. In your preferred client tool, create a new user. Use apono_connector for the username. Be sure to set a strong password for the user. You must also grant the azure_pg_admin role to the user in the database instance.

2. Grant privileges to the azure_pg_admin role on all databases except template0 and azure_sys. This allows Apono to perform tasks that are not restricted to a single schema or object within the database, such as creating, altering, and dropping database objects.

3. For each database to be managed through Apono, connect to the database and grant azure_pg_admin privileges on all objects in the schemas. This allows Apono to perform tasks that are restricted to schemas within the database, such as modifying table structures, creating new sequences, or altering functions.

4. Connect to the template1 database and grant azure_pg_admin privileges on all objects in the schemas. For any new databases created in the future, this allows Apono to perform tasks that are restricted to schemas within the database, such as modifying table structures, creating new sequences, or altering functions.

5. Using the credentials from step 1, create a secret for the database instance and associate it with the Azure connector.
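As a minimal SQL sketch of step 1 and one database from step 2 (the password and database name are placeholders; the username follows the instructions above):

```sql
-- Step 1: create the connector user and grant the azure_pg_admin role
-- (the password below is a placeholder -- use a strong one)
CREATE USER apono_connector WITH PASSWORD 'REPLACE_WITH_STRONG_PASSWORD';
GRANT azure_pg_admin TO apono_connector;

-- Step 2 (repeat per database, except template0 and azure_sys):
-- my_database is a placeholder database name
GRANT ALL PRIVILEGES ON DATABASE my_database TO azure_pg_admin;
```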


    hashtag
    Integrate Azure PostgreSQL

    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 11, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

1. On the Catalog tab, click Azure PostgreSQL. The Connect Integration page appears.

    2. Under Discovery, click one or more resource types to sync with Apono.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage Access Flows to these resources.

3. Click Next. The Apono connector section appears.

4. From the dropdown menu, select the connector that has been granted read access to the secret for the PostgreSQL instance.

    circle-check

If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector and associating the secret with it.

5. Click Next. The Integration Config section expands.

6. Define the Integration Config settings.

      Setting
      Description
    chevron-right💡Are you integrating with Apono using Terraform?hashtag

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

Now that you have completed this integration, you can create access flows that grant permission to your Azure PostgreSQL instances.

    Installing a GCP connector on Cloud Run using CLI

Deploy the Docker image of the Apono connector as a Cloud Run service

    Cloud Run is a managed compute platform that enables running containerized applications in a fully managed serverless environment.

This article explains how to set up an Apono connector for Cloud Run with a Docker image.


    hashtag
    Prerequisites

    Item
    Description

    hashtag
    Create a Cloud Run user

Use the following sections to create a Cloud Run user for either your Google Project or Google Organization.

    hashtag
    Project

    Follow these steps to create a service account for Cloud Run in a Google Project:

    1. Set the environment variables.

    2. In your shell environment, log in to Google Cloud and enable the API.

    3. Create the service account.

    4. Assign the following roles to the service account.

    Role
    Permissions Granted

    hashtag
    Organization

    Follow these steps to create a service account for Cloud Run in a Google Organization:

    1. In your shell environment, log in to Google Cloud and enable the API.

    2. Set the environment variables.

    3. Create the service account.

    4. Assign the following roles to the service account.

      Role
      Permissions Granted

    hashtag
    Deploy the connector

    Follow these steps to deploy the Apono connector:

    1. Push the connector image to GCP Artifact Registry.

      The following sets of commands push the connector image to the GCP Artifact Registry:

      • New Registry: Use the code on this tab to push the Apono connector Docker image to a new GCP Artifact Registry.

      • Existing Registry

    2. Deploy the Docker image of the Apono connector to the Cloud Run service.

    Vertica

    Create an integration to manage access to a Vertica database

    Vertica is a scalable and high-performance analytics database optimized for fast querying and analysis of large datasets. It delivers speed and flexibility for business intelligence and data warehousing applications.

    Through this integration, Apono helps you securely manage access to your Vertica database and just-in-time (JIT) access to built-in and custom roles.


    hashtag
    Prerequisites

    Item
    Description

    hashtag
    Create a Vertica user

    You must create a user in your Vertica database instance for the Apono connector and grant that user permissions to the database resources.

    Follow these steps to create a user and grant it permissions:

    1. In your preferred client tool, create a new user. Be sure to set a strong password for the user.

    2. Grant the pseudosuperuser role to the user. This allows Apono to create or drop tables and manage user roles and permissions within the Vertica database.

    3. Using the credentials from step 1, create a secret for the database instance.
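The statements behind steps 1 and 2 can be sketched as follows. This is an illustrative heredoc you could pipe into vsql or your preferred client; `apono_connector` and `<STRONG_PASSWORD>` are placeholder values, not requirements.

```shell
# Illustrative only: Vertica statements for creating the connector user
# and granting it the pseudosuperuser role. Replace the placeholders
# before running these against your database.
cat <<'SQL'
CREATE USER apono_connector IDENTIFIED BY '<STRONG_PASSWORD>';
GRANT pseudosuperuser TO apono_connector;
SQL
```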

    circle-info

    Apono does not store credentials. The Apono Connector uses the secret to communicate with services in your environment and separates the Apono web app from the environment for maximal security.

    You can now integrate Vertica.


    hashtag
    Integrate Vertica

    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 10, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the tab, click Vertica Database. The Connect Integration page appears.

    2. Under Discovery, click Next. The Apono connector section expands.

    3. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, or Kubernetes).

    4. Click Next. The Integration Config section expands.

    5. Define the Integration Config settings.

      Setting
      Description
    💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    Now that you have completed this integration, you can create Access Flows that grant permission to your Vertica database.

    AWS Best Practices

    Scale AWS resource management in access flows

    When granting AWS access permissions, listing individual ARNs in IAM policies can quickly cause you to exceed AWS's inline policy character limit. Apono solves this through access scopes and the Apono Query Language (AQL). These solutions use regex patterns to efficiently manage resource groups instead of listing individual ARNs.

    For additional protection, Apono has implemented a 100-resource threshold as a guardrail when individual ARN specification is needed.

    The following sections explain how Apono prevents you from exceeding AWS's inline policy limit:

    • Create strategic AWS resource groupings for access flows

    • Understand how Apono provides clear warnings when the AWS policy limit is exceeded

    • Learn how Apono maintains consistent behavior whether your team uses Portal, Teams, or Slack

    For example, instead of individually specifying 200 S3 buckets in a policy (which would exceed AWS's limit), you can use resource tags to group them by environment or function.
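A rough back-of-the-envelope check makes the problem concrete. The bucket names below are hypothetical; AWS caps managed IAM policies at 6,144 characters (excluding whitespace), and inline policies are similarly bounded.

```shell
# Rough size estimate: 200 hypothetical S3 bucket ARNs listed individually
# in a policy's Resource array, before counting any other policy JSON.
ARNS=$(for i in $(seq 1 200); do printf '"arn:aws:s3:::example-bucket-%03d",' "$i"; done)
echo "Characters consumed by the Resource list alone: ${#ARNS}"
```

The ARN list alone comes to 6,800 characters, already past the 6,144-character managed policy limit, which is why a single tag condition or pattern that replaces the whole list is preferable.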

    circle-info

    Apono validates resource selections for the following types of AWS resources:

    • ASM Secret

    • DynamoDB table


    hashtag
    Prerequisite

    Item
    Description

    hashtag
    Admin Guidance

    When defining access flows that include AWS resources, your resource definition strategy directly impacts policy management.

    hashtag
    Questions

    Before selecting AWS resources for an access flow, consider the following questions:

    • Can all resources of an integration be selected?

    • Have tags been applied to logically group resources by environment, function, or team?

    • Can an access scope be created to group resources across multiple AWS integrations?

    hashtag
    Resource Definition Strategies

    To effectively manage AWS permissions while avoiding policy character limits, you can use access scopes, integrations, or bundles. When possible, we strongly recommend using access scopes or AQL.
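As an analogy for the pattern-based approach, the snippet below uses POSIX extended regular expressions via grep. The resource names are hypothetical and the syntax is not AQL itself; it only illustrates how one pattern can stand in for an enumerated list.

```shell
# Illustrative only: one pattern selects every production database
# instead of enumerating each resource by name.
printf '%s\n' prod-orders-db prod-users-db staging-orders-db dev-users-db \
  | grep -E '^prod-.*-db$'
```

Only `prod-orders-db` and `prod-users-db` match, so the staging and dev resources stay out of scope without being listed anywhere.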

    The following table explains the strategy for each approach.

    Type
    Strategy

    hashtag
    Apono Safeguard

    If you select too many AWS resources for an access flow, the Apono UI will display a warning message instructing you to reduce the number of selected resources.

    Access Flow
    Conditions

    hashtag
    Requestor Guidance

    When requesting access to many AWS resources, Apono will warn you if you have selected too many AWS resources.

    You will receive different notifications about AWS resource limits depending on which platform you use to submit your access request:

    • Portal & Teams: Apono displays a warning before submission when you click Request, preventing requests that exceed the limit.

    circle-info

    In some cases, the request might pass initial validation but still trigger a post-submission notification to select fewer resources.

    • Slack: Apono processes your request first, then sends a message if you need to resubmit with fewer resources.

    hashtag
    Known Limitations While Building Access Flows, Bundles, and Access Scopes

    The following configurations can exceed AWS policy size constraints, whether used within access flows or when bundling multiple resources.

    • Specifying resources by name or ID: Selecting specific resource names or IDs one by one.

    • S3 buckets: Because AWS does not support tag-based policy conditions for S3 buckets, handle bucket access by region or through access scopes or AQL patterns where possible.

    • Excluding a list of resource names or IDs: Excluding specific resources can similarly inflate policy size and is best handled through access scopes or AQL patterns where possible.

    PostgreSQL

    Create an integration to manage access to your PostgreSQL databases

    PostgreSQL databases are open-source relational database management systems emphasizing extensibility and SQL compliance.

    Through this integration, Apono helps you securely manage access to your PostgreSQL instance.

    To enable Apono to manage PostgreSQL user access, you must create a user and then configure the integration within the Apono UI.

    circle-info

    If your PostgreSQL instance runs on a cloud service, follow one of these guides:


    hashtag
    Prerequisites

    Item
    Description

    hashtag
    Create a PostgreSQL user

    You must create a user in your PostgreSQL instance for the Apono connector.

    circle-exclamation

    You must use the admin account and password to connect to your database.

    Follow these steps to create a user and grant it permissions:

    1. In your preferred client tool, create a new user. Use apono_connector for the username. Be sure to set a strong password for the user. You must also grant the SUPERUSER role to the user in the database instance.

    2. Using the credentials from step 1, create a secret for the database instance.

    circle-check

    You can also input the user credentials directly into the Apono UI during the integration process.
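The statements behind step 1 can be sketched as follows. This is an illustrative heredoc you could pipe into psql while connected as the admin user; `<STRONG_PASSWORD>` is a placeholder.

```shell
# Illustrative only: PostgreSQL statements for creating the connector user
# and granting it the SUPERUSER role. Replace <STRONG_PASSWORD> before running.
cat <<'SQL'
CREATE USER apono_connector WITH PASSWORD '<STRONG_PASSWORD>';
ALTER USER apono_connector WITH SUPERUSER;
SQL
```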


    hashtag
    Integrate PostgreSQL

    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 11, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the tab, click PostgreSQL. The Connect Integration page appears.

    2. Under Discovery, select one or multiple resource types for Apono to discover in all instances of the environment.

    3. Click Next. The Apono connector section expands.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, or Kubernetes).

    4. Click Next. The Integration Config section expands.

    5. Define the Integration Config settings.

      Setting
      Description
    💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.


    Now that you have completed this integration, you can create Access Flows that grant permission to your PostgreSQL instance.

    Databricks

    Create an integration to manage access to Databricks resources

    Apono enables you to automate and control access to Databricks by dynamically managing group memberships through just-in-time access flows. This ensures that data analysts, data scientists, and engineers receive only the temporary, task-based access they need to work with sensitive datasets.

    With Apono’s Databricks integration, you can streamline access requests, approvals, and lifecycle management for Databricks groups:

    • Enable self-service access requests by controlling resource access through Databricks group memberships

    • Enforce zero standing privileges by automatically revoking expired access

    • Discover and manage permissions across Databricks groups


    hashtag
    Prerequisites

    Item
    Description

    hashtag
    Integrate Databricks

    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 10, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the tab, click Databricks. The Connect Integration page appears.

    2. Under Discovery, click Next. The Apono connector section expands.

    3. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    circle-info

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, or Kubernetes).

    4. Click Next. The Integration Config section expands.

    5. Define the Integration Config settings.

      Setting
      Description
    circle-info

    If you select the Apono secret manager, enter your Databricks Secret and Client Id.

    6. Click Next. The Get more with Apono section expands.

    7. Define the Get more with Apono settings.

      Setting
      Description
    💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    Now that you have completed this integration, you can create Access Flows that manage Databricks group memberships to control access to resources.

    GitHub

    Create an integration to manage access to GitHub repositories and roles

    GitHub is a code hosting and collaboration platform that enables developers to manage project versions, track changes, and collaborate on software development.

    Through this integration, Apono helps you securely manage access to your GitHub repositories and your organization, team, and owner roles.


    hashtag
    Prerequisites

    Item
    Description

    hashtag
    Integrate GitHub

    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 11, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the tab, click GitHub. The Connect Integration page appears.

    2. Under Discovery, select one or multiple resource types.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage access flows to these resources.

    3. Click Next. The Apono connector section expands.

    4. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, or Kubernetes).

    5. Click Next. The Integration Config page appears.

    6. Define the Integration Config settings.

      Setting
      Description
    💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    Now that you have completed this integration, you can create Access Flows that grant permission to your GitHub instance.

    MariaDB

    Create an integration to manage access to a MariaDB instance

    The MariaDB integration enables you to securely manage just-in-time (JIT) access to roles, databases, and tables within your MariaDB instance.


    hashtag
    Prerequisite

    Item
    Description

    hashtag
    Create a MariaDB user

    You must create a user in your MariaDB instance for the Apono connector and grant that user permissions to your databases.

    Follow these steps to create a user and grant it permissions:

    1. In your preferred client tool, create a new user. Use apono_connector or another name of your choosing for the username. Be sure to set a strong password for the user.

    2. Grant the following access to the user. These permissions allow the connector to list databases, manage users, update internal tables, monitor sessions, reload privileges, and handle connection-related operations.

    3. Grant the user only one of the following sets of permissions. The chosen set defines the highest level of permissions to provision with Apono. Click on each tab to reveal the SQL commands.

    Allows Apono to read data from databases

    Allows Apono to read and modify data

    Allows Apono administrative-level access, including the ability to execute routines and drop tables

    4. Create a secret with the credentials from step 1. Use the following key-value pair structure when generating the secret. Be sure to replace #PASSWORD with the actual value. If you used a different name for the user, replace apono-connector with the name you assigned to the user.

    circle-check

    You can also input the user credentials directly into the Apono UI during the integration process.
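A minimal sketch of the secret payload described in the step above. The values shown are placeholders; substitute your own username if you chose one other than apono-connector.

```shell
# Illustrative only: key-value structure of the connector secret.
# #PASSWORD stands in for the real password.
cat <<'JSON'
{
  "username": "apono-connector",
  "password": "#PASSWORD"
}
JSON
```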

    You can now integrate MariaDB.


    hashtag
    Integrate MariaDB

    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 10, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the tab, click MariaDB. The Connect Integration page appears.

    2. Under Discovery, click Next. The Apono connector section expands.

    3. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, or Kubernetes).

    4. Click Next. The Integration Config section expands.

    5. Define the Integration Config settings.

      Setting
      Description
    💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    Now that you have completed this integration, you can create Access Flows that grant permission to your MariaDB database.

    Install an Azure connector on ACI using PowerShell

    Learn how to deploy a connector in an Azure environment

    Azure Container Instances (ACI) is a managed, serverless compute platform for running containerized applications. This guide explains how to install and configure an Apono connector on ACI in your Azure environment using PowerShell.


    hashtag
    Prerequisites

    Item
    Description

    MongoDB Atlas

    Create an integration to manage access to a MongoDB Atlas instance

    MongoDB Atlas is a fully managed and scalable cloud database service. It provides a flexible and secure platform for storing and managing data across various applications.

    Developers can easily deploy, manage, and scale MongoDB databases in the cloud. Features like automated backups, global clusters, and real-time monitoring simplify database management.

    Through this integration, Apono helps you discover and securely manage access to the resources in your MongoDB Atlas instance.


    hashtag

    Apono Connector for GCP

    How to install a Connector on a GCP Project to integrate a GCP Organization or Project with Apono with Helm

    To integrate a GCP organization or project with Apono and start managing JIT access to GCP cloud resources, you must first install a connector in your GCP environment.

    The GCP connector must be installed on a GKE cluster. You can do this with the CLI or with GCP Deployment Manager in the GCP Portal. The Apono connector requires permissions to the organization or to a specific project, depending on the level of access management you want to achieve with Apono.

    • To manage access to a single GCP project, install a connector in a GKE cluster on that project and grant the connector the appropriate role on the project. Follow the instructions below.

    CloudSQL - MySQL

    Create an integration to manage access to Cloud SQL MySQL databases

    MySQL is a reliable and secure open-source relational database system. It serves as the main data store for various applications, websites, and products. This includes mission-critical applications and dynamic websites. With Cloud SQL, users benefit from Google Cloud's robust infrastructure, which ensures high availability, security, and scalability for their databases.

    Through this integration, Apono helps you securely manage access to your Cloud SQL MySQL databases.


    hashtag
    Prerequisites

    {
      "username": "REDSHIFT_USERNAME",
      "password": "PASSWORD"
    }
    CREATE USER apono_connector WITH PASSWORD 'password';
    ALTER USER apono_connector WITH CREATEUSER;
    helm install apono-connector apono-connector --repo https://apono-io.github.io/apono-helm-charts \
      --set-string apono.token=[APONO_TOKEN] \
      --set-string apono.connectorId=[CONNECTOR_ID] \
      --set serviceAccount.manageClusterRoles=[true/false] \
      --set-string serviceAccount.awsRoleArn=[CONNECTOR_ROLE_ARN_OUTPUT] \
      --namespace apono-connector \
      --create-namespace
    helm install apono-connector apono-connector --repo https://apono-io.github.io/apono-helm-charts \
      --set-string apono.token=[APONO_TOKEN] \
      --set-string apono.connectorId=[CONNECTOR_ID] \
      --set serviceAccount.manageClusterRoles=[true/false] \
      --namespace apono-connector \
      --create-namespace
    export APONO_CONNECTOR_ID=<A_UNIQUE_CONNECTOR_NAME>
    export APONO_TOKEN=<APONO_TOKEN>
    export SUBSCRIPTION_ID=<AZURE_SUBSCRIPTION_ID>
    export RESOURCE_GROUP_NAME=<AZURE_RESOURCE_GROUP_NAME>
    az login
    CREATE USER 'apono_connector'@'%' IDENTIFIED BY 'password';
    GRANT SHOW DATABASES ON *.* TO 'apono_connector'@'%';
    GRANT SELECT ON *.* TO 'apono_connector'@'%';
    GRANT GRANT OPTION ON *.* TO 'apono_connector'@'%';
    GRANT SELECT,ALTER,ALTER ROUTINE,CREATE,CREATE ROUTINE,CREATE TEMPORARY TABLES,CREATE VIEW,DELETE,INDEX,INSERT,TRIGGER,UPDATE ON *.* TO 'apono_connector'@'%';
    GRANT GRANT OPTION ON *.* TO 'apono_connector'@'%';
    GRANT EXECUTE,DROP,SELECT,ALTER,ALTER ROUTINE,CREATE,CREATE ROUTINE,CREATE TEMPORARY TABLES,CREATE VIEW,DELETE,INDEX,INSERT,TRIGGER,UPDATE ON *.* TO 'apono_connector'@'%';
    GRANT GRANT OPTION ON *.* TO 'apono_connector'@'%';
    GRANT ROLE_ADMIN ON *.* TO 'apono_connector';
    GRANT SYSTEM_USER ON *.* TO 'apono_connector'@'%';
    GRANT SELECT ON *.* TO 'apono_connector'@'%';  
    GRANT GRANT OPTION ON *.* TO 'apono_connector'@'%';
    GRANT SELECT,ALTER,ALTER ROUTINE,CREATE,CREATE ROUTINE,CREATE TEMPORARY TABLES,CREATE VIEW,DELETE,INDEX,INSERT,TRIGGER,UPDATE ON *.* TO 'apono_connector'@'%';  
    GRANT GRANT OPTION ON *.* TO 'apono_connector'@'%';
    GRANT EXECUTE,DROP,SELECT,ALTER,ALTER ROUTINE,CREATE,CREATE ROUTINE,CREATE TEMPORARY TABLES,CREATE VIEW,DELETE,INDEX,INSERT,TRIGGER,UPDATE ON *.* TO 'apono_connector'@'%';  
    GRANT GRANT OPTION ON *.* TO 'apono_connector'@'%';
    CREATE USER 'apono_connector'@'%' IDENTIFIED BY 'password';
    GRANT SHOW DATABASES ON *.* TO 'apono_connector'@'%';
    GRANT CREATE USER ON *.* TO 'apono_connector'@'%';  
    GRANT UPDATE ON mysql.* TO 'apono_connector'@'%';  
    GRANT PROCESS ON *.* TO 'apono_connector'@'%';
    GRANT RELOAD ON *.* TO 'apono_connector'@'%';
    GRANT CONNECTION_ADMIN ON *.* TO 'apono_connector'@'%';
    GRANT ROLE_ADMIN ON *.* TO 'apono_connector';
    {
      "username": "apono-connector",
      "password": "#PASSWORD"
    }
    CREATE LOGIN apono_connector WITH PASSWORD = 'password';
    GRANT VIEW ANY DATABASE TO apono_connector;
    USE master; GRANT ALTER ANY LOGIN TO apono_connector;
    USE master; GRANT CONTROL SERVER TO apono_connector;
    USE master; ALTER SERVER ROLE securityadmin ADD MEMBER apono_connector;
    USE master; ALTER SERVER ROLE serveradmin ADD MEMBER apono_connector;
    use admin;
    db.createUser({
        user: "apono-connector",
        pwd: "password",
        roles: [
            {
                "role" : "clusterMonitor",
                "db" : "admin"
            },
            {
                "role" : "userAdminAnyDatabase",
                "db" : "admin"
            },
            {
                "role" : "readWriteAnyDatabase",
                "db" : "admin"
            },
            {
                "role" : "clusterManager",
                "db" : "admin"
            }
        ]
    });
    {
      "username": "apono-connector",
      "password": "#PASSWORD"
    }
    export GCP_PROJECT_ID=<GOOGLE_PROJECT_ID>
    export APONO_TOKEN=<YOUR_APONO_TOKEN>
    export APONO_CONNECTOR_ID=<A_UNIQUE_CONNECTOR_NAME>
    export NAMESPACE=<GKE_CLUSTER_NAMESPACE>
    export SERVICE_ACCOUNT_NAME=<SERVICE_ACCOUNT_NAME>
    gcloud auth login 
    gcloud services enable cloudresourcemanager.googleapis.com --project $GCP_PROJECT_ID
    gcloud services enable cloudasset.googleapis.com --project $GCP_PROJECT_ID
    gcloud services enable cloudidentity.googleapis.com --project $GCP_PROJECT_ID
    gcloud services enable admin.googleapis.com --project $GCP_PROJECT_ID
    gcloud iam service-accounts create $SERVICE_ACCOUNT_NAME --project $GCP_PROJECT_ID
    gcloud auth login
    gcloud services enable cloudresourcemanager.googleapis.com
    gcloud services enable cloudasset.googleapis.com
    gcloud services enable cloudidentity.googleapis.com
    gcloud services enable admin.googleapis.com
    export GCP_PROJECT_ID=<GOOGLE_PROJECT_ID>
    export GCP_ORGANIZATION_ID=<GOOGLE_ORGANIZATION_ID>
    export APONO_TOKEN=<YOUR_APONO_TOKEN>
    export APONO_CONNECTOR_ID=<A_UNIQUE_CONNECTOR_NAME>
    export NAMESPACE=<GKE_CLUSTER_NAMESPACE>
    export SERVICE_ACCOUNT_NAME=<SERVICE_ACCOUNT_NAME>
    gcloud iam service-accounts create $SERVICE_ACCOUNT_NAME --project $GCP_PROJECT_ID
    gcloud container clusters create CLUSTER_NAME
    gcloud container clusters get-credentials CLUSTER_NAME --region REGION --project $GCP_PROJECT_ID
    kubectl config get-contexts
    gcloud iam service-accounts add-iam-policy-binding $SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com \
        --member="serviceAccount:$GCP_PROJECT_ID.svc.id.goog[$NAMESPACE/apono-connector-service-account]" \
        --role="roles/iam.workloadIdentityUser" \
        --project $GCP_PROJECT_ID
    helm install apono-connector apono-connector --repo https://apono-io.github.io/apono-helm-charts \
        --set resources.limits.cpu=1 \
        --set resources.limits.memory=2Gi \
        --set resources.requests.cpu=1 \
        --set resources.requests.memory=2Gi \
        --set-string apono.token=$APONO_TOKEN \
        --set-string apono.connectorId=$APONO_CONNECTOR_ID \
        --set-string serviceAccount.gcpServiceAccountEmail=$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com \
        --namespace $NAMESPACE \
        --create-namespace
    {
      "username": "F5_USERNAME",
      "password": "F5_PASSWORD"
    }
    "cluster:monitor/state"
    "cluster:monitor/health"

    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

    Credentials Rotation Policy
    Periodic User Cleanup & Deletion
    create a token

    Webtop path

  • (Optional) Webtop Sections path

  • Kubernetes

    Learn how to update an existing AWS, Azure, GCP, or Kubernetes connector.

    F5 Hostname

    Host and port of the F5 server hosting the Access Policy Manager (APM)

    Access Profile Id

    Identifier of the top-level access policy container that defines authentication and access logic

    Resource Assign Id

    Identifier of the rule set that represents the permission and assignment of webtop links to users

    Webtop

    Path to the Webtop object that presents assigned applications and links after user authentication

    Webtop Sections (Optional)

    Path used to group or organize webtop links within the Webtop resource

    Integration Owner

    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

    Associate the secret or credentials
    Integration Config Metadata
    AWS
    Azure
    GCP

    Kubernetes

    Minimum Required Version: 1.4.0 Learn how to update an existing AWS, Azure, GCP, or Kubernetes connector.


    AWS
    Azure
    GCP
    https://docs.opensearch.org/docs/latest/security/access-control/users-roles/
    Credentials Rotation Policy
    Periodic User Cleanup & Deletion
    resource owner
    resource owners

    Server URL

    (Optional) URL of the server where the cluster is deployed. Leave this field blank to connect the cluster where the Apono connector is deployed.

    Certificate Authority

    (Optional) Ensures that the Kubernetes API server you are communicating with is trusted and authentic. Leave this field blank to connect the cluster where the Apono connector is deployed.

    Project ID

    (Optional) ID of the GCP project where the cluster is deployed

    Region

    (Optional) Location where the cluster is deployed

    Cluster Name

    (Optional) Name of the cluster to connect. The cluster name should be the same as it appears in GKE.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

    Associate the secret or credentials
    Integration Config Metadata
    connection
    Google Cloud role
    Credentials Rotation Policy
    export REGION=$(az group show --name $RESOURCE_GROUP_NAME --query location --output tsv)
    export PRINCIPAL_ID=$(az container create --subscription $SUBSCRIPTION_ID --resource-group $RESOURCE_GROUP_NAME --name $APONO_CONNECTOR_ID --ports 80 --os-type linux --image registry.apono.io/apono-connector:v1.7.6 --environment-variables APONO_CONNECTOR_ID=$APONO_CONNECTOR_ID APONO_TOKEN=$APONO_TOKEN APONO_URL=api.apono.io CONNECTOR_METADATA='{"cloud_provider":"AZURE","subscription_id":"'"$SUBSCRIPTION_ID"'","resource_group":"'"$RESOURCE_GROUP_NAME"'","region":"'"$REGION"'","is_azure_admin":true}' --cpu 1 --memory 2 --registry-login-server registry.apono.io --registry-username apono --registry-password $APONO_TOKEN --location $REGION --assign-identity --query identity.principalId --output tsv)
    az role assignment create --assignee-object-id $PRINCIPAL_ID --assignee-principal-type ServicePrincipal --role "User Access Administrator" --scope /providers/Microsoft.Management/managementGroups/$MANAGEMENT_GROUP_NAME
    az role assignment create --assignee-object-id $PRINCIPAL_ID --assignee-principal-type ServicePrincipal --role "Tag Contributor" --scope /providers/Microsoft.Management/managementGroups/$MANAGEMENT_GROUP_NAME
    az rest --method POST --uri 'https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments' --body '{"principalId": "'"$PRINCIPAL_ID"'", "roleDefinitionId": "88d8e3e3-8f55-4a1e-953a-9b9898b8876b", "directoryScopeId": "/"}'
    # First role assignment
    az rest --method POST --uri 'https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments' --body '{"principalId": "'"$PRINCIPAL_ID"'", "roleDefinitionId": "fdd7a751-b60b-444a-984c-02652fe8fa1c", "directoryScopeId": "/"}'
    
    # Second role assignment
    az rest --method POST --uri 'https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments' --body '{"principalId": "'"$PRINCIPAL_ID"'", "roleDefinitionId": "e8611ab8-c189-46e8-94e1-60213ab1f814", "directoryScopeId": "/"}'
    resource locks
    Connectors
    Azure Management Group or Azure Subscription
    Click Cloud installation > GCP > Install and Connect GCP Project > CLI (Cloud Run).
  • Copy the token listed on the page in step 1.

  • Service Account Name

    roles/secretmanager.secretAccessor

    • Access secret versions

    • Read the secret data

    roles/iam.securityAdmin

    • Manage IAM policies, roles, and service accounts

    • Set and update IAM policies

    • Grant, modify, and revoke IAM roles for users and service accounts

    roles/secretmanager.secretAccessor

    • Access secret versions

    • Read the secret data

    roles/iam.securityAdmin

    • Manage IAM policies, roles, and service accounts

    • Set and update IAM policies

    • Grant, modify, and revoke IAM roles for users and service accounts

    roles/browser

    • List resources within the organization

    • View metadata

    Connectors
    Command-line tool
    Command-line interface
    Organization ID
    Project ID
    Google Cloud role

    Apono Token

    Account-specific Apono authentication value. Use the following steps to obtain your token:

    1. On the Connectors page, click Install Connector. The Install Connector page appears.

    2. Click Cloud installation.

    3. Click Cloud installation > GCP > Install and Connect GCP Project > CLI (GKE).

    4. Copy the token listed on the page in step 1.

    Google Cloud Command Line Interface (Google Cloud CLI)

    Command-line interface used to manage Google Cloud resources

    Google Cloud Information

    Information for your Google Cloud instance:

    • (Organization) Organization ID
    
    • Project ID

    • Google Cloud Region

    • GKE Cluster Name

    • GKE Cluster Region

    • Tag Key-Value Pairs (if used)

    Optional:

    • Apono Connector ID

    • Service Account Name

    • Namespace

    Owner Role

    Google Cloud role that provides Owner permissions for the project or organization

    Google Project
    Google Organization
    Connectors
    Use the code on this tab to push the Apono connector Docker image to an existing Docker-format GCP Artifact Registry.

    Apono Token

    Account-specific Apono authentication value. Use the following steps to obtain your token:

    1. On the Connectors page, click Install Connector. The Install Connector page appears.

    2. Click Cloud installation.

    3. Click Cloud installation > GCP > Install and Connect GCP Project > CLI (Cloud Run).

    4. Copy the token listed on the page in step 1.

    Kubernetes Command Line Tool (kubectl)

    Command-line tool used for communicating with a Kubernetes cluster's control plane

    Google Cloud Command Line Interface (Google Cloud CLI)

    Command-line interface used to manage Google Cloud resources

    Google Cloud Information

    Information for your Google Cloud instance

    Google-defined Values:

    • (Organization) Organization ID
    
    • Project ID

    • Google Cloud Location

    Customer-defined Values:

    • Service Account Name

    • Artifact Repository Name

    • Cloud Run Service Name

    Google Cloud Roles

    Google Cloud role that provides Owner permissions for the project or organization

    Project Implementation Role:

    • Owner

    Organization Implementation Roles:

    • Owner

    • Organization Administrator

    roles/secretmanager.secretAccessor

    • Access secret versions

    • Read the secret data

    roles/iam.securityAdmin

    • Manage IAM policies, roles, and service accounts

    • Set and update IAM policies

    • Grant, modify, and revoke IAM roles for users and service accounts

    roles/secretmanager.secretAccessor

    • Access secret versions

    • Read the secret data

    roles/iam.securityAdmin

    • Manage IAM policies, roles, and service accounts

    • Set and update IAM policies

    • Grant, modify, and revoke IAM roles for users and service accounts

    roles/browser

    • List resources within the organization

    • View metadata

    Google Project
    Google Organization

    Kubernetes

    Minimum Required Version: 1.4.0

    Learn how to update an existing AWS, Azure, GCP, or Kubernetes connector.

    user: (Optional) Username for the SSH connection. Default: apono

  • port: (Optional) SSH port number. Default: 22

  • tags: (Optional) Labels for grouping server resources for dynamic access management.

    Integration Config Metadata
    AWS
    Azure
    GCP
    Create your secret
    maximal
    Credentials Rotation Policy
    Periodic User Cleanup & Deletion
    resource owner
    resource owners

    Kubernetes

    Learn how to update an existing AWS, Azure, GCP, or Kubernetes connector.

    URL

    Unique URL for your Elasticsearch deployment

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

    Associate the secret or credentials
    Integration Config Metadata
    Create a role
    Create a user
    access the Elasticsearch endpoint
    AWS
    Azure
    GCP
    Authorization controls
    Credentials Rotation Policy
    export REGION=$(az group show --name $RESOURCE_GROUP_NAME --query location --output tsv)
    export PRINCIPAL_ID=$(az container create --subscription $SUBSCRIPTION_ID --resource-group $RESOURCE_GROUP_NAME --name $APONO_CONNECTOR_ID --ports 80 --os-type linux --image registry.apono.io/apono-connector:v1.7.6 --environment-variables APONO_CONNECTOR_ID=$APONO_CONNECTOR_ID APONO_TOKEN=$APONO_TOKEN APONO_URL=api.apono.io CONNECTOR_METADATA='{"cloud_provider":"AZURE","subscription_id":"'"$SUBSCRIPTION_ID"'","resource_group":"'"$RESOURCE_GROUP_NAME"'","region":"'"$REGION"'","is_azure_admin":true}' --cpu 1 --memory 2 --registry-login-server registry.apono.io --registry-username apono --registry-password $APONO_TOKEN --location $REGION --assign-identity --query identity.principalId --output tsv)
    az role assignment create --assignee-object-id $PRINCIPAL_ID --assignee-principal-type ServicePrincipal --role "User Access Administrator" --scope /subscriptions/$SUBSCRIPTION_ID
    az role assignment create --assignee-object-id $PRINCIPAL_ID --assignee-principal-type ServicePrincipal --role "Tag Contributor" --scope /subscriptions/$SUBSCRIPTION_ID
    az rest --method POST --uri 'https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments' --body '{"principalId": "'"$PRINCIPAL_ID"'", "roleDefinitionId": "88d8e3e3-8f55-4a1e-953a-9b9898b8876b", "directoryScopeId": "/"}'
    # First role assignment
    az rest --method POST --uri 'https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments' --body '{"principalId": "'"$PRINCIPAL_ID"'", "roleDefinitionId": "fdd7a751-b60b-444a-984c-02652fe8fa1c", "directoryScopeId": "/"}'
    
    # Second role assignment
    az rest --method POST --uri 'https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments' --body '{"principalId": "'"$PRINCIPAL_ID"'", "roleDefinitionId": "e8611ab8-c189-46e8-94e1-60213ab1f814", "directoryScopeId": "/"}'
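    The three `az rest` calls above create directory role assignments for the connector's managed identity. One way to confirm them afterward is to query the same Graph endpoint filtered by principal. A minimal sketch; the zero GUID is only a placeholder so the snippet stands alone, and the commented call assumes `az` is already authenticated:

```shell
# Build the Graph query for all directory role assignments of the principal.
# $PRINCIPAL_ID normally comes from the container-creation step above; the
# zero GUID here is a placeholder so the snippet runs standalone.
PRINCIPAL_ID="${PRINCIPAL_ID:-00000000-0000-0000-0000-000000000000}"
GRAPH_URI="https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments?\$filter=principalId eq '$PRINCIPAL_ID'"
echo "$GRAPH_URI"

# Run the query (requires an authenticated az session):
# az rest --method GET --uri "$GRAPH_URI"
```

    The response lists one object per assignment, so all three `roleDefinitionId` values created above should appear.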
    gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
        --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
        --role="roles/secretmanager.secretAccessor" \
        --project $GCP_PROJECT_ID
    
    gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
        --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
        --role="roles/iam.securityAdmin" \
        --project $GCP_PROJECT_ID
    gcloud organizations add-iam-policy-binding $GCP_ORGANIZATION_ID \
        --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
        --role="roles/secretmanager.secretAccessor"
    
    gcloud organizations add-iam-policy-binding $GCP_ORGANIZATION_ID \
        --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
        --role="roles/iam.securityAdmin"
        
    gcloud organizations add-iam-policy-binding $GCP_ORGANIZATION_ID \
        --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
        --role="roles/browser"
    gcloud container clusters get-credentials CLUSTER_NAME --region REGION --project $GCP_PROJECT_ID
    kubectl config get-contexts
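    After fetching cluster credentials, kubectl's active context should point at the GKE cluster. A small sketch to confirm this (it falls back to "none" when kubectl is unavailable or no context is configured):

```shell
# Print the context kubectl will use for subsequent commands.
CURRENT_CONTEXT=$(kubectl config current-context 2>/dev/null || echo "none")
echo "Active context: $CURRENT_CONTEXT"
```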
    export TF_VAR_PROJECT_ID="<GCP_PROJECT_ID>"
    export TF_VAR_REGION="<GCP_REGION>"
    export TF_VAR_NAME="<GKE_CLUSTER_NAME>"
    export TF_VAR_LOCATION="<GCP_CLUSTER_REGION>"
    export TF_VAR_APONO_TOKEN="<APONO_TOKEN>"
    export TF_VAR_TAGS="<{tag1="value1"}>"
    export TF_VAR_CONNECTOR_ID="<APONO_CONNECTOR_NAME>"
    export TF_VAR_SERVICE_ACCOUNT_NAME="<GCP_SERVICE_ACCOUNT_NAME>"
    export TF_VAR_NAMESPACE="<NAMESPACE>"
    gcloud auth login 
    gcloud services enable cloudresourcemanager.googleapis.com --project $GCP_PROJECT_ID
    gcloud services enable cloudasset.googleapis.com --project $GCP_PROJECT_ID
    gcloud services enable cloudidentity.googleapis.com --project $GCP_PROJECT_ID
    gcloud services enable admin.googleapis.com --project $GCP_PROJECT_ID
    provider "google" {
      project = var.PROJECT_ID
      region  = var.REGION
    }
    
    data "google_client_config" "provider" {}
    
    data "google_container_cluster" "gke" {
      name     = var.NAME
      location = var.LOCATION
    }
    
    provider "helm" {
      kubernetes {
        host  = "https://${data.google_container_cluster.gke.endpoint}"
        token = data.google_client_config.provider.access_token
        cluster_ca_certificate = base64decode(
          data.google_container_cluster.gke.master_auth[0].cluster_ca_certificate
        )
        exec {
          api_version = "client.authentication.k8s.io/v1beta1"
          command     = "gke-gcloud-auth-plugin"
        }
      }
    }
    
    module "apono-connector" {
      source             = "github.com/apono-io/terraform-modules//gcp/organization-wide-connector/gke/stacks/apono-connector"
      connectorId        = var.CONNECTOR_ID // OPTIONAL
      aponoToken         = var.APONO_TOKEN
      projectId          = var.PROJECT_ID
      serviceAccountName = var.SERVICE_ACCOUNT_NAME // OPTIONAL
      namespace          = var.NAMESPACE // OPTIONAL
      tags               = var.TAGS
    }
    terraform init
    terraform apply
    gcloud auth login
    gcloud services enable cloudresourcemanager.googleapis.com
    gcloud services enable cloudasset.googleapis.com
    gcloud services enable cloudidentity.googleapis.com
    gcloud services enable admin.googleapis.com
    export TF_VAR_PROJECT_ID="<GCP_PROJECT_ID>"
    export TF_VAR_REGION="<GCP_REGION>"
    export TF_VAR_NAME="<GKE_CLUSTER_NAME>"
    export TF_VAR_LOCATION="<GCP_CLUSTER_REGION>"
    export TF_VAR_APONO_TOKEN="<APONO_TOKEN>"
    export TF_VAR_ORGANIZATION_ID="<GCP_ORGANIZATION_ID>"
    export TF_VAR_TAGS="<{tag1="value1"}>"
    export TF_VAR_CONNECTOR_ID="<APONO_CONNECTOR_NAME>"
    export TF_VAR_SERVICE_ACCOUNT_NAME="<GCP_SERVICE_ACCOUNT_NAME>"
    export TF_VAR_NAMESPACE="<NAMESPACE>"
    provider "google" {
      project = var.PROJECT_ID
      region  = var.REGION
    }
    
    data "google_client_config" "provider" {}
    
    data "google_container_cluster" "gke" {
      name     = var.NAME
      location = var.LOCATION
    }
    
    provider "helm" {
      kubernetes {
        host  = "https://${data.google_container_cluster.gke.endpoint}"
        token = data.google_client_config.provider.access_token
        cluster_ca_certificate = base64decode(
          data.google_container_cluster.gke.master_auth[0].cluster_ca_certificate
        )
        exec {
          api_version = "client.authentication.k8s.io/v1beta1"
          command     = "gke-gcloud-auth-plugin"
        }
      }
    }
    
    module "apono-connector" {
      source             = "github.com/apono-io/terraform-modules//gcp/organization-wide-connector/gke/stacks/apono-connector"
      connectorId        = var.CONNECTOR_ID // OPTIONAL
      aponoToken         = var.APONO_TOKEN
      projectId          = var.PROJECT_ID
      organizationId     = var.ORGANIZATION_ID
      serviceAccountName = var.SERVICE_ACCOUNT_NAME // OPTIONAL
      namespace          = var.NAMESPACE // OPTIONAL
      tags               = var.TAGS
    }
    terraform init
    terraform apply
    Pinned Version (Project)
    provider "google" {
      project = var.PROJECT_ID
      region  = var.REGION
    }
    
    data "google_client_config" "provider" {}
    
    data "google_container_cluster" "gke" {
      name     = var.NAME
      location = var.LOCATION
    }
    
    provider "helm" {
      kubernetes {
        host  = "https://${data.google_container_cluster.gke.endpoint}"
        token = data.google_client_config.provider.access_token
        cluster_ca_certificate = base64decode(
          data.google_container_cluster.gke.master_auth[0].cluster_ca_certificate
        )
        exec {
          api_version = "client.authentication.k8s.io/v1beta1"
          command     = "gke-gcloud-auth-plugin"
        }
      }
    }
    
    module "apono-connector" {
      source             = "github.com/apono-io/terraform-modules//gcp/organization-wide-connector/gke/stacks/apono-connector?ref=v1.0.0"
      connectorId        = var.CONNECTOR_ID // OPTIONAL
      aponoToken         = var.APONO_TOKEN
      projectId          = var.PROJECT_ID
      serviceAccountName = var.SERVICE_ACCOUNT_NAME // OPTIONAL
      namespace          = var.NAMESPACE // OPTIONAL
      tags               = var.TAGS
    }
    Pinned Version (Organization)
    provider "google" {
      project = var.PROJECT_ID
      region  = var.REGION
    }
    
    data "google_client_config" "provider" {}
    
    data "google_container_cluster" "gke" {
      name     = var.NAME
      location = var.LOCATION
    }
    
    provider "helm" {
      kubernetes {
        host  = "https://${data.google_container_cluster.gke.endpoint}"
        token = data.google_client_config.provider.access_token
        cluster_ca_certificate = base64decode(
          data.google_container_cluster.gke.master_auth[0].cluster_ca_certificate
        )
        exec {
          api_version = "client.authentication.k8s.io/v1beta1"
          command     = "gke-gcloud-auth-plugin"
        }
      }
    }
    
    module "apono-connector" {
      source             = "github.com/apono-io/terraform-modules//gcp/organization-wide-connector/gke/stacks/apono-connector?ref=v1.0.0"
      connectorId        = var.CONNECTOR_ID // OPTIONAL
      aponoToken         = var.APONO_TOKEN
      projectId          = var.PROJECT_ID
      organizationId     = var.ORGANIZATION_ID
      serviceAccountName = var.SERVICE_ACCOUNT_NAME // OPTIONAL
      namespace          = var.NAMESPACE // OPTIONAL
      tags               = var.TAGS
    }
    export GCP_PROJECT_ID=<GOOGLE_PROJECT_ID>
    export SERVICE_ACCOUNT_NAME=<SERVICE_ACCOUNT_NAME>
    export GCP_ARTIFACT_REPOSITORY_NAME=<ARTIFACT_REPOSITORY_NAME>
    export GCP_CLOUDRUN_SERVICE_NAME=<CLOUDRUN_SERVICE_NAME>
    export GCP_LOCATION=<GCP_LOCATION>
    export APONO_TOKEN=<YOUR_APONO_TOKEN>
    export APONO_CONNECTOR_ID=<A_UNIQUE_CONNECTOR_NAME>
    gcloud auth login 
    gcloud services enable cloudresourcemanager.googleapis.com --project $GCP_PROJECT_ID
    gcloud services enable cloudasset.googleapis.com --project $GCP_PROJECT_ID
    gcloud services enable cloudidentity.googleapis.com --project $GCP_PROJECT_ID
    gcloud services enable admin.googleapis.com --project $GCP_PROJECT_ID
    gcloud iam service-accounts create $SERVICE_ACCOUNT_NAME --project $GCP_PROJECT_ID
    gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
        --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
        --role="roles/secretmanager.secretAccessor" \
        --project $GCP_PROJECT_ID
    
    gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
        --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
        --role="roles/iam.securityAdmin" \
        --project $GCP_PROJECT_ID
    gcloud auth login
    gcloud services enable cloudresourcemanager.googleapis.com
    gcloud services enable cloudasset.googleapis.com
    gcloud services enable cloudidentity.googleapis.com
    gcloud services enable admin.googleapis.com
    export GCP_ORGANIZATION_ID=<GOOGLE_ORGANIZATION_ID>
    export GCP_PROJECT_ID=<GOOGLE_PROJECT_ID>
    export SERVICE_ACCOUNT_NAME=<SERVICE_ACCOUNT_NAME>
    export GCP_ARTIFACT_REPOSITORY_NAME=<ARTIFACT_REPOSITORY_NAME>
    export GCP_CLOUDRUN_SERVICE_NAME=<CLOUDRUN_SERVICE_NAME>
    export GCP_LOCATION=<GCP_LOCATION>
    export APONO_TOKEN=<YOUR_APONO_TOKEN>
    export APONO_CONNECTOR_ID=<A_UNIQUE_CONNECTOR_NAME>
    gcloud iam service-accounts create $SERVICE_ACCOUNT_NAME --project $GCP_PROJECT_ID
    gcloud organizations add-iam-policy-binding $GCP_ORGANIZATION_ID \
        --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
        --role="roles/secretmanager.secretAccessor"
    
    gcloud organizations add-iam-policy-binding $GCP_ORGANIZATION_ID \
        --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
        --role="roles/iam.securityAdmin"
    
    gcloud organizations add-iam-policy-binding $GCP_ORGANIZATION_ID \
        --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
        --role="roles/browser"
    gcloud artifacts repositories create $GCP_ARTIFACT_REPOSITORY_NAME --repository-format=docker \
        --location=$GCP_LOCATION --description="Docker repository" \
        --project=$GCP_PROJECT_ID
    
    docker login registry.apono.io -u apono --password $APONO_TOKEN 
    
    docker pull --platform linux/amd64 registry.apono.io/apono-connector:v1.7.6
    
    export IMAGE_PATH=$GCP_LOCATION-docker.pkg.dev/$GCP_PROJECT_ID/$GCP_ARTIFACT_REPOSITORY_NAME/registry.apono.io/apono-connector:v1.7.6
    
    echo $IMAGE_PATH
    
    docker image tag registry.apono.io/apono-connector:v1.7.6 $IMAGE_PATH
    
    gcloud auth configure-docker \
        $GCP_LOCATION-docker.pkg.dev
    
    docker push $IMAGE_PATH
    docker login registry.apono.io -u apono --password $APONO_TOKEN 
    
    docker pull --platform linux/amd64 registry.apono.io/apono-connector:v1.7.6
    
    export IMAGE_PATH=$GCP_LOCATION-docker.pkg.dev/$GCP_PROJECT_ID/$GCP_ARTIFACT_REPOSITORY_NAME/registry.apono.io/apono-connector
    
    echo $IMAGE_PATH
    
    docker image tag registry.apono.io/apono-connector $IMAGE_PATH
    
    gcloud auth configure-docker \
        $GCP_LOCATION-docker.pkg.dev
    
    docker push $IMAGE_PATH
    gcloud run deploy $GCP_CLOUDRUN_SERVICE_NAME --image $IMAGE_PATH --region=$GCP_LOCATION  --allow-unauthenticated --max-instances=1 --min-instances=1 --cpu=1 --memory=2Gi --no-cpu-throttling --service-account $SERVICE_ACCOUNT_NAME --update-env-vars APONO_CONNECTOR_ID=$APONO_CONNECTOR_ID,APONO_TOKEN=$APONO_TOKEN,APONO_URL=api.apono.io
    "key": "base64_private_key"
    "value": "<SSH_SERVER_PRIVATE_KEY>"
    cat /PATH-TO-KEY/key.pem | base64
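    Note that `base64` wraps its output on some platforms (GNU coreutils wraps at 76 columns unless `-w 0` is passed), while the secret value must be a single line. A portable sketch that strips newlines explicitly, using a throwaway file in place of your real key:

```shell
# Create a throwaway PEM file for illustration only; in practice point
# this at your real private key, e.g. /PATH-TO-KEY/key.pem.
printf '%s\n' '-----BEGIN KEY-----' 'example-key-material' '-----END KEY-----' > /tmp/example_key.pem

# Encode and remove any line wrapping so the result is one line.
ENCODED=$(base64 < /tmp/example_key.pem | tr -d '\n')
echo "$ENCODED"
```

    The resulting single-line string is what goes into the `base64_private_key` value of the secret.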
    apono ALL=(ALL) NOPASSWD:ALL
    {
      "cluster": [ "monitor", "manage_security" ],
      "indices": [
        {
          "names": [ "*" ],
          "privileges": [ "monitor" ]
        }
      ]
    }
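    The role body above can be created through Elasticsearch's create-role security API. A minimal sketch that validates the JSON locally before sending it; the deployment URL, role name, and credentials in the commented call are placeholders:

```shell
# Write the role definition shown above to a file.
cat > /tmp/apono_es_role.json <<'EOF'
{
  "cluster": [ "monitor", "manage_security" ],
  "indices": [
    {
      "names": [ "*" ],
      "privileges": [ "monitor" ]
    }
  ]
}
EOF

# Validate the JSON before sending it to the cluster.
python3 -m json.tool /tmp/apono_es_role.json > /dev/null && echo "role JSON OK"

# Apply it (placeholder URL, role name, and credentials):
# curl -u elastic:"$ES_PASSWORD" -X PUT "$ES_URL/_security/role/apono_role" \
#      -H 'Content-Type: application/json' -d @/tmp/apono_es_role.json
```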

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

    Hostname

    Hostname of the MySQL instance to connect

    Port

    Port value for the database. By default, Apono sets this value to 3306.

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

    Associate the secret or credentials
    Integration Config Metadata
    connection

    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

    Kubernetes

    Minimum Required Version: 1.3.0
    
    Learn how to update an existing AWS, Azure, GCP, or Kubernetes connector.

    Hostname

    Hostname of the MySQL database to connect

    Port

    Port value for the instance. Default Value: 3306

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

    Associate the secret or credentials
    Integration Config Metadata
    AWS
    Azure
    GCP

    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

    Kubernetes

    Hostname

    Hostname of the Microsoft SQL Server instance to connect

    Port

    Port value for the instance. By default, Apono sets this value to 1433.

    Database Name

    Name of the database. By default, Apono sets this value to master.

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

    secret or credentials
    Integration Config Metadata
    AWS
    Azure
    GCP

    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

    Kubernetes

    Hostname

    Address of the MongoDB instance

    Port

    Network port the MongoDB instance is listening on for connections

    By default, MongoDB uses port 27017.
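    The hostname and port above combine with the database credentials into a standard `mongodb://` connection string. A sketch with placeholder values (substitute your own host, user, and password):

```shell
# Placeholder values for illustration; substitute your instance details.
MONGO_HOST="mongo.example.internal"
MONGO_PORT=27017
MONGO_USER="apono_user"
MONGO_PASS="example-password"

# Single-host connection string, authenticating against the admin database.
MONGO_URI="mongodb://${MONGO_USER}:${MONGO_PASS}@${MONGO_HOST}:${MONGO_PORT}/admin"
echo "$MONGO_URI"
```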

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

    Associate the secret or credentials
    Integration Config Metadata
    AWS
    Azure
    GCP
    connection string

    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

    Kubernetes

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions is displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner


    (Optional) Fallback approver if no Resource Owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must also be defined.

    Hostname of the PostgreSQL instance to connect

    Port

    Port value for the database. By default, Apono sets this value to 5432.

    Database Name

    Name of the database to integrate. By default, Apono sets this value to postgres.

    SSL Mode

    (Optional) Mode of Secure Sockets Layer (SSL) encryption used to secure the connection with the SQL database server

    • require: An SSL-encrypted connection must be used.

    • allow: An SSL-encrypted or unencrypted connection is used. If an SSL-encrypted connection is unavailable, the unencrypted connection is used.

  • Click Next. The Secret Store section expands.

  • Associate the secret or credentials.

  • Click Next. The Get more with Apono section expands.

  • Define the Get more with Apono settings.

    Setting
    Description

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions is displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

  • Click Confirm.

  • Click to copy the code.
  • Make any additional edits.

  • Deploy the code in your Terraform.

  • Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

    Apono Connector

    On-prem connection serving as a bridge between an Azure PostgreSQL database instance and Apono. Minimum Required Version: 1.3.0

    PostgreSQL Info

    Information for the database instance to be integrated:

    • Hostname

    • Port Number

    • Database Name

    Integration Name

    Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow


    Hostname

    Hostname of the Vertica database instance to connect

    Port

    Port value for the instance. By default, Apono sets this value to 5433.

    Database Name

    Name of the database

  • Click Next. The Secret Store section expands.

  • Associate the secret or credentials.

  • Click Next. The Get more with Apono section expands.

  • Define the Get more with Apono settings.

    Setting
    Description

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions is displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

  • Click Confirm.

  • Click to copy the code.
  • Make any additional edits.

  • Deploy the code in your Terraform.

  • Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

    Apono Connector

    On-prem connection serving as a bridge between a Vertica database instance and Apono:

    • AWS

    • Azure

    • GCP

    Vertica Information

    Information for the database instance to be integrated:

    • Hostname

    • Port number

    • Database name

    Integration Name

    Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow


    Hostname

    EC2 Connect

  • EC2 Manage

  • S3 Bucket (by "any resource" and region tags)

  • SNS Topic

  • SQS queue

  • Is individual resource selection truly necessary for security requirements?

    Apono Connector

    On-prem connection serving as a bridge between an AWS instance and Apono

    Minimum Required Version: 1.7.0

    Use the following steps to update an existing connector.

    Access Scopes

    (Strongly Recommended, All Access Flows) Use when you need dynamic, rule-based resource grouping

    Access scopes and AQL let you create flexible filters that adapt to your changing infrastructure. This makes them ideal for scenarios like all production databases or EC2 instances in the eu-region.

    Integrations

    (Automatic Access Flow) Use when providing access to an entire AWS account or organization, or to resources that share specific tags

    Integrations let you align permissions with your organization structure:

    • Use tags in your cloud environment to group resources, such as production, eu-region, customer-support.

    • Apply Any resources when all resources of the integration can be included.

    This strategy is ideal for scenarios like managing cross-account DevOps access or regional support team permissions.

    Bundles

    (Automatic Access Flow, Self Serve Access Flow) Use when packaging related resources as a cohesive unit for user requests

    Bundles let you create logical groupings of permissions that serve specific functions.

    When creating a bundle, explore one of the following options:

    • Use tags in your cloud environment to group resources, such as production, eu-region, customer-support.

    • Apply Any resources when all resources of the integration can be included.

    This strategy is ideal for scenarios like complete development environment access or full analytics platform access.

    Automatic

    • You have selected more than 100 AWS resources by name (Select by name) from one integration or between multiple integrations.

    • You have selected more than 100 AWS resources by name (Select by name) within one bundle or between multiple bundles.

    Self Serve

    • You have selected more than 100 AWS resources within one bundle or between multiple bundles.


    From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    Hostname of the PostgreSQL database instance to connect

    Port

    Port value for the instance

    By default, Apono sets this value to 5432.

    Database Name

    Name of the database to integrate

    By default, Apono sets this value to postgres.

    SSL Mode

    (Optional) Mode of Secure Sockets Layer (SSL) encryption used to secure the connection with the SQL database server

    • require: An SSL-encrypted connection must be used.

    • allow: An SSL-encrypted or unencrypted connection is used. If an SSL-encrypted connection is unavailable, the unencrypted connection is used.
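These settings map onto a standard libpq-style connection string. A minimal sketch, assuming a placeholder hostname and the defaults noted above:

```shell
# Placeholder hostname; port, database name, and sslmode follow the settings above.
PGHOST="db.internal.example.com"
PGPORT=5432
PGDATABASE="postgres"
PGSSLMODE="require"   # or "allow"
CONN="host=${PGHOST} port=${PGPORT} dbname=${PGDATABASE} sslmode=${PGSSLMODE}"
echo "${CONN}"
```

A client such as psql accepts this string directly, e.g. `psql "${CONN}"`.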

  • Click Next. The Secret Store section expands.

  • Associate the secret or credentials.

  • Click Next. The Get more with Apono section expands.

  • Define the Get more with Apono settings.

    Setting
    Description

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions is displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

  • Click Confirm.

  • Click to copy the code.
  • Make any additional edits.

  • Deploy the code in your Terraform.

  • Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

    Apono Connector

    On-prem connection serving as a bridge between your PostgreSQL databases and Apono:

    • AWS

    • Azure

    • GCP

    Minimum Required Version: 1.3.0. Use the following steps to update an existing connector.

    PostgreSQL Info

    Information for the database instance to be integrated:

    • Hostname

    • Port number

    • Database Name

    Integration Name

    Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow


    Hostname

    Accounts Management URL

    Example: https://accounts.cloud.databricks.com

    Account Id

    Unique identifier for the Databricks account

  • Click Next. The Secret Store section expands.

  • Associate the secret or credentials.

  • Integration Owner

    (Optional) Fallback approver if no Resource Owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must also be defined.

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

  • Click Confirm.

  • Click to copy the code.
  • Make any additional edits.

  • Deploy the code in your Terraform.

  • Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

    Apono connector

    On-prem connection serving as a bridge between a Databricks instance and Apono:

    • AWS

    • Azure

    • GCP

    Learn how to update an existing AWS, Azure, GCP, or Kubernetes connector.

    Databricks account management URL

    Accounts Management URL, for example: https://accounts.cloud.databricks.com

    Databricks account ID

    Unique identifier for the Databricks account. Follow these steps:

    1. In your account management console, click your profile icon.

    2. Copy the Account ID under your email.

    Service principal

    Account for the Apono integration with admin privileges. Follow these steps:

    1. In your account management console, click your workspace > Manage account. A new page opens.

    2. From the side navigation, click User management. The User management page opens.

    3. On the Service principals tab, click Add service principal. The Add service principal popup window appears.

    4. Enter the New service principal display name.

    5. Click Add service principal. The principal is created and added to the list of principals.

    6. Click the name of the principal.

    7. On the Roles tab, click the Account Admin toggle to ON.

    8. Grant principal access:

      1. On the Permissions tab, click Grant access. The Grant access to others pop-up window appears.

      2. From the User, Group or Service Principal dropdown menu, select the principal.

    Databricks credentials

    Client ID and secret used to securely authenticate the service principal. Follow these steps:

    1. On the Credentials & secrets tab of the service principal, click Generate secret. The Generate OAuth secret popup window opens.

    2. Enter the Lifetime (days) duration of the secret.

    3. Click Generate. The Generate OAuth secret popup window is replaced by the Generate secret popup window.

    4. Copy the Secret and Client ID.

    Create a secret based on your secret and client ID key:

    "client_id": "<DATABRICKS_CLIENT_ID>",

    "client_secret": "<DATABRICKS_SECRET>"

    Integration Name

    Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions is displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.


    Databricks Accounts URL

    GitHub organization name

  • Click Next. The Secret Store section expands.

  • Associate the secret or credentials.

  • Click Next. The Get more with Apono section expands.

  • Define the Get more with Apono settings.

    Setting
    Description

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions is displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

  • Click Confirm.

  • Click to copy the code.
  • Make any additional edits.

  • Deploy the code in your Terraform.

  • Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

    Apono Connector

    On-prem connection serving as a bridge between a GitHub instance and Apono:

    • AWS

    • Azure

    • GCP

    Minimum Required Version: 1.3.2

    GitHub Organization Account

    GitHub organization account that possesses admin repository and user permissions

    Company Email of User

    (Non-Enterprise subscription) Company email associated with the user's GitHub profile

    For non-Enterprise organizations, set the user email to public in GitHub.

    If the email is private, Apono will not be able to locate the user.

    Synced IdP

    (Enterprise subscription) Identity provider (IdP) connected with your GitHub account

    For Enterprise organizations, sync GitHub with your IdParrow-up-right.

    GitHub Token

    GitHub authentication tokenarrow-up-right. Under Select scopes, click the checkboxes next to the following parent scopes. Selecting a parent scope also selects all of its child scopes:

    • repo

    • admin:org

    • user

    Apono Secret

    Value generated in one of the following environments (AWS, Azure, GCP, or Kubernetes)

    Create a secret for the GitHub instance. For the key, use token. For the value, use the generated GitHub token. "token": "<GITHUB_ACCESS_TOKEN>"

    Apono does not store credentials. The Apono Connector uses the secret to communicate with services in your environment and separate the Apono web app from the environment for maximal security.

    Integration Name

    Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow


    Organization

    Hostname of the MariaDB instance to connect

    Port

    Port value for the instance. By default, Apono sets this value to 3306.

  • Click Next. The Secret Store section expands.

  • Associate the secret or credentials.

  • Click Next. The Get more with Apono section expands.

  • Define the Get more with Apono settings.

    Setting
    Description

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions is displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

  • Click Confirm.

  • Click to copy the code.
  • Make any additional edits.

  • Deploy the code in your Terraform.

  • Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

    Apono Connector

    On-prem connection serving as a bridge between a MariaDB instance and Apono:

    • AWS

    • Azure

    • GCP

    Minimum Required Version: 1.3.0. Learn how to update an existing AWS, Azure, GCP, or Kubernetes connector.

    Integration Name

    Unique, alphanumeric, user-friendly name used to identify the integration when constructing an access flow


    Hostname

    Apono Token

    Account-specific Apono authentication value

    Use the following steps to obtain your token:

    1. On the page, click Install Connector. The Install Connector page appears.

    2. Click Cloud installation > Azure > Install and Connect Azure Account > CLI (Container Instance).

    PowerShell

    Az PowerShell module that enables interacting with Azure services using your command-line shell

    Azure Cloud Information

    Information for your Azure Cloud instance:

    Owner Role (Azure RBAC)

    Azure RBAC role with the following permissions:

    • Grants full access to manage all resources

    • Assigns roles in Azure RBAC

    Global Administrator

    Microsoft Entra ID role with the following permission:

    • Manages all aspects of Microsoft Entra ID and Microsoft services that use Microsoft Entra identities

    ❗Apono does not require Global Administrator access. This is required for the admin following this guide. ❗


    Install a new connector

    You can install a connector for an Azure Management Group or Subscription.

    circle-info

    The connector requires the following roles:

    1. Directory Readers - to validate users in Azure

    2. User Access Administrator - to provision and deprovision access in the Management Group

    Read more about these Microsoft Entra ID roles.

    Follow these steps to install a new connector:

    1. At the shell prompt, set the environment variables.

    $env:APONO_CONNECTOR_ID = "<A_UNIQUE_CONNECTOR_NAME>"
    $env:APONO_TOKEN = "<APONO_TOKEN>"
    
    2. Log in to your Azure account.

    Connect-AzAccount

    3. Set the REGION environment variable.

    4. Run the following command to deploy the connector on your ACI.

    5. Add the User Access Administrator role to the connector in the management group scope.

    6. If your Azure resources have tags applied, assign the Tag Contributor role to the connector at the management group scope. This allows Apono to add a tag marker during the grant or revoke process.

    7. For Azure AD, add the Directory Readers role to the connector. For Azure AD Groups, add the Groups Administrator and Privileged Role Administrator roles.

    8. On the Connectors page, verify that the connector has been updated.

    You can now integrate with an Azure environment.

    Follow these steps to install a new connector:

    1. At the shell prompt, set the environment variables.

    2. Log in to your Azure account.

    Prerequisites
    Item
    Description

    Apono Connector

    On-prem connection serving as a bridge between a MongoDB Atlas instance and Apono:

    Atlas CLI

    Command-line tool used to manage Atlas resources

    MongoDB Atlas Information

    Information for the database instance to be integrated:


    Create a project owner API key

    A project owner API key enables Apono to control Atlas user access across a single or multiple projects.

    If you have a single MongoDB Atlas project, you can use a project owner API key to manage it through Apono.

    Follow these steps to create a project owner API key:

    1. At the Atlas CLI prompt, run the following command. Be sure to replace #PROJECT_ID with the project ID that contains the cluster you want to integrate.

    atlas projects apiKeys create --desc cli-created --projectId #PROJECT_ID

    2. Copy the public and private API key from the response.

    3. Create a secret with the credentials from step 2. Use the following key-value pair structure when generating the secret. Be sure to replace #PUBLIC_KEY and #PRIVATE_KEY with actual values.
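As a sketch of the key-value pair structure, the secret can hold the public key and private key under two keys. The key names below (`public_key`, `private_key`) are assumptions; confirm the expected names against your secret store's integration documentation:

```shell
# Assumed key names; #PUBLIC_KEY and #PRIVATE_KEY are placeholders to replace.
SECRET_JSON='{"public_key": "#PUBLIC_KEY", "private_key": "#PRIVATE_KEY"}'
echo "${SECRET_JSON}"
```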

    circle-check

    You can also input the user credentials directly into the Apono UI during the integration process.

    You can now integrate MongoDB Atlas.

    If you have multiple MongoDB Atlas projects, you can use a single project owner API key to manage them all through Apono.

    Follow these steps to create and associate a project owner API key:

    1. At the Atlas CLI prompt, run the following command. Be sure to replace #PROJECT_ID with the project ID that contains the cluster you want to integrate.

    2. Copy the public and private API key from the response.


    Integrate MongoDB Atlas

    MongoDB Atlas tile
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 11, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalogarrow-up-right tab, click MongoDB Atlas. The Connect Integration page appears.

    2. Under Discovery, select one or multiple resource types.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage access flows to these resources.

    3. Click Next. The Apono connector section expands.

    4. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, Kubernetes).

    5. Click Next. The Integration Config section expands.

    6. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    7. Click Next. The Secret Store section expands.

    8. Associate the secret or credentials.

    9. Click Next. The Get more with Apono section expands.

    10. Define the Get more with Apono settings.

      Setting
      Description
    11. Click Confirm.

    💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

    Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

    Now that you have completed this integration, you can create access flows that grant permission to your MongoDB Atlas instance.

    Limitations

    Please note: due to MongoDB Atlas limitationsarrow-up-right, only 100 custom roles can be created per tenant. This may cause access requests to fail if the limit is exceeded.

    To manage access to a GCP Organization, install a connector in a GKE cluster on any project and grant the connector the appropriate role on the organization. Follow this guide.
    circle-info

    What's a connector? What makes it so secure?

    The Apono Connector is an on-prem connection that can be used to connect resources to Apono and separate the Apono web app from the environment for maximal security.

    Read more about the recommended GCP Installation Architecture.

    How to install

    GCP Organization Connector

    Using Helm

    Prerequisites

    • A GCP user with owner permissions for the organizationarrow-up-right

    • A GKE cluster on any GCP Project of your choosing

    • Google CLIarrow-up-right

    • Kubernetes command-line tool (kubectl)

    • The Apono GCP token generated in the Apono UI:

    • Organization IDarrow-up-right

    • Project IDarrow-up-right

    • Make sure Cloud Asset API is turned onarrow-up-right in the Project where the connector is installed.

    circle-info

    Learn more about the Cloud Asset APIarrow-up-right.

    Step-by-step guide

    1. Prepare parameters for Apono installation

    Fill and set the values for the following variables:

    Set the connector service account variable:

    2. Make sure the Cloud Resource Manager API is enabled.

    3. Create an IAM Service Account and grant it the roles Browser, Security Admin, and Tag Viewer for the entire organization.

    4. Verify the default GKE cluster for installation:

    • Open the Kubernetes command-line tool.

    • Run kubectl config get-contexts to see the list of GKE clusters.

    • Set the desired cluster as the default: kubectl config use-context <CLUSTER_NAME>

    • Run kubectl config get-contexts and verify that the "*" indicates the correct cluster.

    5. Bind the IAM Service Account to the K8s Service Account.

    6. Install the Helm chart.

    The helm chart installs the following:

    • Kubernetes Deployment containing the Apono-Connector image container

    • Kubernetes Service Account annotated with GCP IAM Service Account

    • Kubernetes Secret containing Docker Registry credentials
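The IAM Service Account and role grants from the earlier steps can be sketched with the gcloud CLI. The service-account name follows the apono-connector-iam-sa convention used elsewhere in these docs; the project and organization IDs are placeholders:

```shell
# Placeholders: substitute your real project and organization IDs.
PROJECT_ID="my-project"
ORG_ID="123456789012"
SA="apono-connector-iam-sa@${PROJECT_ID}.iam.gserviceaccount.com"

# Create the IAM Service Account in the connector's project.
gcloud iam service-accounts create apono-connector-iam-sa --project "${PROJECT_ID}"

# Grant Browser, Security Admin, and Tag Viewer at the organization level.
for ROLE in roles/browser roles/iam.securityAdmin roles/resourcemanager.tagViewer; do
  gcloud organizations add-iam-policy-binding "${ORG_ID}" \
    --member "serviceAccount:${SA}" --role "${ROLE}"
done
```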

    circle-info

    Interested in HA for the connector?

    Add this variable to the Helm chart to create one or more replicas of the Apono connector instance:

    --set-string replicaCount=<number_of_replicas>

    Read more here.
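For example, the flag fits into a helm upgrade like this. The release name and chart reference below are assumptions; reuse the exact helm command from your Apono installation guide and append the flag:

```shell
# Hypothetical release and chart names; only the --set-string flag is the point here.
helm upgrade --install apono-connector apono/apono-connector \
  --set-string replicaCount=3
```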

    GCP Project Connector

    Using Helm

    Prerequisites

    • A GCP user with owner permissions for the organizationarrow-up-right

    • A GKE cluster on the GCP Project you'd like to integrate with Apono

    • Google CLIarrow-up-right

    • Kubernetes command-line tool (kubectl)

    • The Apono GCP token generated in the Apono UI:

    • Project IDarrow-up-right

    • Make sure Cloud Asset API is turned onarrow-up-right in the Project where the connector is installed.

    circle-info

    Learn more about the Cloud Asset APIarrow-up-right.

    Step-by-step guide

    1. Prepare parameters for Apono installation

    Fill and set the values for the following variables:

    Set the following variable:

    2. Enable the Cloud Resource Manager API.

    3. Create an IAM Service Account and grant it the roles Browser, Security Admin, and Tag Viewer for the project.

    4. Verify the default GKE cluster for installation:

    • Open the Kubernetes command-line tool.

    • Run kubectl config get-contexts to see the list of GKE clusters.

    • Set the desired cluster as the default: kubectl config use-context <CLUSTER_NAME>

    • Run kubectl config get-contexts and verify that the "*" indicates the correct cluster.

    5. Bind the IAM Service Account to the K8s Service Account.

    6. Install the Helm chart.

    The helm chart installs the following:

    • Kubernetes Deployment containing the Apono-Connector image container

    • Kubernetes Service Account annotated with GCP IAM Service Account

    • Kubernetes Secret containing Docker Registry credentials

    circle-check

    Interested in HA for the connector?

    Add this variable to the Helm chart to create one or more replicas of the Apono connector instance:

    --set-string replicaCount=<number_of_replicas>

    Read more here.

    Results

    You can validate the Connector is installed in the Connector status pagearrow-up-right.

    Then, in the Apono app, the connector appears with a green checkmark, indicating it was found.

    circle-check

    Hurray!

    You now have a GCP connector installed in your GCP environment with permissions to the Project.

    You can now integrate Apono with a GCP Project or GCP Organization.

    Item
    Description

    Apono Connector

    On-prem connection serving as a bridge between your Google Cloud SQL MySQL databases and Apono. Minimum Required Version: 1.4.1. Use the following steps to update an existing connector.

    Cloud SQL Admin API

    Google Cloud API for managing database instances with resources, such as BackupRuns, Databases, and Instances

    Cloud SQL Admin Role

    (Cloud IAM authentication only) Google Cloud role that the Apono connector's service user must have at the instance's project or organization level


    Create a MySQL user

    You must create a user in your MySQL instance for the Apono connector and grant that user permissions to your databases.

    Follow these steps to create a user and grant it permissions:

    1. In the Google Cloud console, create a new userarrow-up-right with either Built-in authentication or Cloud IAM authentication.

    Use apono_connector for the username.

    Be sure to set a strong password for the user.

    circle-check

    As an alternative, you can run the following command from your MySQL client:

    CREATE USER 'apono_connector'@'%' IDENTIFIED BY 'password';

    Use apono-connector-iam-sa@[PROJECT_ID].iam.gserviceaccount.com for the Principal.

    circle-exclamation

    Be sure that the Apono connector GCP service account (apono-connector-iam-sa@[PROJECT_ID].iam.gserviceaccount.com) has the Cloud SQL Admin role.

    1. In your preferred client tool, expose databases to the user. This allows Apono to view database names without accessing the contents of each database.

    1. Grant the user database permissions. The following commands grant Apono the following permissions:

      • Creating users

      • Updating user information and privileges

      • Monitoring and troubleshooting processes running on the database
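These capabilities map to the following grants, as shown in the code listings:

```sql
-- Create new database users
GRANT CREATE USER ON *.* TO 'apono_connector'@'%';
-- Update user information and privileges stored in the mysql system schema
GRANT UPDATE ON mysql.* TO 'apono_connector'@'%';
-- Monitor and troubleshoot processes running on the database
GRANT PROCESS ON *.* TO 'apono_connector'@'%';
```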

4. Grant the user only one of the following sets of permissions. The chosen set defines the highest level of permissions to provision with Apono. Click on each tab to reveal the SQL commands.

    Allows Apono to read data from databases

    Allows Apono to read and modify data

    Allows Apono administrative-level access, including the ability to execute and drop tables
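For example, the read-only set pairs SELECT with GRANT OPTION so Apono can pass the permission on to the users it provisions:

```sql
GRANT SELECT ON *.* TO 'apono_connector'@'%';
GRANT GRANT OPTION ON *.* TO 'apono_connector'@'%';
```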

5. (MySQL 8.0+) Grant the user the authority to manage other roles. This enables Apono to create, alter, and drop roles. However, this grant does not inherently confer specific database access permissions.
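On MySQL 8.0 and later, this is a single dynamic-privilege grant:

```sql
-- Allows apono_connector to create, alter, and drop roles
GRANT ROLE_ADMIN ON *.* TO 'apono_connector';
```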

6. Create a secret with the credentials from step 1 above.

    You can now integrate Google Cloud SQL - MySQL.


    hashtag
    Integrate Google Cloud SQL - MySQL

    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

In the final step, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalogarrow-up-right tab, click Google Cloud SQL - MySQL. The Connect Integration page appears.

    2. Under Discovery, click one or more resource types and cloud services to sync with Apono.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage access flows to these resources.

3. Click Next. The Apono connector section expands.

4. From the dropdown menu, select a connector.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a GCP connector.

5. Click Next. The Integration Config section expands.

6. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

7. Click Next. The Secret Store section expands.

8. (User/Password only) Create or select a secret containing the MySQL user credentials.

    circle-info

A secret is not needed for Cloud IAM authentication.

9. Click Next. The Get more with Apono section expands.

10. Define the Get more with Apono settings.

      Setting
      Description

      Credential Rotation

(Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

11. Click Confirm.

    chevron-right💡Are you integrating with Apono using Terraform?hashtag

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

Refer to the Integration Config Metadata for more details about the schema definition.

Now that you have completed this integration, you can create access flows that grant permissions to your Google Cloud SQL MySQL database.

    CloudSQL - PostgreSQL

    Create an integration to manage access to PostgreSQL instances on Google Cloud SQL

    Google Cloud SQL PostgreSQL is a fully managed relational database service built for the cloud. It provides a high-performance, scalable, and highly available PostgreSQL database instance without the overhead of managing infrastructure. With Google Cloud SQL, users benefit from Google Cloud's robust infrastructure, which ensures high availability, security, and scalability for their databases.

    Through this integration, Apono helps you securely manage access to your Google Cloud SQL PostgreSQL database instances.

    To enable Apono to manage Google Cloud SQL PostgreSQL user access, you must create a user and then configure the integration within the Apono UI.


    hashtag
    Prerequisites

    Item
    Description

    hashtag
    Create a PostgreSQL user

    You must create a user in your PostgreSQL instance for the Apono connector and grant that user permissions to your databases.

    triangle-exclamation

    You must use the admin account and password to connect to your database.

Follow these steps to create a user and grant it permissions:

1. In the Google Cloud console, create a new user with either Built-in authentication or Cloud IAM authentication.

    Use apono_connector for the username.

    This authentication method grants the user the cloudsqlsuperuser role. Be sure to set a strong password for the user.

    circle-check

As an alternative, you can run the following command from your PostgreSQL client:

CREATE USER apono_connector WITH ENCRYPTED PASSWORD 'password';

2. (Cloud IAM only) In your preferred client tool, grant cloudsqlsuperuser access to the user account.
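The grant itself is one statement. This is a sketch; substitute the IAM database user name created in step 1 (for an IAM service account user, this is the service account email without the .gserviceaccount.com suffix):

```sql
-- Grant Cloud SQL superuser capabilities to the Apono IAM database user
GRANT cloudsqlsuperuser TO "apono-connector-iam-sa@[PROJECT_ID].iam";
```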

3. In your preferred client tool, grant the cloudsqlsuperuser role privileges on all databases except template0 and cloudsqladmin. This allows Apono to perform tasks that are not restricted to a single schema or object within the database, such as creating, altering, and dropping database objects.
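A sketch of this step, patterned on the grant loops shown in the code listings, using the role and excluded databases named above:

```sql
DO $$
DECLARE
  database_name text;
BEGIN
  -- Grant privileges on every database except template0 and cloudsqladmin
  FOR database_name IN (SELECT datname FROM pg_database
                        WHERE datname NOT IN ('template0', 'cloudsqladmin')) LOOP
    EXECUTE 'GRANT ALL PRIVILEGES ON DATABASE ' || quote_ident(database_name)
            || ' TO cloudsqlsuperuser WITH GRANT OPTION';
  END LOOP;
END; $$
```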

4. For each database to be managed through Apono, connect to the database and grant cloudsqlsuperuser privileges on all objects in the schemas. This allows Apono to perform tasks that are restricted to schemas within the database, such as modifying table structures, creating new sequences, or altering functions.

5. Connect to the template1 database and grant cloudsqlsuperuser privileges on all objects in the schemas. For any new databases created in the future, this allows Apono to perform tasks that are restricted to schemas within the database, such as modifying table structures, creating new sequences, or altering functions.

6. (Built-in authentication only) Create a secret with the credentials from step 1.

    circle-info

    When using Cloud IAM authentication, the service account and its permissions are managed through Google Cloud IAM roles and policies. The service account is used to authenticate to the Cloud SQL instance.

    A secret does not need to be created.


    hashtag
    Integrate Google Cloud SQL - PostgreSQL

    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

In the final step, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

1. On the Catalogarrow-up-right tab, click Google Cloud SQL - PostgreSQL. The Connect Integration page appears.

    2. Under Discovery, click one or more resource types and cloud services to sync with Apono.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage access flows to these resources.

3. Click Next. The Apono connector section expands.

4. From the dropdown menu, select a connector.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector.

5. Click Next. The Integration Config section expands.

6. Define the Integration Config settings.

      Setting
      Description
    circle-info

A secret is not needed for Cloud IAM authentication.

7. Click Next. The Get more with Apono section expands.

8. Define the Get more with Apono settings.

      Setting
      Description
    chevron-right💡Are you integrating with Apono using Terraform?hashtag

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

Now that you have completed this integration, you can create access flows that grant permission to your Google Cloud SQL PostgreSQL instance.

    Oracle

    Create an integration to manage access to Oracle Database tables and custom roles

    Oracle Database is a relational database management system (RDBMS) developed by Oracle Corporation. It enables organizations to store, manage, and retrieve data using Structured Query Language (SQL). The database includes features for ensuring data integrity, performing backup and recovery, controlling access, and tuning performance.

    Oracle Database supports both on-premises and cloud-based deployments through Oracle Cloud Infrastructure.

Through this integration, Apono helps you securely manage just-in-time, just-enough access to your Oracle Database tables and custom roles.


    hashtag
    Prerequisites


    hashtag
    Create an Oracle Database user

    You must create a user in your Oracle Database instance for the Apono connector.

    Use the following steps to create a user and grant it permissions to your databases:

    1. In your preferred client tool, create a new user. Be sure to set a strong password for the user.

    circle-exclamation

    The password must be a minimum of 9 characters and satisfy the following minimum requirements:

    • 2 lowercase letters

2. Grant the user permission to connect to the Oracle Database.

3. Grant user management permissions.

4. Grant role management permissions.

5. Grant table management permissions.

6. Grant the user the ability to grant permissions to other Oracle users.
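The grants above can be sketched with standard Oracle system privileges. The exact set Apono requires may differ; these statements are illustrative:

```sql
-- Connect to the database
GRANT CREATE SESSION TO apono_connector;
-- User management
GRANT CREATE USER, ALTER USER, DROP USER TO apono_connector;
-- Role management
GRANT CREATE ROLE, DROP ANY ROLE, GRANT ANY ROLE TO apono_connector;
-- Table management
GRANT CREATE ANY TABLE, ALTER ANY TABLE, DROP ANY TABLE TO apono_connector;
-- Grant object privileges to other users
GRANT GRANT ANY OBJECT PRIVILEGE TO apono_connector;
```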

7. Using the credentials from step 1, create a secret for the database instance.

    circle-check

You can also input the user credentials directly into the Apono UI during the integration steps below.

You can now integrate Oracle Database.


    hashtag
    Integrate Oracle Database

    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

In the final step, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

1. On the Catalogarrow-up-right tab, click Oracle Database. The Connect Integration page appears.

    2. Under Discovery, click one or more resource types to sync with Apono.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage access flows to these resources.

3. Click Next. The Apono connector section expands.

4. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    circle-check

If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (Kubernetes, AWS, Azure, or GCP).

5. Click Next. The Integration Config section expands.

6. Define the Integration Config settings.

      Setting
      Description
    chevron-right💡Are you integrating with Apono using Terraform?hashtag

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

Now that you have completed this integration, you can create access flows that grant permission to your Oracle Database resources.

    RDP Servers

    Create an integration to manage access to an RDP server

    Microsoft Remote Desktop Protocol (RDP) enables users to connect to and control a remote computer or virtual machine over a network. It provides secure and efficient remote access to desktops, servers, and applications, allowing employees to work from anywhere with an internet connection.

With this integration, Apono enables you to manage access to an RDP server with the Connect permission or a custom permissions group, so that only specific users or groups can gain remote access to resources in your environment on a temporary, as-needed basis.


    hashtag
    Prerequisites

    Item
    Description

    hashtag
    Configure the RDP server

    Before you begin integrating RDP with Apono, you must configure the Windows Remote Management (WinRM) service on a Windows machine to allow remote access using unencrypted and basic authentication.

You can allow HTTP or HTTPS communication.

    Follow these steps to configure the RDP server:

    1. Add the WinRM port 5985 to the allowlist in the server firewall.

    2. Turn on the WinRM firewall rule in the Windows server.

    3. Analyze and configure the WinRM service to allow remote management on the local machine.
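The three steps can be sketched from an elevated PowerShell session. The firewall rule name is illustrative; adjust the commands to your security policy:

```powershell
# 1. Allow the WinRM HTTP port through the firewall
netsh advfirewall firewall add rule name="WinRM HTTP" dir=in action=allow protocol=TCP localport=5985
# 2. Turn on the built-in WinRM firewall rules
Enable-NetFirewallRule -DisplayGroup "Windows Remote Management"
# 3. Configure the WinRM service to allow remote management with unencrypted, basic authentication
winrm quickconfig -q
winrm set winrm/config/service '@{AllowUnencrypted="true"}'
winrm set winrm/config/service/auth '@{Basic="true"}'
```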


    hashtag
    Integrate an RDP server

    circle-exclamation

    WinRM HTTPS requires a local computer Server Authentication certificate with a CN matching the hostname to be installed. The certificate must not be expired, revoked, or self-signed.

    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

In the final step, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

1. On the Catalogarrow-up-right tab, click RDP Servers. The Connect Integration page appears.

    2. Under Discovery, select one or multiple resource types for Apono to discover in all instances of the environment.

    3. Click Next. The Apono connector section expands.

    circle-check

If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (Kubernetes, AWS, Azure, or GCP).

4. Click Next. The Integration Config page appears.

5. Define the Integration Config settings.

      Setting
      Description
    chevron-right💡Are you integrating with Apono using Terraform?hashtag

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

Now that you have completed this integration, you can create access flows that grant permission to your RDP server.

    RDS PostgreSQL

    Integrate with AWS-managed PostgreSQL for JIT access management for RDS

PostgreSQL is an open-source relational database management system emphasizing extensibility and SQL compliance. AWS enables developers to create cloud-hosted PostgreSQL databases.

    Through this integration, Apono helps you securely manage access to your AWS RDS for PostgreSQL instances.


    hashtag
    Prerequisites

    Item

    Integrate a GCP organization or project

    Create an integration to manage access to a GCP organization or project resources

    Apono offers GCP users a simple way to centralize cloud management through our platform. Through a single integration, you can manage multiple GCP services across various organizations and projects.


    hashtag
    Prerequisites

    Item
    Description

    Windows Domain Controller

    Create an integration to manage access to a Windows Domain Controller

    A Windows Domain Controller (DC) authenticates and authorizes users, enforcing security policies for computers within the domain. Through centralized user management and access control, the DC ensures that users can log into computers and access resources like applications and files based on their permissions.

    With this integration, Apono enables you to manage access to a Windows Domain Controller with Connect permission or a custom permissions group, so that only specific users or groups can provide remote access to resources in your environment on a temporary, as-needed basis.​


    hashtag
    Prerequisites

    GRANT CREATE USER ON *.* TO 'apono_connector'@'%';
    GRANT UPDATE ON mysql.* TO 'apono_connector'@'%';
    GRANT PROCESS ON *.* TO 'apono_connector'@'%';
    CREATE USER apono_connector WITH ENCRYPTED PASSWORD 'password';
    ALTER USER apono_connector WITH CREATEROLE;
    GRANT azure_pg_admin TO apono_connector;
    DO $$
    DECLARE
      database_name text;
    BEGIN
      FOR database_name IN (SELECT datname FROM pg_database WHERE datname != 'template0' AND datname != 'azure_sys' AND datname != 'azure_maintenance') LOOP
        EXECUTE 'GRANT ALL PRIVILEGES ON DATABASE ' || quote_ident(database_name) || ' TO azure_pg_admin WITH GRANT OPTION';
      END LOOP;
    END; $$
    DO $$
    DECLARE
      schema text;
    BEGIN
      FOR schema IN (SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT LIKE 'pg_%' AND schema_name != 'information_schema' AND schema_name != 'cron') LOOP
        EXECUTE 'GRANT ALL PRIVILEGES ON SCHEMA ' || quote_ident(schema) || ' TO azure_pg_admin WITH GRANT OPTION';
        EXECUTE 'GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA ' || quote_ident(schema) || ' TO azure_pg_admin WITH GRANT OPTION';
        EXECUTE 'GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA ' || quote_ident(schema) || ' TO azure_pg_admin WITH GRANT OPTION';
        EXECUTE 'GRANT ALL PRIVILEGES ON ALL FUNCTIONS IN SCHEMA ' || quote_ident(schema) || ' TO azure_pg_admin WITH GRANT OPTION';
      END LOOP;
      EXECUTE 'ALTER DEFAULT PRIVILEGES GRANT ALL PRIVILEGES ON TABLES TO azure_pg_admin WITH GRANT OPTION';
      EXECUTE 'ALTER DEFAULT PRIVILEGES GRANT ALL PRIVILEGES ON SEQUENCES TO azure_pg_admin WITH GRANT OPTION';
      EXECUTE 'ALTER DEFAULT PRIVILEGES GRANT ALL PRIVILEGES ON FUNCTIONS TO azure_pg_admin WITH GRANT OPTION';
      EXECUTE 'ALTER DEFAULT PRIVILEGES GRANT ALL PRIVILEGES ON SCHEMAS TO azure_pg_admin WITH GRANT OPTION';
    END; $$
    DO $$
    DECLARE
      schema text;
    BEGIN
      FOR schema IN (SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT LIKE 'pg_%' AND schema_name != 'information_schema' AND schema_name != 'cron') LOOP
        EXECUTE 'GRANT ALL PRIVILEGES ON SCHEMA ' || quote_ident(schema) || ' TO azure_pg_admin WITH GRANT OPTION';
        EXECUTE 'GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA ' || quote_ident(schema) || ' TO azure_pg_admin WITH GRANT OPTION';
        EXECUTE 'GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA ' || quote_ident(schema) || ' TO azure_pg_admin WITH GRANT OPTION';
        EXECUTE 'GRANT ALL PRIVILEGES ON ALL FUNCTIONS IN SCHEMA ' || quote_ident(schema) || ' TO azure_pg_admin WITH GRANT OPTION';
      END LOOP;
      EXECUTE 'ALTER DEFAULT PRIVILEGES GRANT ALL PRIVILEGES ON TABLES TO azure_pg_admin WITH GRANT OPTION';
      EXECUTE 'ALTER DEFAULT PRIVILEGES GRANT ALL PRIVILEGES ON SEQUENCES TO azure_pg_admin WITH GRANT OPTION';
      EXECUTE 'ALTER DEFAULT PRIVILEGES GRANT ALL PRIVILEGES ON FUNCTIONS TO azure_pg_admin WITH GRANT OPTION';
      EXECUTE 'ALTER DEFAULT PRIVILEGES GRANT ALL PRIVILEGES ON SCHEMAS TO azure_pg_admin WITH GRANT OPTION';
    END; $$
    CREATE USER apono_connector IDENTIFIED BY 'password';
    GRANT pseudosuperuser TO apono_connector;
    ALTER USER apono_connector DEFAULT ROLE pseudosuperuser;
    CREATE USER apono_connector WITH ENCRYPTED PASSWORD 'password';
    ALTER USER apono_connector WITH SUPERUSER;  
    CREATE USER 'apono_connector'@'%' IDENTIFIED BY 'password';
    GRANT SHOW DATABASES ON *.* TO 'apono_connector'@'%';
    GRANT CREATE USER ON *.* TO 'apono_connector'@'%';  
    GRANT UPDATE ON mysql.* TO 'apono_connector'@'%';  
    GRANT PROCESS ON *.* TO 'apono_connector'@'%';
    GRANT RELOAD ON *.* TO 'apono_connector'@'%';
    GRANT CONNECTION ADMIN ON *.* TO 'apono_connector'@'%';
    GRANT SELECT ON *.* TO 'apono_connector'@'%';  
    GRANT GRANT OPTION ON *.* TO 'apono_connector'@'%';
    GRANT SELECT,ALTER,ALTER ROUTINE,CREATE,CREATE ROUTINE,CREATE TEMPORARY TABLES,CREATE VIEW,DELETE,INDEX,INSERT,TRIGGER,UPDATE ON *.* TO 'apono_connector'@'%';  
    GRANT GRANT OPTION ON *.* TO 'apono_connector'@'%';
    GRANT EXECUTE,DROP,SELECT,ALTER,ALTER ROUTINE,CREATE,CREATE ROUTINE,CREATE TEMPORARY TABLES,CREATE VIEW,DELETE,INDEX,INSERT,TRIGGER,UPDATE ON *.* TO 'apono_connector'@'%';  
    GRANT GRANT OPTION ON *.* TO 'apono_connector'@'%';
{
  "username": "apono-connector",
  "password": "#PASSWORD"
}
    # Your GCP Project ID
    export PROJECT_ID=
    # The token from your Apono Account
    export APONO_TOKEN=
    # Your Organization Id (gcloud projects get-ancestors $PROJECT_ID)
    export ORGANIZATION_ID=
    # The connector identifier
    export APONO_CONNECTOR_ID=apono-google-integration
    # The namespace to deploy the cluster on
    export NAMESPACE=apono-connector-namespace
    
    echo "PROJECT_ID: $PROJECT_ID"
    echo "APONO_TOKEN: $APONO_TOKEN"
    echo "APONO_CONNECTOR_ID: $APONO_CONNECTOR_ID"
    echo "NAMESPACE: $NAMESPACE"
    echo "ORGANIZATION_ID: $ORGANIZATION_ID"
    export GCP_SERVICE_ACCOUNT_EMAIL=apono-connector-iam-sa@$PROJECT_ID.iam.gserviceaccount.com && 
    
    echo "GCP_SERVICE_ACCOUNT_EMAIL: $GCP_SERVICE_ACCOUNT_EMAIL"
    gcloud services enable cloudresourcemanager.googleapis.com  --project $PROJECT_ID
    gcloud iam service-accounts create apono-connector-iam-sa --project $PROJECT_ID
    
    gcloud organizations add-iam-policy-binding $ORGANIZATION_ID \
        --member="serviceAccount:$GCP_SERVICE_ACCOUNT_EMAIL" \
        --role="roles/browser"
    
    gcloud organizations add-iam-policy-binding $ORGANIZATION_ID \
        --member="serviceAccount:$GCP_SERVICE_ACCOUNT_EMAIL" \
        --role="roles/iam.securityAdmin"
        
    gcloud organizations add-iam-policy-binding $ORGANIZATION_ID \
        --member="serviceAccount:$GCP_SERVICE_ACCOUNT_EMAIL" \
        --role="roles/resourcemanager.tagViewer"
    gcloud iam service-accounts add-iam-policy-binding $GCP_SERVICE_ACCOUNT_EMAIL \
        --member="serviceAccount:$PROJECT_ID.svc.id.goog[$NAMESPACE/apono-connector-service-account]" \
        --role="roles/iam.workloadIdentityUser" \
        --project $PROJECT_ID
    helm install apono-connector apono-connector --repo https://apono-io.github.io/apono-helm-charts \
        --set-string apono.token=$APONO_TOKEN \
        --set-string apono.connectorId=$APONO_CONNECTOR_ID \
        --set-string serviceAccount.gcpServiceAccountEmail=$GCP_SERVICE_ACCOUNT_EMAIL \
        --namespace $NAMESPACE \
        --create-namespace
    # Your GCP Project ID
    export PROJECT_ID=
    # The token from your Apono Account
    export APONO_TOKEN=
    # The connector identifier
    export APONO_CONNECTOR_ID=apono-google-integration
    # The namespace to deploy the cluster on
    export NAMESPACE=apono-connector-namespace
    
    echo "PROJECT_ID: $PROJECT_ID"
    echo "APONO_TOKEN: $APONO_TOKEN"
    echo "APONO_CONNECTOR_ID: $APONO_CONNECTOR_ID"
    echo "NAMESPACE: $NAMESPACE"
    export GCP_SERVICE_ACCOUNT_EMAIL=apono-connector-iam-sa@$PROJECT_ID.iam.gserviceaccount.com && echo "GCP_SERVICE_ACCOUNT_EMAIL: $GCP_SERVICE_ACCOUNT_EMAIL"
    gcloud services enable cloudresourcemanager.googleapis.com  --project $PROJECT_ID
    gcloud iam service-accounts create apono-connector-iam-sa --project $PROJECT_ID
    
    gcloud projects add-iam-policy-binding $PROJECT_ID \
        --member="serviceAccount:$GCP_SERVICE_ACCOUNT_EMAIL" \
        --role="roles/browser" \
        --project $PROJECT_ID
    
    gcloud projects add-iam-policy-binding $PROJECT_ID \
        --member="serviceAccount:$GCP_SERVICE_ACCOUNT_EMAIL" \
        --role="roles/iam.securityAdmin" \
        --project $PROJECT_ID
        
    gcloud projects add-iam-policy-binding $PROJECT_ID \
        --member="serviceAccount:$GCP_SERVICE_ACCOUNT_EMAIL" \
        --role="roles/resourcemanager.tagViewer" \
        --project $PROJECT_ID
    gcloud iam service-accounts add-iam-policy-binding $GCP_SERVICE_ACCOUNT_EMAIL \
        --member="serviceAccount:$PROJECT_ID.svc.id.goog[$NAMESPACE/apono-connector-service-account]" \
        --role="roles/iam.workloadIdentityUser" \
        --project $PROJECT_ID
    helm install apono-connector apono-connector --repo https://apono-io.github.io/apono-helm-charts \
        --set-string apono.token=$APONO_TOKEN \
        --set-string apono.connectorId=$APONO_CONNECTOR_ID \
        --set-string serviceAccount.gcpServiceAccountEmail=$GCP_SERVICE_ACCOUNT_EMAIL \
        --namespace $NAMESPACE \
        --create-namespace
    GRANT SELECT ON *.* TO 'apono_connector'@'%';
    GRANT GRANT OPTION ON *.* TO 'apono_connector'@'%';
    GRANT SELECT,ALTER,ALTER ROUTINE,CREATE,CREATE ROUTINE,CREATE TEMPORARY TABLES,CREATE VIEW,DELETE,INDEX,INSERT,TRIGGER,UPDATE,REFERENCES ON *.* TO 'apono_connector'@'%';
    GRANT GRANT OPTION ON *.* TO 'apono_connector'@'%';
    GRANT EXECUTE,DROP,SELECT,ALTER,ALTER ROUTINE,CREATE,CREATE ROUTINE,CREATE TEMPORARY TABLES,CREATE VIEW,DELETE,INDEX,INSERT,TRIGGER,UPDATE,REFERENCES ON *.* TO 'apono_connector'@'%';
    GRANT GRANT OPTION ON *.* TO 'apono_connector'@'%';
    GRANT SHOW DATABASES ON *.* TO 'apono_connector'@'%';
    GRANT ROLE_ADMIN on *.* to 'apono_connector';
SSL Mode
  • disable: An unencrypted connection is used.
  • prefer: An SSL-encrypted connection is attempted. If the encrypted connection is unavailable, the unencrypted connection is used.

  • verify-ca: An SSL-encrypted connection must be used and a server certification verification against the provided CA certificates must pass.

  • verify-full: An SSL-encrypted connection must be used and a server certification verification against the provided CA certificates must pass. Additionally, the server hostname is checked against the certificate's names.


    Kubernetes





    Auth Type

    Authorization type for the MySQL service account user:

    • IAM Auth: Cloud IAM authentication

    • User / Password: Built-in authentication

    Project ID

    ID of the project where the MySQL instance is deployed

    Region

    Location where the MySQL instance is deployed

    Instance ID

    ID of the MySQL instance

    Instance ID User Override

    (Optional) Allows overriding the instance ID for the user

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.
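To make the cleanup window concrete, the sketch below computes the date on which a revoked user would become eligible for deletion. This is an illustration of the setting's arithmetic, not Apono's implementation; the dates used are examples.

```python
from datetime import date, timedelta

def cleanup_date(revoked_on: date, cleanup_days: int) -> date:
    """Sketch: the day a revoked user becomes eligible for deletion."""
    return revoked_on + timedelta(days=cleanup_days)

# Access revoked on June 1 with a 30-day cleanup window:
print(cleanup_date(date(2024, 6, 1), 30))  # 2024-07-01
```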

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

    GRANT CREATE USER ON *.* TO 'apono_connector'@'%';
    GRANT UPDATE ON mysql.* TO 'apono_connector'@'%';
    GRANT PROCESS ON *.* TO 'apono_connector'@'%';
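After applying the grants above, it can help to confirm they took effect. The sketch below checks that each required privilege appears in the rows returned by `SHOW GRANTS FOR 'apono_connector'@'%';` (run the real query from your MySQL client; MySQL merges same-level privileges into one row, so the check looks for each privilege keyword anywhere in the output). The sample rows are illustrative, mirroring this guide's GRANT statements.

```python
# Privileges this guide grants to the connector user.
REQUIRED = ("CREATE USER", "UPDATE", "PROCESS")

def missing_privileges(show_grants_rows):
    """Return the required privileges absent from SHOW GRANTS output rows."""
    return [p for p in REQUIRED if not any(p in row for row in show_grants_rows)]

# Illustrative rows, as MySQL might merge them:
sample_rows = [
    "GRANT UPDATE ON mysql.* TO 'apono_connector'@'%'",
    "GRANT PROCESS, CREATE USER ON *.* TO 'apono_connector'@'%'",
]
print(missing_privileges(sample_rows))  # []
```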
    Set the REGION environment variable.
    1. Run the following command to deploy the connector on your ACI.

    1. Add the User Access Administrator role to the connector in the subscription scope.

    1. If your Azure resources have resource locksarrow-up-right applied, assign the Tag Contributor role to the connector at the subscription scope. This allows Apono to add a tag marker during the grant or revoke process.

    1. For Azure AD, add the Directory Readers role to the connector. For Azure AD Groups, add the Groups Administrator and Privileged Role Administrator roles.

    1. On the Connectorsarrow-up-right page, verify that the connector has been updated.

    You can now integrate with an Azure Management Group or Azure Subscription.

    Copy the token listed on the page in step 1.

    $APONO_TOKEN = "<APONO_TOKEN>"
    $env:SUBSCRIPTION_ID = "<AZURE_SUBSCRIPTION_ID>"
    $env:RESOURCE_GROUP_NAME = "<AZURE_RESOURCE_GROUP_NAME>"
    $env:MANAGEMENT_GROUP_NAME = "<AZURE_MANAGEMENT_GROUP_NAME>"
    $REGION=$(Get-AzResourceGroup -Name $RESOURCE_GROUP_NAME).Location
    $port = New-AzContainerInstancePortObject -Port 80 -Protocol TCP
    
    $env_var1 = New-AzContainerInstanceEnvironmentVariableObject -Name "APONO_CONNECTOR_ID" -Value $APONO_CONNECTOR_ID
    
    $env_var2 = New-AzContainerInstanceEnvironmentVariableObject -Name "APONO_TOKEN" -Value $APONO_TOKEN
    
    $env_var3 = New-AzContainerInstanceEnvironmentVariableObject -Name "APONO_URL" -Value "api.apono.io"
    
    $jsonValue = @{
        cloud_provider = "AZURE"
        subscription_id = $SUBSCRIPTION_ID
        resource_group = $RESOURCE_GROUP_NAME
        region = $REGION
        is_azure_admin = $true
    } | ConvertTo-Json -Compress
    
    $env_var4 = New-AzContainerInstanceEnvironmentVariableObject -Name "CONNECTOR_METADATA" -Value $jsonValue
    
    # NOTE: the parameter lists below follow the standard Az.ContainerInstance
    # cmdlet signatures; verify them against your environment before running.
    $container = New-AzContainerInstanceObject -Image registry.apono.io/apono-connector:v1.7.6 -Name $APONO_CONNECTOR_ID -Port @($port) -EnvironmentVariable @($env_var1, $env_var2, $env_var3, $env_var4)
    
    $imageRegistryCredential = New-AzContainerGroupImageRegistryCredentialObject -Server "registry.apono.io" -Username "apono" -Password (ConvertTo-SecureString -String $APONO_TOKEN -AsPlainText -Force)
    
    $PRINCIPAL_ID=$(New-AzContainerGroup -SubscriptionId $SUBSCRIPTION_ID -ResourceGroupName $RESOURCE_GROUP_NAME -Name $APONO_CONNECTOR_ID -Location $REGION -Container $container -ImageRegistryCredential $imageRegistryCredential -IdentityType SystemAssigned).Identity.PrincipalId
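For reference, the CONNECTOR_METADATA value assembled with ConvertTo-Json above is a compact JSON object. The sketch below builds the same payload in Python so its shape is easy to inspect; the angle-bracket values are placeholders for your own subscription, resource group, and region.

```python
import json

# Fields mirror the ConvertTo-Json block in this guide; values are placeholders.
metadata = {
    "cloud_provider": "AZURE",
    "subscription_id": "<AZURE_SUBSCRIPTION_ID>",
    "resource_group": "<AZURE_RESOURCE_GROUP_NAME>",
    "region": "<REGION>",
    "is_azure_admin": True,
}
print(json.dumps(metadata, separators=(",", ":")))
```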

    List all your Atlas projects and their IDs.

    1. For each additional project ID, assign the API key. Be sure to replace #API_KEY_ID with the ID of the API key created in step 2 and #PROJECT_ID with the project ID of the additional project to associate with the API key.

    1. Create a secret with the credentials from step 2. Use the following key-value pair structure when generating the secret. Be sure to replace #PUBLIC_KEY and #PRIVATE_KEY with actual values.

    circle-check

    You can also input the user credentials directly into the Apono UI during the integration process.

    You can now integrate MongoDB Atlas.
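The secret uses the key-value structure described above. As a sketch, the snippet below assembles and prints that structure; `#PUBLIC_KEY` and `#PRIVATE_KEY` remain placeholders for your Atlas programmatic API key pair.

```python
import json

# Key names come from this guide; the values are placeholders to replace.
secret = {
    "public_key": "#PUBLIC_KEY",
    "private_key": "#PRIVATE_KEY",
}
payload = json.dumps(secret, indent=2)
print(payload)
```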

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.


    Project Id

    Unique identifier assigned to each project within MongoDB Atlas

    Cluster Name

    Name for a database cluster in MongoDB Atlas, serving as an identifier within a project

    Cluster Type

    Configuration of a MongoDB Atlas cluster

    Private Endpoint Id

    (Optional) Unique identifier for a private endpoint in MongoDB Atlas

    Credential rotation period (in days)

    (Optional) Number of days after which the database credentials must be rotated

    Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

    "public_key": "#PUBLIC_KEY",
    "private_key": "#PRIVATE_KEY"
    atlas projects apiKeys create --desc cli-created --projectId "#PROJECT_ID" --role GROUP_OWNER

    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    atlas projects list
    atlas projects apiKeys assign #API_KEY_ID --role GROUP_OWNER --projectId #PROJECT_ID
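If you have many additional projects, the same `atlas projects apiKeys assign` command is repeated once per project ID. The sketch below renders that command for a list of IDs; the project IDs shown are hypothetical examples, and `#API_KEY_ID` stays a placeholder.

```python
# Render the assign command from this guide for each additional project ID.
def assign_cmd(api_key_id: str, project_id: str) -> str:
    return (f"atlas projects apiKeys assign {api_key_id} "
            f"--role GROUP_OWNER --projectId {project_id}")

for project_id in ("64f0example01", "64f0example02"):  # hypothetical IDs
    print(assign_cmd("#API_KEY_ID", project_id))
```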
    "public_key": "#PUBLIC_KEY",
    "private_key"

    Use apono-connector-iam-sa@[PROJECT_ID].iam.gserviceaccount.com for the Principal.

    This authentication method does not grant the user account database privileges.

    circle-exclamation

    Be sure that the Apono connector GCP service account (apono-connector-iam-sa@[PROJECT_ID].iam.gserviceaccount.com) has the Cloud SQL Admin role.

    Authorization type for the MySQL service account user:

    • IAM Auth: Cloud IAM authentication

    • User / Password: Built-in authentication

    Project ID

    ID of the project where the PostgreSQL instance is deployed

    Region

    Location where the PostgreSQL instance is deployed

    Instance ID

    ID of the PostgreSQL instance

    Instance ID User Override

    (Optional) Allows overriding the instance ID for the user

    Database Name

    Name of the database to integrate. By default, Apono sets this value to postgres.

    SSL Mode

    (Optional) Mode of Secure Sockets Layer (SSL) encryption used to secure the connection with the SQL database server:

    • require: An SSL-encrypted connection must be used.

    • allow: An SSL-encrypted or unencrypted connection is used. If an SSL-encrypted connection is unavailable, the unencrypted connection is used.
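The SSL Mode setting maps onto the standard libpq `sslmode` connection keyword. The sketch below builds a connection string using libpq's `host`/`port`/`dbname`/`sslmode` keywords; the host and database values are examples, and only the two modes offered above are accepted.

```python
# Sketch: map the SSL Mode setting onto a libpq-style connection string.
def pg_dsn(host: str, dbname: str, sslmode: str, port: int = 5432) -> str:
    assert sslmode in {"require", "allow"}  # the two modes offered in this guide
    return f"host={host} port={port} dbname={dbname} sslmode={sslmode}"

print(pg_dsn("10.0.0.5", "postgres", "require"))
```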

  • Click Next. The Secret Store section expands.

  • (User/Password only) Associate the secret or credentials.

  • User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

  • Click Confirm.

  • Click to copy the code.
  • Make any additional edits.

  • Deploy the code in your Terraform.

  • Refer to Integration Config Metadata for more details about the schema definition.

    Apono Connector

    On-prem connection serving as a bridge between your Google Cloud PostgreSQL databases and Apono. Minimum Required Version: 1.4.1. Use the following steps to update an existing connector.

    Cloud SQL Admin API

    API for managing database instances with resources, such as BackupRuns, Databases, and Instances

    Cloud SQL Admin Role

    (Cloud IAM authentication only) Google Cloud role that the Apono connector's service user must have at the instance's project or organization level

    PostgreSQL Info

    Information for the database instance to be integrated:

    • Project ID

    • Dataset Name

    Integration Name

    Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    Google Cloud SQL - PostgreSQL

    Auth Type

  • 2 uppercase letters

  • 2 numbers (0-9)

  • 2 special characters

  • Cannot have 3 consecutive identical characters

  • Must have 4 characters different from the previous password

  • Cannot contain, repeat, or reverse the user name
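The password rules above can be sketched as a validator. This is an illustration for reasoning about the policy, not Oracle's own verification function; the "4 different characters" and username rules are simplified interpretations, and the sample passwords are made up.

```python
import re

def meets_rules(pwd: str, username: str, previous: str) -> bool:
    if sum(c.isupper() for c in pwd) < 2:          # 2 uppercase letters
        return False
    if sum(c.isdigit() for c in pwd) < 2:          # 2 numbers (0-9)
        return False
    if sum(not c.isalnum() for c in pwd) < 2:      # 2 special characters
        return False
    if re.search(r"(.)\1\1", pwd):                 # 3 consecutive identical chars
        return False
    if len(set(pwd) - set(previous)) < 4:          # differ from previous password
        return False
    low = pwd.lower()
    if username and (username.lower() in low or username.lower()[::-1] in low):
        return False                               # contains/reverses user name
    return True

print(meets_rules("Ab1#Cd2$xyz", "scott", "OldPass9!"))  # True
```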

  • Hostname of the Oracle Database instance to connect

    Port

    Port value for the instance. By default, Apono sets this value to 1521.

    Service Name

    Name of the service. By default, Apono sets this value to ORCL.

  • Click Next. The Secret Store section expands.

  • Associate the secret or credentials.

  • Click Next. The Get more with Apono section expands.

  • Define the Get more with Apono settings.

    Setting
    Description

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

  • Click Confirm.

  • Click to copy the code.
  • Make any additional edits.

  • Deploy the code in your Terraform.

  • Refer to Integration Config Metadata for more details about the schema definition.

    Apono Connector

    On-prem connection serving as a bridge between an Oracle Database instance and Apono:

    • AWS

    • Azure

    • GCP

    Oracle Database Information

    Information for the database instance to be integrated:

    • Hostname

    • Port number

    Admin access to Oracle

    The Admin must be able to create users and manage roles in Oracle

    Integration Name

    Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow


    Hostname

    circle-info

    If a confirmation prompt appears after running the following command, enter y.

    1. Set the WinRM service configuration to allow unencrypted traffic.

    1. Enable basic authentication for the WinRM service. Basic authentication transmits credentials in cleartext.

    You can now integrate an RDP server.

    Follow these steps to configure the RDP server:

    1. Add the WinRM port 5985 to the allowlist in the server firewall.

    2. Turn on the WinRM firewall rule in the Windows server.

    3. Analyze and configure the WinRM service to allow remote management on the local machine.

    circle-info

    If a confirmation prompt appears after running the following command, enter y.

    1. Enable basic authentication for the WinRM service. Basic authentication transmits credentials in cleartext.

    1. Configure WinRM HTTPS access on the target machine.

    circle-info

    Configuring WinRM to use HTTPS encrypts data transmitted between the client and server, protecting sensitive information from interception. To enable HTTPS, ensure a valid server authentication certificate is installed on the target machine.

    You can now integrate an RDP server.

    From the dropdown menu, select a connector.

    DNS name or IP address of the RDP server to connect

    WinRM Port

    WinRM port value for the server. By default, Apono sets this value to 5985.

    RDP Port

    (Optional) RDP port value. By default, Apono sets this value to 3389.

    Use SSL connection

    Encrypted or unencrypted connection indicator. Possible values:

    • false: Unencrypted (unsecure) connection

    • true: Encrypted (secure) connection
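As an illustration of how this flag shapes the connection, the sketch below derives the WinRM endpoint URL from it. The `/wsman` path is WinRM's standard endpoint, and 5986 is the conventional WinRM HTTPS port (an assumption here; this guide's 5985 default applies to HTTP). The hostname is an example.

```python
# Sketch: derive the WinRM endpoint from the Use SSL connection flag.
def winrm_endpoint(host: str, use_ssl: bool) -> str:
    scheme, port = ("https", 5986) if use_ssl else ("http", 5985)
    return f"{scheme}://{host}:{port}/wsman"

print(winrm_endpoint("rdp.example.internal", False))  # http://rdp.example.internal:5985/wsman
```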

  • Click Next. The Secret Store section expands.

  • Associate the secret or credentials.

  • Click Next. The Get more with Apono section expands.

  • Define the Get more with Apono settings.

    Setting
    Description

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

  • Click Confirm.

  • Click to copy the code.
  • Make any additional edits.

  • Deploy the code in your Terraform.

  • Refer to Integration Config Metadata for more details about the schema definition.

    Apono Connector

    On-prem connection serving as a bridge between an RDP server and Apono:

    • AWS

    • Azure

    • GCP

    User

    Microsoft RDP user for Apono. The RDP user must be one of the following:

    • Admin user

    • Custom role user with the following permissions:

      • GenericRead

      • ListChildren

      • CreateChild

      • DeleteChild

      • ListObject

      • WriteMember

      • ResetPassword

      • Delete

    Secret

    Value generated with the credentials of the user you create

    Create your secret based on the connector you are using.

    You can also input the user credentials directly.

    Apono does not store credentials. The Apono Connector uses the secret to communicate with services in your environment and separate the Apono web app from the environment for maximal security.

    Integration Name

    Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow


    winrm quickconfig
    winrm set winrm/config/service @{AllowUnencrypted="true"}
    winrm set winrm/config/service/Auth @{Basic="true"}
    Description

    Apono Connector

    On-prem connection with network access to your AWS RDS for PostgreSQL instances. Minimum Required Version: 1.5.3. Use the following steps to update an existing connector.

    NOTE: When installing the Apono connector with CloudFormation, the AWS RDS database policy is automatically created.

    If you do not use CloudFormation, you must create the following policy and assign it to the Apono connector role.

    PostgreSQL Info

    Information for the database instance to be integrated:

    • Instance ID

    • Database Name

    AWS Tag

    (Optional) Metadata label assigned to AWS resources. Adding an AWS tag enables Apono to discover and add resources on your behalf. When adding the tag, use the following information:

    • Tag key: apono-secret

    • Value: ()


    hashtag
    Create an AWS RDS PostgreSQL user

    You must create a user in your AWS RDS PostgreSQL instance for the Apono connector and grant that user permissions to your databases.

    Follow these steps to create a user and grant it database permissions:

    1. Create a new user with either Built-in authentication or IAM authentication.

    circle-exclamation

    You can use only one authentication option on the RDS instance at a time.

    Built-in authentication identifies a user through a username and password.

    CREATE USER apono_connector WITH PASSWORD 'secret_passwd';

    Be sure to select a strong password for the user.

    After enabling IAM authentication on your RDS instance, create a database user for the Apono connector and grant it the rds_iam role, which allows IAM to authenticate the user.

    To create the user, run the following commands from your PostgreSQL client.

    CREATE USER apono_connector;
    GRANT rds_iam TO apono_connector;
    1. From your preferred client tool, grant rds_superuser access to the user.

    Permission
    Description

    ALTER USER apono_connector WITH CREATEROLE;

    Allows Apono connector to create, alter, and drop user roles

    GRANT rds_superuser TO apono_connector;

    Assigns the RDS superuser role to the Apono connector, providing comprehensive permissions for database management

    1. (IAM authentication only) Create and attach the following IAM policy to your identity center permissions set or role.

    1. (Built-in authentication only) Create an AWS secret with the credentials from step 1.

    circle-info

    When using IAM authentication, a secret does not need to be created.

    The database user and its permissions are managed through IAM roles and policies, and IAM is used to authenticate to the PostgreSQL instance instead of a secret.


    hashtag
    Integrate Amazon RDS for PostgreSQL

    AWS RDS PostgreSQL
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 11, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalog tab, click AWS RDS PostgreSQL. The Connect Integration page appears.

    2. Under Discovery, click one or more resource types to sync with Apono.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage Access Flows to these resources.

    1. Click Next. The Apono connector section expands.

    2. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    circle-info

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating an AWS connector.

    1. Click Next. The Integration Config section expands.

    2. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    3. Click Next. The Secret Store section expands.

    circle-info

    A secret is not needed for IAM authentication.

    1. Click Next. The Get more with Apono section expands.

    2. Define the Get more with Apono settings.

      Setting
      Description

      Credential Rotation

      (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    3. Click Confirm.

    💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

    Refer to Integration Config Metadata for more details about the schema definition.

    Now that you have completed this integration, you can create access flows that grant permission to your RDS for PostgreSQL database.

    Apono Connector

    On-prem connection serving as a bridge between a Google Cloud instance and Apono

    Apono Premium

    providing the most features and dedicated account support

    Google User Account

    User account with

    Google Cloud Command Line Interface (Google Cloud CLI)

    Tool used to manage Google Cloud resources

    Google Cloud Information

    Information for your Google Cloud instance associated with the Apono connector Google-defined:

    • (Organization)

    User-defined


    hashtag
    Associate BigQuery dataset permissions

    Google BigQuery is a fast, scalable, secure, fully managed data warehouse service in the cloud, serving as a primary data store for vast datasets and analytic workloads.

    To add this resource to your Google Project or Organization, you must create a custom role with BigQuery dataset permissions and assign the role to the service account for the Apono connector.

    circle-check

    The following instructions in this section use the Google Cloud CLI.

    However, you can also create a custom role through the Google Console, an IAM client library, or the REST API. Additionally, you can assign the custom role to the Apono connector through the Google Console.

    Follow these steps to associate the permissions through the Google Cloud CLI:

    1. In your shell environment, log in to Google Cloud and enable the API.

    2. Set the environment variables.

    1. Create the custom role. Be sure to replace the placeholders (<ROLE_ID>, <TITLE>, and <DESCRIPTION>) with actual values of your choosing for the role ID, title, and description of the role.

    1. Using the role ID defined in the previous step, assign the custom role to the Apono connector service account.


    hashtag
    Enable the Cloud Asset API

    To manage and monitor your cloud assets, you must enable the Cloud Asset API.

    Follow these steps to enable this API:

    1. In your shell environment, log in to Google Cloud and enable the API.


    hashtag
    Integrate with GCP

    hashtag
    Organization

    Google Organization environment option
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 10, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to integrate Apono with your GCP organization:

    1. On the Catalog tab, click GCP. The Connect Integrations Group page appears.

    2. Under Discovery, click Google Organization.

    3. Click one or more resource types to sync with Apono.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage access flows to these resources.

    1. Click Next. The Apono connector section expands.

    2. From the dropdown menu, select a connector. Choosing a connector links Apono to the roles available in the organization where the connector is located.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating an Apono connector.

    1. Click Next. The Integration Config section expands.

    2. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    3. Click Next. The Get more with Apono section expands.

    4. Define the Get more with Apono settings.

      Setting
      Description
    5. Click Confirm.

    💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

    Refer to Integration Config Metadata for more details about the schema definition.

    After connecting your GCP organization to Apono, you will be redirected to the Connected tab to view your integrations. The new GCP integration will initialize once it completes its first data fetch. Upon completion, the integration will be marked Active.

    Now that you have completed this integration, you can create access flows that grant permission to GCP organizational roles.

    hashtag
    Project

    Google Project environment option
    circle-info

    You can also use the steps below to integrate with Apono using Terraform.

    In step 10, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to integrate Apono with your GCP project:

    1. On the Catalog tab, click GCP. The Connect Integrations Group page appears.

    2. Under Discovery, click Google Project.

    3. Click one or more resource types to sync with Apono.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage access flows to these resources.

    1. Click Next. The Apono connector section expands.

    2. From the dropdown menu, select a connector. Choosing a connector links Apono to the roles available in the organization where the connector is located.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating an Apono connector.

    1. Click Next. The Integration Config section expands.

    2. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    3. Click Next. The Get more with Apono section expands.

    4. Define the Get more with Apono settings.

      Setting
      Description
    5. Click Confirm.

    💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

    Refer to Integration Config Metadata for more details about the schema definition.

    After connecting your GCP project to Apono, you will be redirected to the Connected tab to view your integrations. The new GCP integration will initialize once it completes its first data fetch. Upon completion, the integration will be marked Active.

    Now that you have completed this integration, you can create access flows that grant permission to GCP project roles.

    Item

    Description

    Apono Connector

    On-prem connection serving as a bridge between a Windows DC server and Apono:

    • AWS

    • Azure

    • GCP

    User

    Windows Domain Controller user for Apono The Windows Domain Controller user must be one of the following:

    • Admin user

    • Custom role user with the following permissions:

      • GenericRead

    Secret

    Value generated with the credentials of the user you create based on the connector you are using.

    Apono does not store credentials. The Apono Connector uses the secret to communicate with services in your environment and separates the Apono web app from the environment for maximal security.


    Configure the Windows Domain Controller

    Before you begin integrating Windows Domain Controller with Apono, you must allow remote access with the Windows Remote Management (WinRM) service on your Windows machine.

    You can allow unencrypted or encrypted communication.

    Unencrypted Communication

    Follow these steps to configure the Windows Domain Controller:

    1. Add the WinRM port 5985 to the allowlist in the server firewall.

    2. Turn on the WinRM firewall rule in the Windows server.

    3. Analyze and configure the WinRM service to allow remote management on the local machine.

    circle-info

    If a confirmation prompt appears after running the following command, enter y.

    4. Set the WinRM service configuration to allow unencrypted traffic.

    5. Enable basic authentication for the WinRM service. Basic authentication transmits credentials in cleartext.

    You can now integrate the Windows Domain Controller.
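    After completing the steps above, you can optionally verify the result. This is a minimal sketch using the standard winrm command-line tool on the Windows server; output formatting varies by Windows version.

    ```shell
    # Inspect the WinRM service configuration and confirm that
    # AllowUnencrypted is set to true.
    winrm get winrm/config/service

    # Confirm that Basic authentication is enabled.
    winrm get winrm/config/service/auth
    ```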

    Encrypted Communication

    Follow these steps to configure the Windows Domain Controller:

    1. Add the WinRM port 5985 to the allowlist in the server firewall.

    2. Turn on the WinRM firewall rule in the Windows server.

    3. Analyze and configure the WinRM service to allow remote management on the local machine.

    circle-info

    If a confirmation prompt appears after running the following command, enter y.

    4. Enable basic authentication for the WinRM service. Basic authentication transmits credentials in cleartext.

    5. Configure WinRM HTTPS access on the target machine.

    circle-info

    Configuring WinRM to use HTTPS encrypts data transmitted between the client and server, protecting sensitive information from interception. To enable HTTPS, ensure a valid server authentication certificate is installed on the target machine.

    You can now integrate the Windows Domain Controller.
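    As a sketch of the HTTPS step, the listener can be created with the winrm tool once a suitable Server Authentication certificate is installed. The hostname and certificate thumbprint below are placeholders you must replace with your own values.

    ```shell
    # Create an HTTPS listener bound to the installed certificate
    # (replace <HOSTNAME> and <THUMBPRINT> with your values).
    winrm create winrm/config/Listener?Address=*+Transport=HTTPS @{Hostname="<HOSTNAME>"; CertificateThumbprint="<THUMBPRINT>"}

    # Allow the default WinRM HTTPS port through the firewall.
    netsh advfirewall firewall add rule name="WinRM HTTPS" dir=in action=allow protocol=TCP localport=5986
    ```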


    Integrate a Windows Domain Controller

    circle-exclamation

    WinRM HTTPS requires a local computer Server Authentication certificate with a CN matching the hostname to be installed. The certificate must not be expired, revoked, or self-signed.

    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 11, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalog tab, click Windows Domain Controller. The Connect Integration page appears.

    2. Under Discovery, select one or multiple resource types for Apono to discover in all instances of the environment.

    3. Click Next. The Apono connector section expands.

    4. From the dropdown menu, select a connector.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, Kubernetes).

    5. Click Next. The Integration Config page appears.

    6. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    7. Click Next. The Secret Store section expands.

    8. Associate the secret or credentials.

    9. Click Next. The Get more with Apono section expands.

    10. Define the Get more with Apono settings.

      Setting
      Description
    11. Click Confirm.

    💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

    Refer to the Apono Terraform provider documentation for more details about the schema definition.

    Now that you have completed this integration, you can create access flows that grant permission to your Windows Domain Controller.

    AlloyDB

    Create an integration to manage access to an AlloyDB instance

    AlloyDB is a fully managed PostgreSQL-compatible database service on Google Cloud. It offers high performance, scalability, and reliability for demanding enterprise workloads.

    Through this integration, Apono helps you securely manage access to your AlloyDB instance.


    Prerequisites

    Item
    Description

    Assign roles to the Apono connector

    Use the following tabs to assign roles to the Apono connector for either your project or organization.

    Follow these steps to assign roles to the Apono connector:

    1. In your shell environment, log in to Google Cloud and enable the required APIs.

    1. Set the environment variables.


    Create an AlloyDB user

    You must create a user in your AlloyDB instance for the Apono connector and grant that user permissions.

    Use the following steps to create a user for the Apono connector and grant it permissions:

    1. Create a new user and grant permissions with either built-in authentication or IAM authentication.

    Run the following commands from your PostgreSQL client.

    1. In the Google Cloud console, enable IAM authentication for your AlloyDB instance by setting the alloydb.iam_authentication flag to on.

    2. Run the following command to grant superuser privileges to the Apono connector user.

    2. (Built-in Authentication only) Create a secret with the credentials from step 1.

    circle-check

    When using IAM authentication, the service account and its permissions are managed through Google Cloud IAM roles and policies.

    A secret does not need to be created.
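    As a sketch of that IAM-side setup (the project ID and service account name are placeholders), the database-level IAM role can be granted with gcloud:

    ```shell
    # Grant the AlloyDB IAM database user role to the connector's
    # service account (replace the placeholders with your values).
    gcloud projects add-iam-policy-binding <GOOGLE_PROJECT_ID> \
      --member="serviceAccount:<SERVICE_ACCOUNT_NAME>@<GOOGLE_PROJECT_ID>.iam.gserviceaccount.com" \
      --role="roles/alloydb.databaseUser"
    ```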


    Integrate AlloyDB

    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 11, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalog tab, click AlloyDB. The Connect Integration page appears.

    2. Under Discovery, select one or multiple resource types for Apono to discover in the instance.

    3. Click Next. The Apono connector section expands.

    4. From the dropdown menu, select a connector.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector.

    5. Click Next. The Integration Config page appears.

    6. Define the Integration Config settings.

    Setting
    Description
    7. Click Next. The Secret Store section expands.

    8. Associate the secret or credentials.

    9. Click Next. The Get more with Apono section expands.

    10. Define the Get more with Apono settings.

    Setting
    Description
    11. Click Confirm.

    💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

    Now that you have completed this integration, you can create access flows that grant permission to your AlloyDB instance.

    AWS Lambda Custom Integration

    Learn how to integrate an AWS Lambda Custom Integration with Apono

    AWS Lambda enables you to build and connect cloud services and internal web apps by writing single-purpose functions that are attached to events emitted from your cloud infrastructure and services.

    Its serverless architecture frees you to write, test, and deploy functions quickly without having to manage infrastructure setup.

    With this integration, you can connect your internal applications to AWS Lambda functions and manage access to those applications with Apono.



    Snowflake

    Create an integration to manage access to a Snowflake instance

    Snowflake is a fully managed, cloud-based data platform that functions as a data warehouse, data lake, and data sharing solution. With features such as automatic scaling, secure data sharing, and robust data integration, Snowflake offers high performance and flexibility, ensuring seamless data management and analytics.

    Through this integration, Apono helps you securely manage access to your Snowflake instance.


    Prerequisites

    Integrate with Azure Management Group or Subscription

    Create an integration to manage access to your Azure services

    Apono offers Azure users a simple way to centralize cloud management through our platform. Through a single integration, you can manage multiple Azure services across various management groups and subscriptions.


    Prerequisites

    Item
    Description

    Integrate with AKS

    Create an integration to manage access to a Kubernetes cluster on Azure

    Azure Kubernetes Service (AKS) on Microsoft Azure simplifies the management complexities of Kubernetes.

    Through this integration, Apono helps you securely manage access to your Microsoft Azure Kubernetes cluster.


    Prerequisites

    Item
    $accessToken = (Get-AzAccessToken -ResourceUrl "https://graph.microsoft.com").Token
    
    $payload = @{
        principalId = $PRINCIPAL_ID
        roleDefinitionId = "88d8e3e3-8f55-4a1e-953a-9b9898b8876b"
        directoryScopeId = "/"
    } | ConvertTo-Json -Depth 3
    
    $headers = @{
        "Authorization" = "Bearer $accessToken"
        "Content-Type"  = "application/json"
    }
    
    Invoke-RestMethod -Method POST -Uri "https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments" -Headers $headers -Body $payload
    $accessToken = (Get-AzAccessToken -ResourceUrl "https://graph.microsoft.com").Token
    
    $headers = @{
        "Authorization" = "Bearer $accessToken"
        "Content-Type"  = "application/json"
    }
    
    $payload1 = @{
        principalId       = $PRINCIPAL_ID
        roleDefinitionId  = "fdd7a751-b60b-444a-984c-02652fe8fa1c"  # Role ID 1
        directoryScopeId  = "/"
    } | ConvertTo-Json -Depth 3
    
    Invoke-RestMethod -Method POST -Uri "https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments" -Headers $headers -Body $payload1
    
    $payload2 = @{
        principalId       = $PRINCIPAL_ID
        roleDefinitionId  = "e8611ab8-c189-46e8-94e1-60213ab1f814"  # Role ID 2
        directoryScopeId  = "/"
    } | ConvertTo-Json -Depth 3
    
    Invoke-RestMethod -Method POST -Uri "https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments" -Headers $headers -Body $payload2
    New-AzRoleAssignment -ObjectId $PRINCIPAL_ID `
        -ObjectType "ServicePrincipal" `
        -RoleDefinitionName "User Access Administrator" `
        -Scope "/providers/Microsoft.Management/managementGroups/$env:MANAGEMENT_GROUP_NAME"
    New-AzRoleAssignment -ObjectId $PRINCIPAL_ID `
        -ObjectType "ServicePrincipal" `
        -RoleDefinitionName "Tag Contributor" `
        -Scope "/providers/Microsoft.Management/managementGroups/$env:MANAGEMENT_GROUP_NAME"
    $accessToken = (Get-AzAccessToken -ResourceUrl "https://graph.microsoft.com").Token
    
    $payload = @{
        principalId = $PRINCIPAL_ID
        roleDefinitionId = "88d8e3e3-8f55-4a1e-953a-9b9898b8876b"
        directoryScopeId = "/"
    } | ConvertTo-Json -Depth 3
    
    $headers = @{
        "Authorization" = "Bearer $accessToken"
        "Content-Type"  = "application/json"
    }
    
    Invoke-RestMethod -Method POST -Uri "https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments" -Headers $headers -Body $payload
    $accessToken = (Get-AzAccessToken -ResourceUrl "https://graph.microsoft.com").Token
    
    $headers = @{
        "Authorization" = "Bearer $accessToken"
        "Content-Type"  = "application/json"
    }
    
    $payload1 = @{
        principalId       = $PRINCIPAL_ID
        roleDefinitionId  = "fdd7a751-b60b-444a-984c-02652fe8fa1c"  # Role ID 1
        directoryScopeId  = "/"
    } | ConvertTo-Json -Depth 3
    
    Invoke-RestMethod -Method POST -Uri "https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments" -Headers $headers -Body $payload1
    
    $payload2 = @{
        principalId       = $PRINCIPAL_ID
        roleDefinitionId  = "e8611ab8-c189-46e8-94e1-60213ab1f814"  # Role ID 2
        directoryScopeId  = "/"
    } | ConvertTo-Json -Depth 3
    
    Invoke-RestMethod -Method POST -Uri "https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments" -Headers $headers -Body $payload2
    $env:APONO_CONNECTOR_ID = "<A_UNIQUE_CONNECTOR_NAME>"
    $env:APONO_TOKEN = "<APONO_TOKEN>"
    $env:SUBSCRIPTION_ID = "<AZURE_SUBSCRIPTION_ID>"
    $env:RESOURCE_GROUP_NAME = "<AZURE_RESOURCE_GROUP_NAME>"
    Connect-AzAccount
    $env:REGION=$(Get-AzResourceGroup -Name $env:RESOURCE_GROUP_NAME).Location
    $port = New-AzContainerInstancePortObject -Port 80 -Protocol TCP
    
    $env_var1 = New-AzContainerInstanceEnvironmentVariableObject -Name "APONO_CONNECTOR_ID" -Value $env:APONO_CONNECTOR_ID
    
    $env_var2 = New-AzContainerInstanceEnvironmentVariableObject -Name "APONO_TOKEN" -Value $env:APONO_TOKEN
    
    $env_var3 = New-AzContainerInstanceEnvironmentVariableObject -Name "APONO_URL" -Value "api.apono.io"
    
    $jsonValue = @{
        cloud_provider = "AZURE"
        subscription_id = $env:SUBSCRIPTION_ID
        resource_group = $env:RESOURCE_GROUP_NAME
        region = $env:REGION
        is_azure_admin = $true
    } | ConvertTo-Json -Compress
    
    $env_var4 = New-AzContainerInstanceEnvironmentVariableObject -Name "CONNECTOR_METADATA" -Value $jsonValue
    
    $container = New-AzContainerInstanceObject -Image registry.apono.io/apono-connector:v1.7.6 -Name $env:APONO_CONNECTOR_ID -Port @($port) -EnvironmentVariable @($env_var1, $env_var2, $env_var3, $env_var4) -RequestCpu 1 -RequestMemoryInGb 2
    
    $imageRegistryCredential = New-AzContainerGroupImageRegistryCredentialObject -Server "registry.apono.io" -Username "apono" -Password (ConvertTo-SecureString $env:APONO_TOKEN -AsPlainText -Force)
    
    $PRINCIPAL_ID=$(New-AzContainerGroup -SubscriptionId $env:SUBSCRIPTION_ID -ResourceGroupName $env:RESOURCE_GROUP_NAME -Name $env:APONO_CONNECTOR_ID -Container $container -OsType Linux -ImageRegistryCredential $imageRegistryCredential -Location $env:REGION -IdentityType "SystemAssigned").IdentityPrincipalId
    New-AzRoleAssignment -ObjectId $PRINCIPAL_ID `
        -ObjectType "ServicePrincipal" `
        -RoleDefinitionName "User Access Administrator" `
        -Scope "/subscriptions/$env:SUBSCRIPTION_ID"
    New-AzRoleAssignment -ObjectId $PRINCIPAL_ID `
        -ObjectType "ServicePrincipal" `
        -RoleDefinitionName "Tag Contributor" `
        -Scope "/subscriptions/$env:SUBSCRIPTION_ID"
    ALTER ROLE "<CONNECTOR_USERNAME>" WITH CREATEROLE;
    GRANT cloudsqlsuperuser TO "<CONNECTOR_USERNAME>";
    DO $$
    DECLARE
      database_name text;
    BEGIN
      FOR database_name IN (SELECT datname FROM pg_database WHERE datname != 'template0' AND datname != 'cloudsqladmin') LOOP
        EXECUTE 'GRANT ALL PRIVILEGES ON DATABASE ' || quote_ident(database_name) || ' TO cloudsqlsuperuser WITH GRANT OPTION';
      END LOOP;
    END; $$
    
    DO $$
    DECLARE
      schema text;
    BEGIN
      FOR schema IN (SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT LIKE 'pg_%' AND schema_name != 'information_schema' AND schema_name != 'cron') LOOP
        EXECUTE 'GRANT ALL PRIVILEGES ON SCHEMA ' || quote_ident(schema) || ' TO cloudsqlsuperuser WITH GRANT OPTION';
        EXECUTE 'GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA ' || quote_ident(schema) || ' TO cloudsqlsuperuser WITH GRANT OPTION';
        EXECUTE 'GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA ' || quote_ident(schema) || ' TO cloudsqlsuperuser WITH GRANT OPTION';
        EXECUTE 'GRANT ALL PRIVILEGES ON ALL FUNCTIONS IN SCHEMA ' || quote_ident(schema) || ' TO cloudsqlsuperuser WITH GRANT OPTION';
      END LOOP;
      EXECUTE 'ALTER DEFAULT PRIVILEGES GRANT ALL PRIVILEGES ON TABLES TO cloudsqlsuperuser WITH GRANT OPTION';
      EXECUTE 'ALTER DEFAULT PRIVILEGES GRANT ALL PRIVILEGES ON SEQUENCES TO cloudsqlsuperuser WITH GRANT OPTION';
      EXECUTE 'ALTER DEFAULT PRIVILEGES GRANT ALL PRIVILEGES ON FUNCTIONS TO cloudsqlsuperuser WITH GRANT OPTION';
      EXECUTE 'ALTER DEFAULT PRIVILEGES GRANT ALL PRIVILEGES ON SCHEMAS TO cloudsqlsuperuser WITH GRANT OPTION';
    END; $$
    DO $$
    DECLARE
      schema text;
    BEGIN
      FOR schema IN (SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT LIKE 'pg_%' AND schema_name != 'information_schema' AND schema_name != 'cron') LOOP
        EXECUTE 'GRANT ALL PRIVILEGES ON SCHEMA ' || quote_ident(schema) || ' TO cloudsqlsuperuser WITH GRANT OPTION';
        EXECUTE 'GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA ' || quote_ident(schema) || ' TO cloudsqlsuperuser WITH GRANT OPTION';
        EXECUTE 'GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA ' || quote_ident(schema) || ' TO cloudsqlsuperuser WITH GRANT OPTION';
        EXECUTE 'GRANT ALL PRIVILEGES ON ALL FUNCTIONS IN SCHEMA ' || quote_ident(schema) || ' TO cloudsqlsuperuser WITH GRANT OPTION';
      END LOOP;
      EXECUTE 'ALTER DEFAULT PRIVILEGES GRANT ALL PRIVILEGES ON TABLES TO cloudsqlsuperuser WITH GRANT OPTION';
      EXECUTE 'ALTER DEFAULT PRIVILEGES GRANT ALL PRIVILEGES ON SEQUENCES TO cloudsqlsuperuser WITH GRANT OPTION';
      EXECUTE 'ALTER DEFAULT PRIVILEGES GRANT ALL PRIVILEGES ON FUNCTIONS TO cloudsqlsuperuser WITH GRANT OPTION';
      EXECUTE 'ALTER DEFAULT PRIVILEGES GRANT ALL PRIVILEGES ON SCHEMAS TO cloudsqlsuperuser WITH GRANT OPTION';
    END; $$
    CREATE USER apono_connector IDENTIFIED BY <PASSWORD>;
    ALTER USER apono_connector DEFAULT TABLESPACE users;
    ALTER USER apono_connector TEMPORARY TABLESPACE temp;
    ALTER USER apono_connector QUOTA UNLIMITED ON users;
    GRANT CREATE SESSION TO apono_connector;
    GRANT CONNECT, RESOURCE TO apono_connector;
    GRANT CREATE USER TO apono_connector;
    GRANT ALTER USER TO apono_connector;
    GRANT DROP USER TO apono_connector;
    GRANT ALTER SYSTEM TO apono_connector;
    GRANT SELECT_CATALOG_ROLE TO apono_connector;
    GRANT GRANT ANY ROLE TO apono_connector;
    GRANT CREATE ROLE TO apono_connector;
    GRANT DROP ANY ROLE TO apono_connector;
    GRANT GRANT ANY OBJECT PRIVILEGE TO apono_connector;
    GRANT GRANT ANY PRIVILEGE TO apono_connector;  
    ALTER USER apono_connector WITH CREATEROLE;
    GRANT rds_superuser TO apono_connector;
    {
         "Version": "2012-10-17",
         "Statement": [
             {
                 "Effect": "Allow",
                 "Action": [
                     "rds-db:connect"
                 ],
                 "Resource": [
                     "arn:aws:rds-db:*:*:dbuser:*/${SAML:sub}"
                 ]
             },
             {
                 "Effect": "Allow",
                 "Action": [
                     "rds:DescribeDBInstances"
                 ],
                 "Resource": [
                     "arn:aws:rds:*:*:db:*"
                 ]
             }
         ]
     }
    gcloud auth login
    gcloud services enable cloudresourcemanager.googleapis.com
    gcloud services enable iam.googleapis.com
    export GCP_PROJECT_ID=<GOOGLE_PROJECT_ID>
    export SERVICE_ACCOUNT_NAME=<SERVICE_ACCOUNT_NAME>
    export GCP_ORGANIZATION_ID=<GOOGLE_ORGANIZATION_ID>
    export GCP_PROJECT_ID=<GOOGLE_PROJECT_ID>
    export SERVICE_ACCOUNT_NAME=<SERVICE_ACCOUNT_NAME>
    gcloud iam roles create <ROLE_ID> --project=$GCP_PROJECT_ID --title="<TITLE>" --description="<DESCRIPTION>" --permissions=bigquery.datasets.get,bigquery.datasets.update,bigquery.datasets.getIamPolicy,bigquery.datasets.setIamPolicy --stage=ALPHA
    gcloud iam roles create <ROLE_ID> --organization=$GCP_ORGANIZATION_ID --title="<TITLE>" --description="<DESCRIPTION>" --permissions=bigquery.datasets.get,bigquery.datasets.update,bigquery.datasets.getIamPolicy,bigquery.datasets.setIamPolicy --stage=ALPHA
    gcloud projects add-iam-policy-binding $GCP_PROJECT_ID --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com" --role="projects/$GCP_PROJECT_ID/roles/<ROLE_ID>"
    gcloud organizations add-iam-policy-binding $GCP_ORGANIZATION_ID --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com" --role="organizations/$GCP_ORGANIZATION_ID/roles/<ROLE_ID>"
    gcloud auth login
    gcloud services enable cloudasset.googleapis.com --project=<GOOGLE_PROJECT_ID>
    winrm quickconfig
    winrm set winrm/config/service @{AllowUnencrypted="true"}
    winrm set winrm/config/service/Auth @{Basic="true"}
    winrm quickconfig
    winrm set winrm/config/service/Auth @{Basic="true"}


    Auth Type

    Authorization type for the PostgreSQL service account user:

    • IAM Auth: IAM authentication

    • User / Password: Built-in authentication

    Region

    Location where the PostgreSQL database is deployed

    Instance ID

    ID of the PostgreSQL instance

    Database Name

    Name of the PostgreSQL database

    SSL Mode

    (Optional) Mode of Secure Sockets Layer (SSL) encryption used to secure the connection with the SQL database server

    • require: An SSL-encrypted connection must be used.

    • allow: An SSL-encrypted or unencrypted connection is used. If an SSL encrypted connection is unavailable, the unencrypted connection is used.

    • disable: An unencrypted connection is used.

    • prefer: An SSL-encrypted connection is attempted. If the encrypted connection is unavailable, the unencrypted connection is used.

    • verify-ca: An SSL-encrypted connection must be used and a server certification verification against the provided CA certificates must pass.

    • verify-full: An SSL-encrypted connection must be used and a server certification verification against the provided CA certificates must pass. Additionally, the server hostname is checked against the certificate's names.

    Enable Audit

    (Optional) Feature that allows Apono to ingest and aggregate session audit logs

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.
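    To illustrate how the SSL Mode setting maps to a client connection, here is a minimal sketch of a PostgreSQL connection string using verify-full. The host, database name, and CA certificate path are hypothetical placeholders.

    ```shell
    # Hypothetical connection string for the connector user with full
    # certificate verification (host, dbname, and CA path are placeholders).
    CONN="host=db.example.internal port=5432 dbname=appdb user=apono_connector sslmode=verify-full sslrootcert=/etc/ssl/certs/ca.pem"
    echo "$CONN"
    ```

    A client such as psql accepts this string directly, e.g. `psql "$CONN"`.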

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": "rds-db:connect",
                "Resource": "arn:aws:rds-db:*:*:dbuser:*/apono_connector",
                "Effect": "Allow"
            }
        ]
    }
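    The policy above permits the apono_connector database user to authenticate with IAM. As a hedged sketch of how a client would use it (the hostname, port, and region below are placeholders), a short-lived token is generated with the AWS CLI and supplied as the database password:

    ```shell
    # Generate a short-lived IAM authentication token for the RDS instance
    # (hostname, port, and region are placeholders).
    TOKEN=$(aws rds generate-db-auth-token \
      --hostname mydb.example.us-east-1.rds.amazonaws.com \
      --port 5432 \
      --username apono_connector \
      --region us-east-1)
    ```

    The token is then passed to the database client in place of a password; it expires after 15 minutes.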


    Service Account Name

    Organization ID

    GCP organization ID

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

    Project ID

    GCP project ID

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.



  • ListChildren

  • CreateChild

  • DeleteChild

  • ListObject

  • WriteMember

  • ResetPassword

  • Delete

  • Host

    DNS name or IP address of the RDP server to connect to

    WinRM Port

    WinRM port value for the server. By default, Apono sets this value to 5985.

    RDP Port

    (Optional) RDP port value. By default, Apono sets this value to 3389.

    Use SSL connection

    Encrypted or unencrypted connection indicator. Possible values:

    • false: Unencrypted (unsecure) connection

    • true: Encrypted (secure) connection

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner


    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Assign roles to the connector.

    Follow these steps to assign roles to the Apono connector:

    1. In your shell environment, log in to Google Cloud and enable the API.

    gcloud auth login
    gcloud services enable alloydb.googleapis.com
    
    2. Set the environment variables.

    export GCP_ORGANIZATION_ID=<GOOGLE_ORGANIZATION_ID>
    export GCP_PROJECT_ID=<GOOGLE_PROJECT_ID>
    
    3. Assign roles to the connector.

    From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    Port

    Port value for the database

    By default, Apono sets this value to 5432.

    Instance ID User Override (optional)

    Overrides the instance ID for the user

    Database Name

    Name of the database to integrate

    By default, Apono sets this value to postgres.

    SSL Mode

    (Optional) Mode of Secure Sockets Layer (SSL) encryption used to secure the connection with the SQL database server

    Be sure to choose the SSL mode based on your AlloyDB primary instance's SSL mode configuration:

    • require: An SSL-encrypted connection must be used.

    • allow: An SSL-encrypted or unencrypted connection is used. If an SSL encrypted connection is unavailable, the unencrypted connection is used.
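    For reference, the selected mode is typically passed by the client at connection time. This is an illustrative libpq-style connection URI with hypothetical host, user, and database names (not values from your environment):

    ```text
    postgresql://apono_user@10.0.0.5:5432/postgres?sslmode=require
    ```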

    Define the Get more with Apono settings.
    Click to copy the code.
  • Make any additional edits.

  • Deploy the code in your Terraform.

  • Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

    Apono Connector

    On-prem connection serving as a bridge between your AlloyDB instances and Apono

    Minimum Required Version: 1.6.4

    Use the following steps to update an existing connector.

    Allow Connector IP Access

    Allows the Apono connector to communicate with the AlloyDB instance

    You must allow the connector IP range in the AlloyDB primary instance's IP allow list.

    API Services

    API services that must be enabled:

    • AlloyDB API

    • Compute Engine API

    • Service Networking API

    See Enabling and Disabling Services in the Google Cloud documentation for more information.
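    The three APIs above can also be enabled from the CLI. A sketch, assuming gcloud is already authenticated and <GCP_PROJECT_ID> is your project; the service names are the standard Google Cloud identifiers for these APIs:

    ```shell
    # Enable the AlloyDB, Compute Engine, and Service Networking APIs
    gcloud services enable alloydb.googleapis.com \
        compute.googleapis.com \
        servicenetworking.googleapis.com \
        --project <GCP_PROJECT_ID>
    ```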

    AlloyDB Information

    Identifiers for AlloyDB resources:

    • Primary Instance ID

    • Cluster ID

    See View instance detailsarrow-up-right to learn how to obtain these identifiers.

    Integration Name

    Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    Auth Type

    Authorization type for the AlloyDB user:

    • User / Password: Apono-created local user credentials

    • IAM Authentication: Cloud IAM authentication

    Project ID

    ID of the project associated with the AlloyDB instance

    Location

    Location of the AlloyDB instance

    Primary Instance ID

    ID for the primary instance within the AlloyDB cluster

    Cluster ID

    ID for the AlloyDB cluster

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

    gcloud auth login
    gcloud services enable cloudresourcemanager.googleapis.com
    gcloud services enable iam.googleapis.com
    export GCP_PROJECT_ID=<GOOGLE_PROJECT_ID>
    export SERVICE_ACCOUNT_NAME=<SERVICE_ACCOUNT_NAME>
    gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
        --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
        --role="roles/alloydb.admin" \
        --project $GCP_PROJECT_ID
    Prerequisites

    Before starting this integration, create the items listed in the following table.

    Item
    Description

    Apono Connector

    On-prem connection serving as a bridge between your AWS Lambda functions and Apono. Minimum Required Version: 1.4.1. Use the following steps to update an existing connector.

    Lambda Function

    Named function set up within AWS Lambda

    When creating the Lambda function, apply the tag apono-connector-access: "true".

    chevron-rightSample Lambda Functionhashtag
    function listResources(params) {
      return {
    

    listResources

    Parameter
    Description

    resources[]

    Manageable resources to display in Apono that users can be granted access to

    Each item represents a single object the integration can grant or revoke access to.

    permissions[]

    Permissions to resources that can be granted to users, such as Read and Write
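    Taken together, the two parameters above suggest a response shape like the following Python sketch. This is an illustration only, not the authoritative schema or the Lambda sample itself; field names beyond id, name, type, and metadata are assumptions:

    ```python
    # Hypothetical sketch of a listResources-style payload, mirroring the
    # parameter table above: a resources[] list of grantable objects and a
    # permissions[] list of grantable permission levels.
    def list_resources(params):
        return {
            "resources": [
                {
                    "id": "resource1",
                    "name": "Resource 1",
                    "type": params["resource_type"],
                    "metadata": {"key1": "value1"},
                },
            ],
            "permissions": [
                {"id": "read", "name": "Read"},
                {"id": "write", "name": "Write"},
            ],
        }

    payload = list_resources({"resource_type": "bucket"})
    print(sorted(payload))  # → ['permissions', 'resources']
    ```
    
    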

    resources[]

    Parameter
    Description

    permissions[]

    Parameter
    Description

    grantAccess

    Parameter
    Description

    revokeAccess

    Parameter
    Description

    createCredentials

    Parameter
    Description

    hashtag
    Integrate an AWS Lambda Custom Integration

    AWS Lambda Custom Integration tile
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 8, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalogarrow-up-right tab, click AWS Lambda Custom Integration. The Connect Integration page appears.

    2. Under Discovery, click Next. The Apono connector section expands.

    3. From the dropdown menu, select a connector.

    circle-info

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating an AWS connector.

    1. Click Next. The Integration Config section expands.

    2. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    3. Click Next. The Get more with Apono section expands.

    4. Define the Get more with Apono settings.

      Setting
      Description
    5. Click Confirm.

    chevron-right💡Are you integrating with Apono using Terraform?hashtag

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

    Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

    Now that you have completed this integration, you can create access flows that grant permission to your AWS Lambda function.

    Item
    Description

    Apono Connector

    On-prem connection serving as a bridge between a Snowflake instance and Apono:

    OpenSSL

    OpenSSL command-line tool installed on your local machine

    OpenSSL is an open-source toolkit for implementing the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols.

    Snowflake account

    Snowflake account with administrative access

    Snowflake Hostname

    Unique identifier of the Snowflake instance to connect. You can use either format:

    • <organization_name>-<account_name> (Format 1)

    • <organization_name>-<account_name>.privatelink (if using a private connectivity URL)

    NOTE: If your Snowflake hostname uses the <account_locator>.<cloud_region_id> format, you must switch to one of the accepted formats above.

    Multi-Factor Authentication (MFA)

    MFA for the Snowflake account

    Admins must enable MFA for the Snowflake account due to Snowflake’s recent deprecation of non-MFA authentication.

    Follow these steps to enable MFA:

    1. In the Snowflake UI, go to Settings > Authentication.

    2. Click Add new authentication method.

    3. Follow the prompts to register your chosen authentication method (for example, Passkey or Authenticator).

    Public / Private Key Pair

    Key-pair authentication and rotation for Snowflake using public and private keys

    Learn how to generate a key pair below.

    For additional information, visit Snowflake’s documentation.

    hashtag
    Generate a key pair

    Follow these steps to generate a public-private key pair for authentication between the Apono connector and your Snowflake instance:

    1. In your terminal, run the following command to create a private key.

    2. When prompted, enter a passphrase for the private key.

    circle-check

    Save this passphrase securely. You will need it later when configuring the Apono integration.

    3. In your terminal, run the following command to create a public key.

    4. When prompted, enter the passphrase you created in step 2.

    Your key pair files are now ready for use during authentication.

    Key
    Value

    Private key

    rsa_key.p8

    Public key

    rsa_key.pub

    You will assign the public key to your connector user in Snowflake and add the private key (and its passphrase, if applicable) to your Apono Secret.


    hashtag
    Create a Snowflake user

    You must create a user in your Snowflake instance for the Apono connector and grant that user permissions to your instance.

    Follow these steps to create a user for the Apono connector:

    1. Create a new rolearrow-up-right called APONOADMIN.

    2. Grant the following access to the role. These permissions allow the connector to create users and roles, manage role grants, and monitor account activity, such as running SHOW commands or viewing users, roles, and sessions.

    3. Create a userarrow-up-right for the Apono connector. Use APONO_CONNECTOR or another name of your choosing for the username. Be sure to set a strong password for the user.

    4. In your Snowflake worksheet, assign the public key to the connector user by copying the key content from your rsa_key.pub file (excluding the -----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY----- lines). Be sure to replace {PUBLIC_KEY} with your actual key value.

    circle-info

    This step enables key-pair authentication for the Apono connector. The private key (and passphrase, if applicable) will be stored later in your Apono Secret.

    5. Assign the APONOADMIN role to the user.

    6. (Optional) Set the default role for the user.

    7. Enable multi-factor authentication (MFA).

    8. Create a secret with the credentials from step 3 and your public-private key pair. Use the following structure when generating the secret. Be sure to replace #PRIVATE_KEY and #PASSPHRASE with actual values copied from your rsa_key.p8 file (excluding the -----BEGIN ENCRYPTED PRIVATE KEY----- and -----END ENCRYPTED PRIVATE KEY----- lines). If you used a different name for the user, replace APONO_CONNECTOR with the name you assigned to the user.

    circle-check

    You can also input the credentials directly into the Apono UI during the integration process (step 8).

    You can now integrate your Snowflake instance.

    hashtag
    Enable multi-factor authentication (MFA)

    Admins must enable MFA for a Snowflake account due to Snowflake’s recent deprecation of non-MFA authentication.

    circle-exclamation

    Once MFA is enabled in Snowflake, it cannot be disabled. Password-based authentication will no longer work after MFA is activated.

    Follow these steps to enable MFA:

    1. In the Snowflake UI, click Settings > Authentication.

    2. Click Add new authentication method.

    3. Follow the prompts to register your chosen authentication method (for example, Passkey or Authenticator).


    hashtag
    Integrate Snowflake

    Snowflake tile
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 11, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalogarrow-up-right tab, click Snowflake. The Connect Integration page appears.

    2. Under Discovery, select one or multiple resource types for Apono to discover in all instances of the environment.

    3. Click Next. The Apono connector section expands.

    4. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, Kubernetes).

    1. Click Next. The Integration Config section expands.

    2. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    3. Click Next. The Secret Store section expands.

    4. Associate the secret or credentials.

    circle-info

    If you select the Apono secret manager, enter the following values:

    1. Your Apono Username and Password to verify the apono-connector user. NOTE: The connector Password is a legacy field. Leave this value empty when using Snowflake’s updated version.

    2. Your Snowflake Private Key to authenticate using your Snowflake key-pair.

    3. Your Snowflake Private Key’s Passphrase, if the private key was generated with a passphrase.

    1. Click Next. The Get more with Apono section expands.

    2. Define the Get more with Apono settings.

      Setting
      Description

      Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    3. Click Confirm.

    chevron-right💡Are you integrating with Apono using Terraform?hashtag

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

    Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

    Now that you have completed this integration, you can create access flows that grant permission to your Snowflake instance.

    Apono Connector

    On-prem connection serving as a bridge between an Azure instance and Apono

    Install an Azure connector using one of these approaches:

    Tag Contributor Role

    Azure role applied to the Apono connector, allowing Apono to add tags to resources

    This role is required when Azure tags are applied to your resources. Refer to our Azure connector documentation in the previous row to learn how to assign this role.

    To understand how Apono uses this role, read about the Resource Owner feature.

    Azure Management Group ID

    Identifier for enabling efficient management of access, policies, and compliance across multiple subscriptions

    Azure Primary Domain

    Primary domain assigned to your tenant

    Item
    Description

    Apono Connector

    On-prem connection serving as a bridge between an Azure instance and Apono

    Install an Azure connector using one of these approaches:

    Tag Contributor Role

    Azure role applied to the Apono connector, allowing Apono to add tags to resources

    This role is required when Azure tags are applied to your resources. Refer to our Azure connector documentation in the previous row to learn how to assign this role.

    To understand how Apono uses this role, read about the Resource Owner feature.

    Azure Subscription ID

    Unique identifier assigned to an Azure subscription

    Azure Primary Domain

    Primary domain assigned to your tenant


    hashtag
    Integrate Azure

    Azure tile
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 10, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalogarrow-up-right tab, click Azure. The Connect Integration page appears.

    2. Under Discovery, choose Management Group.

    3. Select one or more resources.

    4. Click Next. The Apono connector section expands.

    5. From the dropdown menu, select a connector.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating an Azure connector.

    1. Click Next. The Integration Config section expands.

    2. Define the Integration Config settings.

      Setting
      Description
    chevron-right💡Are you integrating with Apono using Terraform?hashtag

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.


    Now that you have completed this integration, you can create access flows that grant permission to your Azure services.

    Description

    Apono Connector

    On-prem connector installed on the AKS cluster that serves as a bridge between the cluster and Apono

    Apono Premium

    Apono plan providing all available features and dedicated account support

    User Access Administrator Role

    Azure role that enables granting users the Azure Kubernetes Service Cluster User role. Apono does not require admin permissions to the Kubernetes environment.


    hashtag
    Integrate Azure Kubernetes Service (AKS)

    Azure Kubernetes Service tile
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 11, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalogarrow-up-right tab, click Azure Kubernetes Service (AKS). The Connect Integration page appears.

    2. Under Discovery, click one or more resource types and cloud services to sync with Apono.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage Access Flows to these resources.

    1. Click Next. The Apono connector section appears.

    2. From the dropdown menu, select a connector.

    circle-info

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a Kubernetes connector.

    1. Click Next. The Integration Config section expands.

    2. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    3. Click Next. The Secret Store section expands.

    4. Associate the secret or credentials.

    5. Click Next. The Get more with Apono section expands.

    6. Define the Get more with Apono settings.

      Setting
      Description
    7. Click Confirm.

    chevron-right💡Are you integrating with Apono using Terraform?hashtag

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

    Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

    Now that you have completed this integration, you can create access flows that grant permission to your Azure Kubernetes Service cluster.

    Integrate with EKS

    Create an integration to manage access to a Kubernetes cluster on AWS

    Elastic Kubernetes Service (EKS) on AWS simplifies the management complexities of Kubernetes.

    Through this integration, Apono helps you securely manage access to your AWS Elastic Kubernetes cluster.​


    hashtag
    Prerequisites

    ​


    hashtag
    Configure user authentication

    Authentication can be completed with an Identity and Access Management (IAM) user or an IAM role. To grant a user access to an EKS cluster, the IAM user or IAM role must be mapped to a specific user identifier, such as an email address. Apono supports this mapping with an IAM role through AWS SSO or SAML federation from any identity provider (IdP).

    hashtag
    Create a new policy

    Follow these steps to create a new policy:

    1. Under Access management on the IAM page in AWS, click Policies > Create policy. The Specify permissions page appears.

    2. Click JSON.

    3. Replace the default policy with the following policy. Be sure to replace the placeholder.
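    The policy body itself is not reproduced in this export. Purely to illustrate the expected shape, here is a hypothetical minimal policy granting describe access to one cluster; the action list and the ARN placeholder are assumptions, not Apono's published policy:

    ```json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["eks:DescribeCluster", "eks:ListClusters"],
          "Resource": "<EKS_CLUSTER_ARN>"
        }
      ]
    }
    ```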

    ​

    hashtag
    Create the IAM role

    Follow these steps to create the IAM role:

    1. Under Access management on the IAM page in AWS, click Roles > Create role. The Select trusted entity page appears.

    2. Under Trusted entity type, select Custom trust policy.

    3. Under Custom trust policy, replace the default policy with one of the following trust policies. Be sure to replace the placeholders.

    Placeholder
    Description
    1. Click Next. The Add permissions page appears.

    2. Under Permissions policies, select the newly created policy.

    3. Click Next. The Name, review, and create page appears.

    circle-info

    If an Overly permissive trust policy popup window appears, click Continue.

    hashtag
    Authenticate the EKS cluster

    Now that the IAM role has been created, you must authenticate the EKS cluster with the ConfigMap or EKS API.

    circle-check

    Read the AWS documentation to learn more about editing the aws-auth ConfigMap.
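    For the ConfigMap route, mapping the IAM role to a Kubernetes identity is done in the mapRoles section of the aws-auth ConfigMap. A hedged sketch only: the role ARN, username template, and group below are placeholders, not Apono-specific values:

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapRoles: |
        - rolearn: <APONO_ACCESS_ROLE_ARN>
          username: "{{SessionName}}"   # resolves to the assumed-role session name
          groups:
            - <KUBERNETES_GROUP_NAME>
    ```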

    Follow these steps to authenticate the cluster:

    1. Log into the EKS cluster with a user account that has the cluster admin permission.

    Now, you can integrate with Elastic Kubernetes Service (EKS).


    hashtag
    Integrate with Elastic Kubernetes Service (EKS)

    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 11, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalogarrow-up-right tab, click Elastic Kubernetes Service (EKS). The Connect Integration page appears.

    2. Under Discovery, click one or more resource types to sync with Apono.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage access flows to these resources.

    1. Click Next. The Apono connector section appears.

    2. From the dropdown menu, select a connector.

    circle-info

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating an Apono connector on an EKS cluster.

    1. Click Next. The Integration Config section expands.

    2. Define the Integration Config settings.

    circle-info

    When the Apono connector is installed on the EKS cluster, you do not need to enter values for the other optional fields.

    Setting
    Description
    1. Click Next. The Secret Store section expands.

    circle-info

    When the Apono connector is installed on the EKS cluster, you do not need to provide a secret.

    1. (Optional) Associate the secret or credentials.

    2. Click Next. The Get more with Apono section expands.

    3. Define the Get more with Apono settings.

      Setting
    chevron-right💡Are you integrating with Apono using Terraform?hashtag

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    Now that you have completed this integration, you can create access flows that grant permission to your Elastic Kubernetes Service cluster.


    hashtag
    Log in to EKS with Apono access details

    After a user gains access to an EKS resource, the user must authenticate with the cluster. The user must assume the IAM role created above.

    The following table shows two approaches to assume this role.

    Approach
    Details
    Placeholder
    Description
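    As an illustration of the CLI approach, a user could refresh their kubeconfig while assuming the granted role. The cluster name, region, and role ARN below are placeholders, not values from your environment:

    ```shell
    # Update kubeconfig so kubectl calls run under the assumed role
    aws eks update-kubeconfig \
        --name <CLUSTER_NAME> \
        --region <AWS_REGION> \
        --role-arn <APONO_ACCESS_ROLE_ARN>

    kubectl get pods   # runs with the assumed role's permissions
    ```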

    MongoDB Atlas Portal

    Create an integration to manage access to a MongoDB Atlas Portal instance and its resources

    Apono’s MongoDB Atlas integration enables you to securely manage just-in-time (JIT) access to your Atlas Organizations and Projects. You can connect Apono to a single cluster or discover all clusters in the organization.


    hashtag
    Single cluster

    With the single-cluster integration, Apono connects directly to one MongoDB Atlas cluster and discovers all of its resources for streamlined access management.

    CREATE USER CONNECTOR_USERNAME WITH PASSWORD 'password';
    GRANT alloydbsuperuser TO CONNECTOR_USERNAME;
    gcloud alloydb users set-superuser CONNECTOR_USERNAME_IAM_SA_EMAIL@[PROJECT_ID].iam \
    --superuser=true \
    --cluster=CLUSTER_ID \
    --region=REGION_ID
    openssl genrsa 2048 | openssl pkcs8 -topk8 -v2 des3 -inform PEM -out rsa_key.p8
    openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
    CREATE ROLE APONOADMIN;
    GRANT CREATE USER ON ACCOUNT TO ROLE APONOADMIN;
    GRANT CREATE ROLE ON ACCOUNT TO ROLE APONOADMIN;
    GRANT MANAGE GRANTS ON ACCOUNT TO ROLE APONOADMIN;
    GRANT MONITOR ON ACCOUNT TO ROLE APONOADMIN;
    CREATE USER APONO_CONNECTOR PASSWORD = 'password';
    ALTER USER APONO_CONNECTOR SET RSA_PUBLIC_KEY='{PUBLIC_KEY}';
    GRANT ROLE APONOADMIN TO USER APONO_CONNECTOR;
    ALTER USER APONO_CONNECTOR SET DEFAULT_ROLE = APONOADMIN;
    "username": "APONO_CONNECTOR",
    "private_key": "#PRIVATE_KEY"
    "passphrase": "#PASSPHRASE"

  • disable: An unencrypted connection is used.

  • prefer: An SSL-encrypted connection is attempted. If the encrypted connection is unavailable, the unencrypted connection is used.

  • verify-ca: An SSL-encrypted connection must be used and a server certification verification against the provided CA certificates must pass.

  • verify-full: An SSL-encrypted connection must be used and a server certification verification against the provided CA certificates must pass. Additionally, the server hostname is checked against the certificate's names.

    gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
    --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/serviceusage.serviceUsageConsumer" \
    --project $GCP_PROJECT_ID
    Enabling and Disabling Servicesarrow-up-right
    SSL mode configurationarrow-up-right
    gcloud organizations add-iam-policy-binding $GCP_ORGANIZATION_ID \
        --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
        --role="roles/alloydb.admin"
    
    gcloud organizations add-iam-policy-binding $GCP_ORGANIZATION_ID \
        --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
        --role="roles/serviceusage.serviceUsageConsumer"


    Hostname

    Hostname of the Snowflake instance to connect

    Auth Type

    (Optional) Authorization type for the Snowflake user

    • User / Password: Apono-created local user credentials

    • SSO Auth: Synced user credentials from IdP integration with Snowflake

    Role

    (Optional) User role associated with the Snowflake instance

    Default: ACCOUNTADMIN

    SSO Portal URL

    (Optional) URL for the SSO portal connected to your Snowflake instance

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

(Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

    Associate the secret or credentials
    Integration Config Metadataarrow-up-right
    AWS
    Azure
    GCP
    OpenSSLarrow-up-right
    Format 1arrow-up-right
    private connectivity URLarrow-up-right
    generate a key pair
    Snowflake’s documentationarrow-up-right
    Credentials Rotation Policy
    Format 2arrow-up-right
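The key-pair references above can also be followed from the command line. As a sketch based on Snowflake's key-pair authentication procedure (file names are illustrative), generate a PKCS#8 private key and the matching public key with OpenSSL; the public key is then registered on the Snowflake user:

```shell
# Generate an unencrypted PKCS#8 private key. Remove -nocrypt to be
# prompted for a passphrase and produce an encrypted key instead.
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -nocrypt -out rsa_key.p8

# Derive the public key to register on the Snowflake user
# (e.g., ALTER USER ... SET RSA_PUBLIC_KEY='...').
openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
```

Both output formats described in Snowflake's documentation start from a key pair generated this way.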

    Resource Owner

(Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

NOTE: When this setting is defined, an Integration Owner must also be defined.

function listResources(params) {
    return {
        resources: [
            {
                'id': 'resource1',
                'name': 'Resource 1',
                'type': params.resource_type,
                'metadata': {
                    'key1': 'value1'
                }
            },
            {
                'id': 'resource2',
                'name': 'Resource 2',
                'type': params.resource_type,
                'metadata': {
                    'key2': 'value2'
                }
            },
            {
                'id': 'resource3',
                'name': 'Resource 3',
                'type': params.resource_type,
                'metadata': {
                    'key3': 'value3'
                }
            }
        ],
        permissions: [
            {
                'id': 'admin',
                'name': 'Admin'
            },
            {
                'id': 'reader',
                'name': 'Reader'
            }
        ]
    };
}

function grantAccess(params) {
    const username = params.username;
    const grantId = params.grant_id;
    const resources = params.resources;
    const permission = params.permission;
    const param1 = params.custom_parameters.param1;
    const param2 = params.custom_parameters.param2;
    console.log(param1);
    console.log(param2);
    return {
        status: 'ok'
    };
}

function revokeAccess(params) {
    const username = params.username;
    const grantId = params.grant_id;
    const resources = params.resources;
    const permission = params.permission;
    const param1 = params.custom_parameters.param1;
    const param2 = params.custom_parameters.param2;
    return {
        status: 'ok'
    };
}

function createCredentials(params) {
    const username = params.username;
    const grantId = params.grant_id;
    const resources = params.resources;
    const param1 = params.custom_parameters.param1;
    const param2 = params.custom_parameters.param2;
    // Return the created credentials secret along with the status.
    return {
        status: 'ok',
        secret: 'created-credentials-secret'
    };
}

export const handler = async (event) => {
    const params = event.params;
    switch (event.event_type) {
        case 'create-credentials':
            return createCredentials(params);
        case 'list-resources':
            return listResources(params);
        case 'grant-access':
            return grantAccess(params);
        case 'revoke-access':
            return revokeAccess(params);
        case 'reset-credentials':
            return {
                status: 'ok',
                secret: 'reset-credentials-secret'
            };
        default:
            return {
                status: 'active'
            };
    }
};

    id

Unique resource identifier in the source system (such as an ARN) that you receive back in grantAccess or revokeAccess

    name

    Human-readable resource name to show in Apono

    type

    Resource type or service

    The value should always be the resource type (params.resource_type) that was passed in the request.

    metadata

    Tags or context associated with the resource

    Examples:

    • "environment" = "prod"

    • "region" = "us-east-1"

    id

    Integration-defined permission key you will receive back later in grantAccess

    name

    Display name for the permission shown in Apono to the requester

    username

    The Grantee’s email

    grant_id

    Apono’s unique ID for the request

    resources

    Resource IDs selected by the requester

    permission

    Permission ID chosen by the requester

    custom_parameters.param1 custom_parameters.param2

    Custom parameters defined for the Apono integration

    username

    The Grantee’s email

    grant_id

    Apono’s unique ID for the request

    resources

    Resources previously granted

    permission

    Permission to remove

    custom_parameters.param1 custom_parameters.param2

    Custom parameters defined for the Apono integration

    username

    The Grantee’s email

    grant_id

Apono’s unique ID for the request

    resources

    One or more target resources for which credentials should be created

    permission

Permission for which credentials should be created

    custom_parameters.param1 custom_parameters.param2

    Custom parameters defined for the Apono integration

    Custom Parameters

Key-value pairs to send to the Lambda function. For example, you can provide the Lambda function with a redirect URL that is used for internal provisioning access and passed as part of the action requests.

    Region

    Region of the AWS Lambda instance

    Function Name

Name of the AWS Lambda function

    Credential Rotation

(Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

(Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

    Integration Config Metadataarrow-up-right
    connection
    update an existing connector
    AWS Lambdaarrow-up-right
    tagarrow-up-right
    Sample Lambda Function
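To exercise a function like the sample Lambda end to end, you can invoke it directly with an event shaped the way the handler expects (event_type plus params). This is an illustrative sketch only: the function name, region, and parameter values below are placeholders, and the payload shape is inferred from the sample handler:

```shell
# Invoke the integration Lambda with a sample grant-access event.
# Function name, region, and all parameter values are placeholders.
aws lambda invoke \
  --region us-east-1 \
  --function-name apono-custom-integration \
  --cli-binary-format raw-in-base64-out \
  --payload '{
    "event_type": "grant-access",
    "params": {
      "username": "user@example.com",
      "grant_id": "grant-123",
      "resources": ["resource1"],
      "permission": "reader",
      "custom_parameters": {
        "param1": "value1",
        "param2": "value2"
      }
    }
  }' \
  response.json

cat response.json
```

A successful grant returns {"status": "ok"} in response.json.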

(Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Azure Management Group Id

    ID of a container for enabling efficient management of access, policies, and compliance across multiple subscriptions

    Azure Primary Domain

    (Optional) Initial domain assigned to your tenant

    Disable Locks

    (Optional) Allows Apono to temporarily remove locks from Azure resources in order to grant or revoke access, then automatically restore the locks after the operation

Learn more about Disable Locks.

  • Click Next. The Get more with Apono section expands.

  • Define the Get more with Apono settings.

    Setting
    Description

    Credential Rotation

(Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

(Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

  • Click Confirm.

  • Click to copy the code.
  • Make any additional edits.

  • Deploy the code in your Terraform.

  • Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

    On the Catalogarrow-up-right tab, click Azure. The Connect Integration page appears.

  • Under Discovery, choose Subscription.

  • Select one or more resources.

  • Click Next. The Apono connector section expands.

  • From the dropdown menu, select a connector.

  • circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating an Azure connector.

    1. Click Next. The Integration Config section expands.

    2. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    3. Click Next. The Get more with Apono section expands.

    4. Define the Get more with Apono settings.

      Setting
      Description
    5. Click Confirm.

💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

    Terraform

    Minimum Required Version: 1.3.6

    Learn how to update an existing Azure connector.

    Terraform

    Minimum Required Version: 1.3.6

    Learn how to update an existing Azure connector.

    Integration Name

    Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    Catalogarrow-up-right
    Azure
    Azure CLI
    PowerShell
    resource locksarrow-up-right
    Disable Locks
    ID of a containerarrow-up-right
    Initial domainarrow-up-right
    Azure CLI
    PowerShell
    resource locksarrow-up-right
    Disable Locks
    Unique identifierarrow-up-right
    Initial domainarrow-up-right

    Resource Owner

(Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

NOTE: When this setting is defined, an Integration Owner must also be defined.

    Server URL

    (Optional) URL of the Kubernetes API server used to interact with the Kubernetes cluster

    Certificate Authority

(Optional) Certificate that ensures that the Kubernetes API server is trusted and authentic. Leave this field empty if you want to connect the cluster where the connector is deployed.

    Resource Group

    (Optional) Resource group where the cluster is deployed This is the resourceGroupNamearrow-up-right.

    Cluster Name

    (Optional) Cluster name as it appears in AKS This is the resourceNamearrow-up-right.

    Subscription ID

    (Optional) Subscription ID where the cluster is deployed

    Credential Rotation

(Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

(Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

    Associate the secret or credentials
    Integration Config Metadataarrow-up-right
    connection
    Apono planarrow-up-right
    Azure rolearrow-up-right

(Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Placeholder
    Description

    <AWS_ACCOUNT_ID>

    AWS account ID where the EKS is hosted

  • Click Next. The Review and create page appears.

  • Enter a Policy name. This name is used to identify this policy.

  • Click Create policy.

  • For the Role name, enter apono-k8s-access.

  • For the Description, enter required for k8s access managed by Apono.

  • Click Create role.

  • Edit the aws-auth ConfigMap to include the following mapRoles entry. Be sure to replace the placeholder.

    Placeholder
    Description

    <AWS_ACCOUNT_ID>

    AWS account ID where the EKS is hosted

    Follow these steps to authenticate the cluster:

    1. Change the authentication modearrow-up-right to EKS API.

    2. Create the access entryarrow-up-right:

      • For the IAM principal, enter arn:aws:iam::<AWS_ACCOUNT_ID>:role/apono-k8s-access.

      • For the Username use apono:{{SessionName}}.

      • Choose Cluster as the access scope.
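The console steps above have AWS CLI equivalents. As a hedged sketch (cluster name is a placeholder; replace <AWS_ACCOUNT_ID> as in the rest of this guide):

```shell
# 1. Switch the cluster to the EKS API authentication mode.
aws eks update-cluster-config \
  --name <CLUSTER_NAME> \
  --access-config authenticationMode=API

# 2. Create the access entry for the apono-k8s-access role,
#    mapping sessions to the apono:{{SessionName}} username.
aws eks create-access-entry \
  --cluster-name <CLUSTER_NAME> \
  --principal-arn arn:aws:iam::<AWS_ACCOUNT_ID>:role/apono-k8s-access \
  --username "apono:{{SessionName}}" \
  --type STANDARD
```

Note that update-cluster-config is asynchronous; wait for the cluster to return to Active before creating the access entry.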

    Description

    Credential Rotation

(Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

(Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

(Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

(Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

NOTE: When this setting is defined, an Integration Owner must also be defined.

  • Click Confirm.

  • Click to copy the code.
  • Make any additional edits.

  • Deploy the code in your Terraform.

  • Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

    Item

    Description

    Apono Connector

Connection installed on the EKS cluster that serves as a bridge between the cluster and Apono

    Apono Premium

Apono planarrow-up-right providing all available features and dedicated account support

    Cluster Admin Access

Admin access to the cluster to integrate. The cluster admin access can be the built-in cluster-adminarrow-up-right role or an equivalent permission level. Apono does not require admin permissions to the Kubernetes environment.

    EKS Cluster Name

    Unique name of the clusterarrow-up-right to integrate

    AWS SSO | SAML Federation

Authentication for the requester. Security Assertion Markup Language (SAML) federation for authentication can be provided by providers such as Okta, OneLogin, JumpCloud, and Ping Identity.

    <AWS_ACCOUNT_ID>

    AWS account ID where the EKS is hosted

    <SAML_PROVIDER>

    Identity provider name

    Integration Name

    Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    Server URL

    (Optional) URL of the Kubernetes API server used to interact with the Kubernetes cluster

Certificate Authority

    (Optional) Certificate that ensures that the Kubernetes API server is trusted and authentic Leave this field empty if you want to connect the cluster where the connector is deployed.

    EKS Cluster Name

    Unique name of the cluster to integrate

    AWS Role Name

    (Optional) Role defined for the connector

    Region

    (Optional) Location where the AWS Elastic Kubernetes cluster is deployed

    AWS CLI

    In the AWS CLI, run the aws sts assume-role command. Be sure to replace the placeholders.

    Config File

    Edit ~/.aws/config to contain the following profile. Be sure to replace the placeholders.

    <AWS_ACCOUNT_ID>

    AWS account ID where the EKS is hosted

    <EMAIL>

    User email listed in the IdP

    Identity and Access Management (IAM)arrow-up-right
    Identity and Access Management (IAM)arrow-up-right
    Apply the aws-auth ConfigMap to your clusterarrow-up-right
    integrate with EKS
    Catalogarrow-up-right
    Apono Connector for Kubernetes
    Associate the secret or credentials
    create access flows
    apono-k8s-access role
Elastic Kubernetes Service (EKS) tile
Prerequisites
    Item
    Description

    Apono Connector

    On-prem connection serving as a bridge between a MongoDB Atlas instance and Apono:

    Atlas Command Line Interface (Atlas CLI)

    for provisioning and managing Atlas database deployments from the terminal

    MongoDB Atlas Info

    Information for the MongoDB Atlas UI resources to be integrated:

    • Cluster name

    • Organization ID

Create an API key

    You must create an API key with the Organization User role for the Apono connector.

    Follow these steps to create the API key:

    1. In the Atlas CLI, create the API key. The following command will return the public and private API keys in the response.

    circle-exclamation

    Be sure to replace <ORGANIZATION_ID> with the organization ID of the MongoDB Atlas UI to integrate.

    1. Using the keys from the previous step, create a secret for the MongoDB Atlas UI instance.
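As an illustrative sketch of that step using AWS Secrets Manager (the secret name is a placeholder, and the key names mirror the CLI response shown in this guide; Azure Key Vault works equivalently):

```shell
# Store the Atlas public/private API keys as a single JSON secret
# that the Apono connector can read. Replace the placeholder values
# with the keys returned by `atlas organizations apiKeys create`.
aws secretsmanager create-secret \
  --name apono/mongodb-atlas \
  --secret-string '{"public_key":"#PUBLIC_KEY","private_key":"#PRIVATE_KEY"}'
```

Reference this secret in the Secret Store step of the integration.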

    You can now integrate your MongoDB Portal resources.

Integrate MongoDB Atlas Portal

    Mongo Atlas Portal tile
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 11, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalogarrow-up-right tab, click Mongo Atlas Portal. The Connect Integration page appears.

    2. Under Discovery, click one or both resource types to sync with Apono.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage access flows to these resources.

    1. Click Next. The Apono connector section expands.

    2. From the dropdown menu, select a connector.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, Kubernetes).

    1. Click Next. The Integration Config section expands.

    2. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    3. Click Next. The Secret Store section expands.

4. Associate the secret or credentials from step 2 of the previous section.

    5. Click Next. The Get more with Apono section expands.

    6. Define the Get more with Apono settings.

      Setting
      Description
    7. Click Confirm.

💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

    Now that you have completed this integration, you can create access flows that grant permission to your MongoDB Atlas UI Organizations and Projects.


Multiple clusters (deep discovery)

    Apono provides enhanced integration capabilities with MongoDB Atlas Portal, permitting the discovery and management of multiple clusters simultaneously.

    To discover multiple clusters in an Organization, Apono creates a Sub Integration for every discovered cluster, with its own Databases, Documents, and Roles.

    circle-exclamation

    Deep discovery has the following limitations:

    • Deep discovery currently supports only AWS and Azure secret stores.

    • All Apono connectors must have proper network access to their MongoDB Atlas clusters.

Prerequisites

    Item
    Description

    MongoDB Atlas Account

    MongoDB Atlas account with organization-level access

    Apono Connector

    On-prem connection serving as a bridge between a MongoDB Atlas instance and Apono:

    Atlas Command Line Interface (Atlas CLI)

    for provisioning and managing Atlas database deployments from the terminal

    MongoDB Atlas Info

    Information for the MongoDB Atlas UI resources to be integrated:

    • Cluster name

    • Organization ID

Create an API key

    You must create an API key with the Organization Owner role for the Apono connector.

    Follow these steps to create the API key:

    1. In the Atlas CLI, create the API key. The following command will return the public and private API keys in the response.

    circle-exclamation

    Be sure to replace <ORGANIZATION_ID> with the organization ID of the MongoDB Atlas UI to integrate.

    1. Using the keys from the previous step, create a secret for the MongoDB Atlas UI instance.

    circle-exclamation

    Only AWS Secret Store and Azure Vault are supported for this integration at this time.

Integrate MongoDB Atlas Portal

    Mongo Atlas Portal tile
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 12, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to complete the integration:

    1. On the Catalogarrow-up-right tab, click Mongo Atlas Portal integration. The Connect Integration page appears.

    2. Under Discovery, click one or both resource types to sync with Apono.

    3. Select one or several sub integrations:

      1. Under Connect Sub Integration, select Cluster and any child resource.

      2. (Optional) Select one or more additional sub integrations.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage access flows to these resources.

    1. Click Next. The Apono connector section expands.

    2. From the dropdown menu, select a connector.

    circle-check

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating a connector (AWS, Azure, GCP, Kubernetes).

    1. Click Next. The Integration Config section expands.

    2. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    3. Click Next. The Secret Store section expands.

4. Associate the secret or credentials from step 2 of the previous section.

    5. Click Next. The Get more with Apono section expands.

    6. Define the Get more with Apono settings.

      Setting
      Description
    7. Click Confirm to complete the setup.

💡 Are you integrating with Apono using Terraform?

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

Refer to Integration Config Metadataarrow-up-right for more details about the schema definition.

Tag the MongoDB Atlas cluster

    Follow these steps to tag the cluster:

    1. In your MongoDB Atlas cluster, navigate to the Clusters or Overview page to manage your tagsarrow-up-right.

    2. For clusters in different networks or VPCs, tag each cluster with the Apono connector ID:

      1. Enter apono-connector-id for the Key.

      2. Enter the ID of the Apono connector in the cluster's network for the Value.

    circle-exclamation

Each network or VPC hosting a cluster must have a unique Apono connector.

    1. Tag each cluster for the type of Apono connection.

Standard connection

    No additional configuration needed.

Private connection
    1. Enter apono-connection-type for the Key.

    2. Enter Private for the Value.

Private endpoint connection
    1. Enter apono-connection-type for the Key.

    2. Enter PrivateEndpoint for the Value.

    3. Enter apono_private_endpoint_id for the Key.

    4. Enter the private endpoint ID for the Value.

    Now that you have completed this integration, you can create access flows that grant permission to your MongoDB Atlas UI Organizations and Projects.

    single cluster
    multiple clusters
    - rolearn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/apono-k8s-access
      username: "{{SessionNameRaw}}"
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "eks:DescribeCluster",
                "Resource": "arn:aws:eks:*:<AWS_ACCOUNT_ID>:cluster/*"
            }
        ]
    }
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "Statement1",
                "Effect": "Allow",
                "Principal": {
                    "AWS": "*"
                },
                "Action": "sts:AssumeRole",
                "Condition": {
                    "StringEqualsIgnoreCase": {
                        "sts:RoleSessionName": "${SAML:sub}"
                    },
                    "ArnLike": {
                        "aws:PrincipalArn": [
                            "arn:aws:iam::<AWS_ACCOUNT_ID>:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_*",
                            "arn:aws:iam::<AWS_ACCOUNT_ID>:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_*"
                        ]
                    }
                }
            }
        ]
    }
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:saml-provider/<SAML_PROVIDER>"
                },
                "Action": "sts:AssumeRoleWithSAML",
                "Condition": {
                    "StringEquals": {
                        "SAML:aud": "https://signin.aws.amazon.com/saml"
                    }
                }
            }
        ]
    }
    aws sts assume-role \
      --role-arn arn:aws:iam::<ACCOUNT_ID>:role/apono-k8s-access \
      --role-session-name <EMAIL> \
      --duration-seconds 3600
    [profile apono-k8s-access]
    role_arn = arn:aws:iam::<ACCOUNT_ID>:role/apono-k8s-access
    role_session_name = <EMAIL>
    source_profile = default
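Once the apono-k8s-access profile above resolves, kubectl can be pointed at the cluster through it. This is a hedged sketch; the cluster name and region are placeholders:

```shell
# Write a kubeconfig entry that authenticates via the
# apono-k8s-access profile defined in ~/.aws/config.
aws eks update-kubeconfig \
  --name <CLUSTER_NAME> \
  --region us-east-1 \
  --profile apono-k8s-access

# Verify access; permitted actions depend on the Apono grant.
kubectl get pods --all-namespaces
```

The session name in the profile (the user's email) is what EKS maps to the apono:{{SessionName}} username.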
    atlas organizations apiKeys create --role ORG_OWNER --desc apono_connector --orgId <ORGANIZATION_ID>
    "public_key": "#PUBLIC_KEY"
    "private_key": "#PRIVATE_KEY"
    atlas organizations apiKeys create --role ORG_OWNER --desc apono_connector --orgId <ORGANIZATION_ID>
    "public_key": "#PUBLIC_KEY"
    "private_key": "#PRIVATE_KEY"

(Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

(Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

    Azure Subscription Id

    (Optional) Unique identifier assigned to an Azure subscription

    Azure Primary Domain

    (Optional) Initial domain assigned to your tenant

    Disable Locks

    (Optional) Allows Apono to temporarily remove locks from Azure resources in order to grant or revoke access, then automatically restore the locks after the operation

    Learn more about Disable Locks.

    Custom Access Details

(Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

(Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

(Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.




    Organization ID

    ID of the organization of the MongoDB Atlas UI instance to connect

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner


    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must also be defined.

    Integrate an AWS account or organization

    Learn how to complete an AWS integration in the Apono UI

    Apono offers AWS users a simple way to centralize cloud management through our platform. Through a single integration, you can manage multiple AWS services across various accounts and organizations.


    hashtag
    Integrate an AWS account

    AWS RDS MySQL

    hashtag
    In this article

    Amazon RDS for MySQL is a managed, open-source relational database service in the cloud. Through the AWS RDS MySQL integration, you can manage just-in-time access to your RDS MySQL databases, tables, and roles with Apono.

    Apono Connector for AWS

    How to install a Connector on an AWS account to integrate an AWS Account or Organization with Apono

    hashtag
    Overview

    To integrate with AWS and start managing JIT access to AWS cloud resources, you must first install a connector in your AWS environment.

    The connector should match the level of access management you want to achieve with Apono: on a single account or on the entire organization.

    hashtag
    Prerequisites
    • Apono connector installed in your AWS account

    • To sync and manage access to EC2 servers, make sure you add the AmazonSSMManagedInstanceCore policy to the connector's IAM role

    hashtag
    Integration

    AWS tile
    circle-check

    You can also use the steps below to integrate with Apono using Terraform.

    In step 10, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to integrate Apono with your AWS account:

    1. On the Catalogarrow-up-right tab, click AWS. The Connect Integrations Group page appears.

    2. Under Discovery, click Amazon Account.

    3. Click one or more resource types to sync with Apono.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage Access Flows to these resources.

    4. Click Next. The Apono connector section expands.

    5. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    circle-info

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating an Apono connector.

    6. Click Next. The Integration Config section expands.

    7. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    8. Click Next. The Get more with Apono section expands.

    9. Define the Get more with Apono settings.

      Setting
      Description

    10. Click Confirm.

    chevron-right💡Are you integrating with Apono using Terraform?hashtag

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

    Refer to the Apono Terraform provider documentation for more details about the schema definition.

    After connecting your AWS account to Apono, you will be redirected to the Connected tab to view your integrations. The new AWS integration will initialize once it completes its first data fetch. Upon completion, the integration will be marked Active.

    Now that you have completed this integration, you can create access flows that grant permission to AWS IAM resources, such as AWS Roles.


    hashtag
    Integrate an AWS organization

    hashtag
    Prerequisites

    • Apono connector installed in your AWS management account OR a connector with delegate permissions

    • To sync and manage access to EC2 servers, make sure you add the AmazonSSMManagedInstanceCore policy to the connector's IAM role

    hashtag
    Integration

    AWS tile
    circle-info

    You can also use the steps below to integrate with Apono using Terraform.

    In step 11, instead of clicking Confirm, follow the Are you integrating with Apono using Terraform? guidance.

    Follow these steps to integrate Apono with your AWS organization:

    1. On the Catalogarrow-up-right tab, click AWS. The Connect Integrations Group page appears.

    2. Under Discovery, click Amazon Organization.

    3. Click one or more resource types to sync with Apono.

    circle-info

    Apono automatically discovers and syncs all the instances in the environment. After syncing, you can manage access flows to these resources.

    4. Select the Permission Boundary resource to allow Apono to temporarily restrict overprivileged access.

    circle-check

    To learn more about how to manage overprivileged access, read about Access Discovery.

    5. Click Next. The Apono connector section expands.

    6. From the dropdown menu, select a connector. Choosing a connector links Apono to all the services available on the account where the connector is located.

    circle-info

    If the desired connector is not listed, click + Add new connector and follow the instructions for creating an Apono connector.

    7. Click Next. The Integration Config section expands.

    8. Define the Integration Config settings.

      Setting
      Description

      Integration Name

      Unique, alphanumeric, user-friendly name used to identify this integration when constructing an access flow

    9. Click Next. The Get more with Apono section expands.

    10. Define the Get more with Apono settings.

      Setting
      Description

    11. Click Confirm.

    chevron-right💡Are you integrating with Apono using Terraform?hashtag

    If you want to integrate with Apono using Terraform, follow these steps instead of clicking Confirm:

    1. At the top of the screen, click View as Code. A modal appears with the completed Terraform configuration code.

    2. Click to copy the code.

    3. Make any additional edits.

    4. Deploy the code in your Terraform.

    Refer to the Apono Terraform provider documentation for more details about the schema definition.

    After connecting your AWS organization to Apono, you will be redirected to the Connected tab to view your integrations. The new AWS integration will initialize once it completes its first data fetch. Upon completion, the integration will be marked Active.

    hashtag
    Enable multi-region resource discovery in Apono

    Apono leverages AWS Resource Explorer for multi-region scans for your AWS Organization integration. Apono uses this organization-level configuration to automatically deploy local indexes and aggregate them into a single searchable view.

    This configuration provides:

    • A centralized aggregator index for organization-wide search

    • Automated creation and maintenance of local indexes

    • Consistent visibility across teams, regions, and environments

    • Less manual setup and fewer cross-account visibility gaps

    Prerequisites

    Item
    Description

    AWS Organization

    An AWS Organization integrated with Apono.

    All organizational units (OUs) or accounts you plan to include as part of the target must be structured within the AWS organization.

    IAM user or role in the management account

    A user or role used to run Quick Setup in the management account.

    This user or role must be able to complete these tasks:

    • Enable trusted access in AWS Organizations

    • Configure Resource Explorer

    Service Control Policy (SCP)

    SCPs must not deny CloudFormation in any target account or region. In particular, SCPs must not explicitly deny:

    • cloudformation:CreateStack

    Enable trusted access for Resource Explorer

    Follow these steps to enable trusted access:

    1. From your Management account, open AWS Resource Explorer.

    2. From the navigation, click Settings. The Settings page appears.

    3. In the multi-account/organization section, follow the prompt to Enable trusted access.

    circle-check

    You can also enable trusted access from AWS Organizations.

    Follow these steps:

    1. From your Management account, open AWS Organizations.

    2. From the navigation, click Services. The Services page appears.

    3. Click AWS Resource Explorer. The AWS Resource Explorer page opens.

    4. If Trusted access is disabled, click Enable trusted access. The Enable trusted access for AWS Resource Explorer pop-up window appears.

    5. Click Show the option to enable trusted access for AWS Resource Explorer without performing additional setup tasks.

    6. Type enable in the text field.

    7. Click Enable trusted access.

    Configure the organization deployment

    Follow these steps to configure the organization deployment:

    1. Open Quick Setup from Systems Manager or Resource Explorer.

    chevron-rightSystems Managerhashtag
    1. Open AWS Systems Manager.

    2. From the navigation, click Change Management Tools > Quick Setup. The AWS Quick Setup page opens.

    3. Click Get started. The Library tab opens.

    4. On the Resource Explorer card, click Create. The Configure Resource Explorer for your Organization page opens.

    chevron-rightResource Explorerhashtag
    1. Open AWS Resource Explorer.

    2. From the navigation, click Settings. The Settings page opens.

    3. Under Multi-account search in Resource Explorer, click Create configuration on Quick Setup. The Configure Resource Explorer for your Organization page opens.

    2. Select the Aggregator Index Region. This region becomes the central location for organization-wide search.

    3. Under Targets, select the accounts that include the resources you want discovered:

      • Entire Organization: (Recommended) Enables complete visibility

      • Specific OUs: Enables scoping deployment

    4. From the regions selector, choose all regions where Resource Explorer should create indexes.

    circle-info

    If a regions selector is not present, all supported regions for the selected targets may be implicitly included.

    5. Under Summary, review the aggregator region, targets, and regions.

    6. Select Create. The Quick Setup will deploy the following:

      • Local indexes in each selected region or account

      • An aggregator index in the Aggregator Region

      • Default views for centralized search

    Verify the deployment

    After the deployment has completed, follow these steps to verify the deployment:

    1. From the Management account, open AWS Resource Explorer.

    2. From the navigation, click Settings. The Settings page opens.

    3. Under Indexes, locate the region set as the aggregator index during the Quick Setup. The region should be denoted as Aggregator.

    4. Spot check a member account:

      1. Log in as or assume the role of a sample member account.

      2. Open AWS Resource Explorer in one region that should have an index to ensure an index exists and is Active.

      3. Open AWS Resource Explorer in one region that should not have an index to confirm an index does not exist.

    circle-info

    If some regions or accounts are missing the index, read The index is missing in some regions or accounts.
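    The spot checks above can also be performed with the AWS CLI, assuming the CLI is configured for the account being inspected (the region and query string below are illustrative):

    ```shell
    # List Resource Explorer indexes for this account; the aggregator
    # region should show Type AGGREGATOR, others Type LOCAL.
    aws resource-explorer-2 list-indexes

    # Run a sample search from the aggregator region to confirm
    # organization-wide results are returned.
    aws resource-explorer-2 search \
      --query-string "resourcetype:ec2:instance" \
      --region us-east-1
    ```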

    Troubleshoot Quick Setup

    chevron-rightQuick Setup fails in some regions.hashtag

    Symptoms

    • Quick Setup shows Failed for some configs.

    • Error text mentions cloudformation:CreateStack (or similar) and an explicit denial in a service control policy.

    Likely Cause

    A Service Control Policy denies CloudFormation in some regions, often using the aws:RequestedRegion condition key. As a result, the deployment succeeds only in the regions allowed by the SCP and fails in all other regions.

    Solution

    Follow these steps:

    1. From the Admin account, open AWS Organizations.

    2. From the navigation, click Policies. The Policies page opens.

    3. Under Service control policies, examine SCPs attached to the affected organizational unit or account for "Effect": "Deny" statements that mention cloudformation:* actions.
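    To illustrate the pattern to look for, an SCP statement like the following (hypothetical) example denies CloudFormation outside an allow-listed set of regions, which would make Quick Setup fail everywhere else:

    ```json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DenyCloudFormationOutsideAllowedRegions",
          "Effect": "Deny",
          "Action": "cloudformation:*",
          "Resource": "*",
          "Condition": {
            "StringNotEquals": {
              "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
            }
          }
        }
      ]
    }
    ```

    The region list here is a placeholder; the key symptom is a Deny on cloudformation actions conditioned on aws:RequestedRegion.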

    chevron-rightThe index is missing in some regions or accounts.hashtag

    Symptoms

    • Some accounts or regions have no index.

    • Quick Setup shows partial success.

    Possible Causes

    • The region was not included in the Quick Setup region selection.

    • The account or organizational unit was not part of the Quick Setup target scope.

    • CloudFormation has been denied by SCP in that region.

    Solution

    Follow these steps:

    1. Review the Targets and Regions (if applicable) selected when you configured the organization deployment.

    2. Rerun Quick Setup for the relevant accounts or regions.

    circle-check

    If CloudFormation must stay blocked, you can manually create indexes.

    chevron-rightThe aggregator index is missing from the Management account.hashtag

    Symptoms

    • In the Management account, in the chosen Aggregator Region:

      • The index exists but is not marked as Aggregator.

      • The index does not exist.

    • The organization-wide view does not show everything.

    Possible Causes

    • The Management account is not in one of the Quick Setup targets, such as the selected organizational unit.

    • AWS created aggregator indexes only in member accounts based on your config.

    • The index was manually created as Local, not Aggregator.

    Solution

    Follow these steps:

    1. In the Management account, in the Aggregator Region, ensure an index exists.

    2. In the console, change the index to Aggregator.

    circle-check

    If the index cannot be changed to Aggregator, manually recreate the index as an Aggregator.

    3. Create the organization-wide view in the specific account or region.
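    The index type can also be checked and promoted from the command line. This is a sketch using the AWS Resource Explorer CLI; the ARN, account ID, and region are placeholder assumptions:

    ```shell
    # List indexes visible to this account; the aggregator region should
    # report Type AGGREGATOR.
    aws resource-explorer-2 list-indexes

    # Promote an existing local index to the aggregator.
    aws resource-explorer-2 update-index-type \
      --arn arn:aws:resource-explorer-2:us-east-1:111122223333:index/EXAMPLE-ID \
      --type AGGREGATOR
    ```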

    chevron-rightThe view that was created in Resource Explorer is empty.hashtag

    After enabling Resource Explorer, it can take up to 36 hours for all supported resources across all regions to be fully indexed. Read more herearrow-up-right.

    Now that you have completed this integration, you can create access flows that grant permission to AWS IAM resources, such as AWS Roles.


    hashtag
    Troubleshooting

    Please refer to our troubleshooting guidearrow-up-right if you encounter errors while integrating.

    • Database

  • Table

  • Role

    hashtag
    Prerequisites

    • If you already have an AWS Apono connector, make sure the connector's minimum version is 1.5.3.

    • If you do not yet have an AWS Apono connector, install one by following the Apono Connector for AWS guide.

    hashtag
    Create AWS RDS MySQL Integration

    hashtag
    Generate Credentials

    Create user and grant permissions:

    circle-exclamation

    You can use only one authentication option on the RDS instance at a time.

    circle-info

    (MySQL 8.0+) Grant the service account the authority to manage other roles. This enables Apono to create, alter, and drop roles. However, this role does not inherently grant specific database access permissions.

    chevron-rightPassword Authenticationhashtag

    With password authentication, your database performs all administration of user accounts. You create users with SQL statements such as CREATE USER, with the appropriate clause required by the DB engine for specifying passwords.

    1. Get your AWS RDS DB details.

    2. Connect to the RDS MySQL instance.

    3. Create a username for the Apono connector. The username is arbitrary and can be set according to your preference.

    4. Replace USER_NAME and PASSWORD with your desired credentials.

    5. Grant the necessary permissions to the user.

      • SHOW DATABASES: Allows the user to view all databases in the RDS instance.

      • CREATE USER: Grants the ability to create new users.

      • UPDATE: Permits updates in the MySQL system database, including user privileges.

      • PROCESS: Allows viewing the server's process list, including all executing queries.

    6. (MySQL 8.0 and above) Grant the user the authority to manage roles by giving them the ROLE_ADMIN privilege. Starting with MySQL 8.0, the ROLE_ADMIN privilege is required to create roles, assign permissions to roles, and grant or revoke roles to or from users. This privilege does not inherently grant any specific database access permissions.
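    The steps above can be sketched as SQL statements run against the RDS instance. The username apono_connector and the password are illustrative placeholders; adjust them and the host pattern ('%') to your environment and security policy:

    ```sql
    -- Create the service account for the Apono connector
    -- (replace the name and password with your own values).
    CREATE USER 'apono_connector'@'%' IDENTIFIED BY 'STRONG_PASSWORD_HERE';

    -- Grant the permissions described above.
    GRANT SHOW DATABASES ON *.* TO 'apono_connector'@'%';
    GRANT CREATE USER ON *.* TO 'apono_connector'@'%';
    GRANT UPDATE ON mysql.* TO 'apono_connector'@'%';
    GRANT PROCESS ON *.* TO 'apono_connector'@'%';

    -- MySQL 8.0 and above: allow the account to manage roles.
    GRANT ROLE_ADMIN ON *.* TO 'apono_connector'@'%';

    FLUSH PRIVILEGES;
    ```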

    chevron-rightIAM Authenticationhashtag

    You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token.

    1. Get your AWS RDS DB details.
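    With IAM authentication, connections use a short-lived token in place of a password. As an illustrative sketch (the hostname, username, region, and CA bundle path are placeholder assumptions), a token can be generated with the AWS CLI and passed to the mysql client:

    ```shell
    # Generate a short-lived authentication token for the database user.
    TOKEN=$(aws rds generate-db-auth-token \
      --hostname mydb.example.us-east-1.rds.amazonaws.com \
      --port 3306 \
      --username apono_connector \
      --region us-east-1)

    # Connect using the token as the password; SSL is required for IAM auth.
    mysql --host=mydb.example.us-east-1.rds.amazonaws.com --port=3306 \
      --user=apono_connector --password="$TOKEN" \
      --ssl-ca=global-bundle.pem --enable-cleartext-plugin
    ```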


    hashtag
    Create Integration in Apono

    1. In the Apono admin consolearrow-up-right, go to the Integrations page and click Add Integration in the top-left, or select the Catalog blade.

    2. In the Catalog page search for and select AWS RDS MySQL.

    3. In Discovery step, select one or multiple AWS RDS MySQL resource types for Apono to discover.

    4. In Apono connector step, select the connector with the required permissions to be used with your AWS RDS MySQL.

    5. In Integration config step, provide the following information about your AWS RDS MySQL:

    Variable
    Description
    Required

    Integration Name

    The integration name.

    Yes

    Auth Type

    The authentication method for connecting to an AWS RDS instance, with options for password (username and password) or iam (IAM-based authentication).

    Yes

    Region

    AWS region where the RDS instance is located.

    Yes

    Instance ID

    The unique identifier of the AWS RDS instance.

    Yes

    6. In the Secret Storearrow-up-right step, provide the connector credentials using one of the following secret store options:

      • AWSarrow-up-right

      • KUBERNETESarrow-up-right

    circle-info

    When using IAM authentication, a secret does not need to be created. The service account and its permissions are managed through IAM roles and policies, and the service account is used to authenticate to the MySQL instance instead of a secret.

    For the AWS RDS MySQL integration, use the following secret format:

    username: <The database username>
    password: <The user password>
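    For example, a secret stored for this integration in a secret manager such as AWS Secrets Manager would contain these two keys (the values shown are placeholders):

    ```json
    {
      "username": "apono_connector",
      "password": "STRONG_PASSWORD_HERE"
    }
    ```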

    7. (Optional) In the Get more with Apono step, you can set up the following:

    Setting
    Description

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Defines the number of days after access has been revoked that the user should be deleted

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions will be displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must also be defined.

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

    hashtag
    Next Steps

    • To manage access to a single AWS account, install a connector on that account. Follow this guide.

  • To manage access to all the accounts in the AWS organization:

    • Install a connector on the management account. Follow this guide. OR

    • Install a connector in any account with ECS or EKS and give it assumable permissions to the management account. Follow this guide.

    circle-info

    What's a connector? What makes it so secure?

    The Apono Connector is an on-prem connection that can be used to connect resources to Apono and separate the Apono web app from the environment for maximal security.

    Read more about the recommended AWS Installation Architecture.

    First, decide if you want to integrate Apono with a specific AWS Account or with the entire Organization (containing multiple Accounts).

    Follow the guides below depending on your selection.


    hashtag
    AWS Account connector

    hashtag
    Prerequisites

    • Administrator permissions to the AWS account you want to connect.

    • VPC with outbound connectivity

    hashtag
    1. In Apono

    1. Login to the Apono platform

    2. Go to the Apono Integrations page

    3. From the Catalog, pick AWS

    4. Pick Account

    5. Install a new connector in AWS. Read more in the Apono Connector for AWS guide.

    6. Choose the desired deployment method

    hashtag
    2. In CloudFormation

    1. Choose Cloudformation

    2. Click "Open Cloud Formation"

    3. Sign in to your AWS user and click Next

    1. Within the AWS create stack page, scroll down

    2. Make sure you pick at least one Subnet and one VPC from the dropdown lists

    3. Tick the acknowledge box and then select Create Stack

    Apono integrates with AWS natively, using AWS CloudFormation as a standard mechanism to deploy all required configurations, including a cross-account role with read permission, an SNS notification, and the Apono Connector, which runs on AWS ECS on Fargate.


    hashtag
    AWS Organization connector on the Management account

    hashtag
    Prerequisites

    • Administrator permissions to the AWS management account in the Organization.

    • VPC with outbound connectivity.

    hashtag
    1. In Apono

    1. Login to the Apono platform

    2. Go to the Apono Integrations page

    3. From the Catalog, pick AWS

    4. Pick Organization

    5. Choose Cloudformation

    hashtag
    2. In CloudFormation

    1. Click "Open Cloud Formation"

    2. Sign in to your AWS user and click Next

    1. The new stack should be installed in the management account (which manages the organization's Identity Center)

    2. Within the AWS create stack page, scroll down

    3. Make sure you pick at least one Subnet and one VPC from the dropdown lists

    4. Tick the acknowledge box and then select Create Stack

    Apono integrates with AWS natively, using AWS CloudFormation as a standard mechanism to deploy all required configurations, including a cross-account role with read permission, an SNS notification, and the Apono Connector, which runs on AWS ECS on Fargate.

    1. Verify that "trusted access" is activated for your organization. Read more herearrow-up-right.


    hashtag
    Connector with IAM role permissions for AWS Organization management

    You can install a connector with assumable permissions to the AWS Management account using either AWS Elastic Container Service (ECS) or Elastic Kubernetes Service (EKS) in CloudFormation.

    Once installed, the connector syncs data from cloud applications and enables you to manage access permissions through access flows within Amazon ECS or EKS.

    hashtag
    Prerequisites

    Item
    Description

    AdministratorAccess Policy

    AWS role with policy providing full access to AWS services and resources

    Full AWS access is not granted to Apono.

    OrganizationID

    Unique identifier of the AWS Organization that will be connected via the integration (ex. o-k012345a67)

    Follow these steps to find your OrganizationID:

    1. In your AWS console settings, click Organization. The AWS accounts page appears.

    2. In the left navigation, click Settings. The Settings page appears.

    OrganizationUnitID

    Root ID for the AWS Organization Unit that will be connected via the integration (ex. r-1a2b)

    Follow these steps to obtain your OrganizationUnitID:

    1. In your IAM Identity Center, expand Multi-account permissions.

    2. Click AWS accounts. The AWS accounts page appears.

    VPC

    Virtual Private Cloud (VPC) with outbound connectivity

    Subnet

    One or more Subnet IDs within the selected VPC where the connector resources will run

    Permission

    Full access (Manage IAM) permissions to enable the connector to create and manage the required IAM resources during deployment
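    If you prefer the command line, the OrganizationID and root OrganizationUnitID described in the prerequisites above can also be retrieved with the AWS CLI, run from the management account (shown as a sketch; output formats may vary with CLI version):

    ```shell
    # Organization ID (e.g. o-k012345a67)
    aws organizations describe-organization \
      --query 'Organization.Id' --output text

    # Root OU ID (e.g. r-1a2b)
    aws organizations list-roots \
      --query 'Roots[0].Id' --output text
    ```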

    hashtag
    Install the connector

    Installing the connector in Apono

    Follow these steps to enable the connector to manage the entire AWS Organization:

    1. On the Connectorsarrow-up-right page, click Install Connector. The Install Connector page appears.

    2. Under Select connector installation strategy, click Cloud installation > AWS. The permission options appear.

    3. Click No, Just Install the Connector. The installation methods appear.

    triangle-exclamation

    Do not select Install and Connect AWS Account. This option creates IAM roles in the member account that will conflict with the CloudFormation roles deployed in the Management account, causing the installation to fail.

    4. Click the CloudFormation (ECS) or CloudFormation (EKS) installation method.

    circle-check

    You can also install the connector using Terraform.

    5. Finish installing the connectorarrow-up-right in CloudFormation for your AWS Account.

    6. Once the connector is installed, copy the following values from CloudFormation.

    Key
    Location

    AponoConnectorRoleArn

    On the Outputs tab, copy the Value for the AponoConnectorRoleArn.

    AponoConnectorId

    On the Parameters tab, copy the Value for the AponoConnectorId key.

    7. Open CloudFormationarrow-up-right with your Management account. The Quick create stack page appears.

    8. Under Parameters, enter values for the following fields:

      1. AponoConnectorId: Value copied in step 6.

      2. ConnectorRoleArn: Value copied in step 6.

      3. OrganizationId: Organization ID copied during the prerequisite steps.

      4. OrganizationUnitId: Root ID copied during the prerequisite steps.

      5. From the Permissions dropdown menu, select Full-Access (Manage IAM).

    9. Under Capabilities, select I acknowledge that AWS CloudFormation might create IAM resources with custom names.

    10. Click Create stack.

    11. (Optional) On the Outputs tab, copy the Value for the ManagementAccountRoleArnOutput.

    circle-info

    When integrating an AWS Organization, you can paste the ManagementAccountRoleArnOutput value in the Integration Config settings to use the connector.

    1. On the Connectorsarrow-up-right page, verify that the connector has been deployed.

    2. (Optional) Follow the steps to integrate an AWS organization.
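The two values copied above can also be read from the stack with the AWS CLI. This is a sketch, not part of the official procedure; `apono-connector` is an assumed stack name — substitute the name of the stack you actually deployed.

```shell
# Assumed stack name; replace with your Apono connector stack's actual name.
STACK_NAME="apono-connector"

if command -v aws >/dev/null; then
  # Output value: AponoConnectorRoleArn
  aws cloudformation describe-stacks \
    --stack-name "$STACK_NAME" \
    --query "Stacks[0].Outputs[?OutputKey=='AponoConnectorRoleArn'].OutputValue" \
    --output text

  # Parameter value: AponoConnectorId
  aws cloudformation describe-stacks \
    --stack-name "$STACK_NAME" \
    --query "Stacks[0].Parameters[?ParameterKey=='AponoConnectorId'].ParameterValue" \
    --output text
fi
```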

    Apono Integration Secret

    Many integrations require granting the Apono connector credentials so that it can authenticate and connect. You can create secrets in different secret managers (such as AWS, GCP, or Azure) and specify them in the integration's secret store. This allows the connector to safely and securely retrieve its credentials to connect to the desired integration resources.

    Apono supports the following secret managers:

    Apono Secret

    Use Apono to store your connector credentials for the desired integration resources.

    triangle-exclamation

    Using the Apono secret store option is not recommended for production environments.

    We suggest creating a secret in one of the supported cloud providers' secret managers or in a Kubernetes secret. Storing secrets in a secret manager enables Apono to sync and provision cloud resources without the need to store credentials for a specific environment in Apono.


    Set Credentials in Apono Secret

    From your integration configuration page, expand Secret Store, click the APONO tab, and enter the required credential information for the integration.

    Kubernetes Secret

    Use Kubernetes secret to store your connector credentials for the desired integration resources.

    Prerequisites

    AWS Secret

    Use AWS Secret Manager to store your connector credentials for the desired integration resources.

    Prerequisites

    • AWS role or user with the SecretsManagerReadWrite attached policy

    Azure Secret

    Use Azure Key Vault to store your connector credentials for the desired integration resources.

    Prerequisites

    Azure user with the following permission on the Key Vault:

    GCP Secret

    Use GCP Secret Manager to store your connector credentials for the desired integration resources.

    Prerequisites

    • GCP user with the Secret Manager Admin (roles/secretmanager.admin) role

    HashiCorp Secret

    Use HashiCorp Vault to store your connector credentials for the desired integration resources.

    Prerequisites

    • Required Apono connector version: 1.6.6

    aws rds describe-db-instances \
      --filters "Name=engine,Values=mysql" \
      --query "*[].[Endpoint.Address,Endpoint.Port]"
    
    mysql -h [Endpoint.Address] -P [Endpoint.Port] -u USER_NAME -p

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.

    Check whether an SCP denies cloudformation:* or specific CloudFormation actions.
  • Fix the issues through one of the following options:

    1. Add the required regions to the allowlist in aws:RequestedRegion.

    2. Exclude CloudFormation from the deny list. For example, add cloudformation:* to NotAction.

    3. Temporarily relax or detach the SCP, re-run Quick Setup, then restore the SCP.
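As an illustration of option 2, a region-restriction SCP deny statement can exempt CloudFormation through NotAction. The statement below is a hypothetical example — the Sid, file name, and region allowlist are assumptions; merge the cloudformation:* entry into your organization's existing SCP rather than replacing it wholesale.

```shell
# Writes a hypothetical region-restriction SCP that exempts CloudFormation.
# The region allowlist below is an example; use your organization's regions.
cat > scp-region-restriction.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideAllowedRegions",
      "Effect": "Deny",
      "NotAction": ["cloudformation:*"],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
        }
      }
    }
  ]
}
EOF
```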

  • Use Systems Manager Quick Setup
  • Use AWS Resource Access Manager (RAM)

  • View CloudFormation, SSM, and Resource Explorer status

  • Option A

    Use a role or user with the AWS-managed AdministratorAccess policy in the Management account to prevent hidden blocking conditions.

    Option B

    Create a role in the Management account (such as ResourceExplorerAdmin) with a custom managed policy similar to the following example.

    • SCPs must not deny CloudFormation actions (for example, cloudformation:UpdateStack or cloudformation:*).

  • Region-restriction SCPs (aws:RequestedRegion) must adhere to one of the following:

    • Include all required regions in the allowlist.

    • Explicitly exempt CloudFormation from the deny statement by adding cloudformation:* to NotAction.

  • IMPORTANT: Failure to meet these SCP requirements will prevent Quick Setup from deploying successfully in regions where the SCP denies CloudFormation.

    Region

    Region in which the organization runs

    AWS Profile Name

    (Optional) Name of the AWS profile. By default, Apono sets this value to apono.

    Credential Rotation

    (Optional) Number of days after which the database credentials must be rotated. Learn more about the Credentials Rotation Policy.

    User cleanup after access is revoked (in days)

    (Optional) Number of days after access has been revoked before the user is deleted.

    Learn more about Periodic User Cleanup & Deletion.

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions is displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

    (Optional) Fallback approver if no resource owner is found

    Region

    Region in which the organization runs

    AWS SSO Region

    Region for which your single sign-on is configured

    SSO Portal

    Single sign-on URLarrow-up-right. This is required for Apono to generate a sign-in link for end users to use their granted access.

    Management Account Role ARN

    (Optional) ARN of the role to assume in the management account (the ManagementAccountRoleArnOutput value copied during installation)

    Exclude Organization Unit IDs

    (Optional) Comma-separated list of organizational unit IDs to exclude. Example: ou-aaa1-1111,ou-bbb2-2222

    Exclude Account IDs

    (Optional) Comma-separated list of account IDs to exclude. Example: 766554433221,766554433222,766554433223

    Custom Access Details

    (Optional) Instructions explaining how to access this integration's resources. Upon accessing an integration, a message with these instructions is displayed to end users in the User Portal. The message may include up to 400 characters. To view the message as it appears to end users, click Preview.

    Integration Owner

    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    Resource Owner

    (Optional) Group or role responsible for managing access approvals or rejections for the resource. Follow these steps to define one or several resource owners:

    1. Enter a Key name. This value is the name of the tag created in your cloud environment.

    2. From the Attribute dropdown menu, select an attribute under the IdP platform to which the key name is associated. Apono will use the value associated with the key (tag) to identify the resource owner. When you update the membership of the group or role in your IdP platform, this change is also reflected in Apono.

    NOTE: When this setting is defined, an Integration Owner must also be defined.


    (Optional) Fallback approver if no resource owner is found. Follow these steps to define one or several integration owners:

    1. From the Attribute dropdown menu, select User or Group under the relevant identity provider (IdP) platform.

    2. From the Value dropdown menu, select one or multiple users or groups.

    NOTE: When Resource Owner is defined, an Integration Owner must be defined.

    1. Enable IAM database authentication.

    2. Connect to RDS MySQL.

    3. Create a username for the Apono connector. The username is arbitrary and can be set according to your preference.

    4. Replace USER_NAME with your desired credentials.

    5. Grant the necessary permissions to the user.

    • SHOW DATABASES: Allows the user to view all databases in the RDS instance.

    • CREATE USER: Grants the ability to create new users.

    • UPDATE: Permits updates in the MySQL system database, including user privileges.

    • PROCESS: Allows viewing the server's process list, including all executing queries.

    6. Add this policy to the connector role:

    7. To allow a user or role to connect to your DB instance, create the following IAM policy and attach it to your identity center permissions set or role.

    1. Connect to RDS MySQL.

    2. Create a username for the Apono connector. The username is arbitrary and can be set according to your preference.

    3. Replace USER_NAME and PASSWORD with your desired credentials.

    4. Grant the necessary permissions to the user.

    • SHOW DATABASES: Allows the user to view all databases in the RDS instance.

    • CREATE USER: Grants the ability to create new users.

    • UPDATE: Permits updates in the MySQL system database, including user privileges.

    • PROCESS: Allows viewing the server's process list, including all executing queries.

    5. (MySQL 8.0 and above) Grant the user the authority to manage roles by giving them the ROLE_ADMIN privilege. Starting with MySQL 8.0, the ROLE_ADMIN privilege is required to create roles, assign permissions to roles, and grant or revoke roles to or from users. This privilege does not inherently grant any specific database access permissions.

    IAM Authentication

    You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token.

    1. Get your AWS RDS DB details.

    2. Enable IAM database authentication.

    3. Connect to RDS MySQL.

    4. Create a username for the Apono connector. The username is arbitrary and can be set according to your preference.

    5. Replace USER_NAME with your desired credentials.

    6. Grant the necessary permissions to the user.

    • SHOW DATABASES: Allows the user to view all databases in the RDS instance.

    • CREATE USER: Grants the ability to create new users.

    • UPDATE: Permits updates in the MySQL system database, including user privileges.

    • PROCESS: Allows viewing the server's process list, including all executing queries.

    7. Add this policy to the connector role:

    8. To allow a user or role to connect to your DB instance, create the following IAM policy and attach it to your identity center permissions set or role.
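End to end, connecting with IAM authentication can be sketched as follows. The endpoint, region, and user name are placeholder assumptions; the token returned by aws rds generate-db-auth-token is valid for 15 minutes and is passed in place of a password.

```shell
# Placeholder connection details; replace with your DB instance's values.
RDSHOST="mydb.example.us-east-1.rds.amazonaws.com"
PORT=3306
DBUSER="USER_NAME"

if command -v aws >/dev/null && command -v mysql >/dev/null; then
  # Request a short-lived authentication token instead of using a password.
  TOKEN="$(aws rds generate-db-auth-token \
    --hostname "$RDSHOST" --port "$PORT" \
    --region us-east-1 --username "$DBUSER")"

  # The token is sent as cleartext, so the connection should use SSL.
  mysql -h "$RDSHOST" -P "$PORT" -u "$DBUSER" \
    --enable-cleartext-plugin --password="$TOKEN"
fi
```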

    1. Copy the following details:

    • Endpoint: The DNS name of the DB instance.

    • Port: The port number on which the DB instance accepts connections.

    2. Connect to the DB instance with your SQL client using the copied details.

    3. Create a user for the Apono connector. Replace USER_NAME and PASSWORD with your desired credentials.

    4. Grant the necessary permissions to the user.

    • SHOW DATABASES: Allows the user to view all databases in the RDS instance.

    • CREATE USER: Grants the ability to create new users.

    • UPDATE: Permits updates in the MySQL system database, including user privileges.

    • PROCESS: Allows viewing the server's process list, including all executing queries.

    5. (MySQL 8.0 and above) Grant the user the authority to manage roles by giving them the ROLE_ADMIN privilege. Starting with MySQL 8.0, the ROLE_ADMIN privilege is required to create roles, assign permissions to roles, and grant or revoke roles to or from users. This privilege does not inherently grant any specific database access permissions.

    IAM Authentication

    You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token.

    1. Enable IAM database authentication

      1. Open the Amazon RDS consolearrow-up-right.

      2. In the navigation pane, choose Databases.

      3. Choose the DB instance that you want to modify.

      4. Make sure that the DB instance is compatible with IAM authentication. Check the compatibility requirements in Region and version availability.

      5. Choose Modify.

      6. In the Database authentication section, choose Password and IAM database authentication to enable IAM database authentication.

      7. Choose Password authentication or Password and Kerberos authentication to disable IAM authentication.

      8. Choose Continue.

      9. To apply the changes immediately, choose Immediately in the Scheduling of modifications section.

      10. Choose Modify DB instance.

    2. Copy the following RDS SQL details:

    • Endpoint: The DNS name of the DB instance.

    • Port: The port number on which the DB instance accepts connections.

    3. Connect to the DB instance with your SQL client using the copied details.

    4. Create a username for the Apono connector. The username is arbitrary and can be set according to your preference.

    5. Replace USER_NAME with your desired credentials.

    6. Grant the necessary permissions to the user.

    • SHOW DATABASES: Allows the user to view all databases in the RDS instance.

    • CREATE USER: Grants the ability to create new users.

    • UPDATE: Permits updates in the MySQL system database, including user privileges.

    • PROCESS: Allows viewing the server's process list, including all executing queries.

    7. Add this policy to the connector role:

    8. To allow a user or role to connect to your DB instance, create the following IAM policy and attach it to your identity center permissions set or role.


    Credentials rotation period (in days)

    Example: 90 (not required)

    User cleanup after access is revoked (in days)

    Example: 90 (not required)

    Create Integration Access Flowarrow-up-right

    Install AWS Account connector on ECS using Terraform.arrow-up-right
    Install AWS Account connector on ECS using CloudFormation.arrow-up-right
    Install AWS Organization connector on ECS using Terraform.arrow-up-right
    Install AWS Organization connector on ECS using CloudFormation.arrow-up-right
    Install AWS Organization connector on EKS using Terraform.arrow-up-right
  • Apono connector installed in your Kubernetes cluster
  • Kubectlarrow-up-right command-line interface


  • Create a secret

    Run the following commands to create a secret from the Kubectl CLI.

    1. Create the secret.

    2. Label the secret with apono-connector-read=true.

    3. Give the Apono connector permissions to the secret:

    Prerequisites

    • Apono connector installed in your Kubernetes cluster

    • Terraformarrow-up-right command-line interface


    Create a secret

    Use the following configuration to create a secret from the Terraform CLI.


    Configure Integration to Use Kubernetes Secret

    From your integration configuration page, expand Secret Store, click the Kubernetes tab, and enter the required secret namespace and name.
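Before wiring the secret into the integration, you can optionally confirm that it exists and carries the read label. A sketch — the secret name and namespace below are placeholder assumptions; use the values from your own deployment.

```shell
# Placeholder values; replace with your secret's actual name and namespace.
SECRET_NAME="my-integration-secret"
NAMESPACE="apono-connector"

if command -v kubectl >/dev/null; then
  # Shows the secret along with its labels; apono-connector-read=true
  # must appear for the Apono connector to be allowed to read it.
  kubectl get secret "$SECRET_NAME" -n "$NAMESPACE" --show-labels
fi
```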

  • AWSarrow-up-right command-line interface


  • Create a secret

    Run the following commands to create a secret from the AWS CLI.

    Prerequisite

    • AWS role or user with SecretsManagerReadWrite attached policy.


    Create a secret

    Follow these steps to create a secret:

    1. From the AWS Secrets Manager console, click Store a new secret. The Choose secret type page appears.

    2. Select Other type of secret.

    3. Under Key/value pairs, enter your secret through one of the following approaches:

      • On the Key/value tab, enter your information in the two fields: key in the first field, value in the second field.

      • On the Plaintext tab, enter your secret in JSON key/value pairs.

    4. Click Next. The Configure secret page appears.

    5. Under Tags, click Add.

    6. In the Key field, enter apono-connector-read.

    7. In the Value field, enter true.
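After the secret is stored, you can optionally confirm that the tag was applied with the AWS CLI. A sketch, assuming a secret named my-integration-secret — substitute the name you chose above.

```shell
# Assumed secret name; replace with the name of the secret you created.
SECRET_NAME="my-integration-secret"

if command -v aws >/dev/null; then
  # Prints the value of the apono-connector-read tag; expect "true".
  aws secretsmanager describe-secret \
    --secret-id "$SECRET_NAME" \
    --query "Tags[?Key=='apono-connector-read'].Value" \
    --output text
fi
```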

    Prerequisites

    • AWS role or user with SecretsManagerReadWrite attached policy

    • Terraformarrow-up-right command-line interface


    Create a secret

    Use the following configuration to create a secret from the Terraform CLI.


    Configure Integration to Use The AWS Secret

    From your Integration configuration page expand Secret Store, click on the AWS tab and enter the required secret region and secret name.

    For an Azure Key Vault configured with the 'Azure role-based access control' permission model, grant the user the Key Vault Secrets Officer role.
  • For an Azure Key Vault configured with the 'access policy' permission model, create and grant the user an access policy with the following secret permissions (Secret Management Operations):

    • Get

    • Set

  • Azurearrow-up-right command-line interface


  • Create a secret

    Run the following commands to create a secret from the Azure CLI.

    Prerequisites

    Azure user with the following permission on the Key Vault:

    • For an Azure Key Vault configured with the 'Azure role-based access control' permission model, grant the user the Key Vault Secrets Officer role.

    • For an Azure Key Vault configured with the 'access policy' permission model, create and grant the user an access policy with the following secret permissions (Secret Management Operations):

      • Get

      • Set


    Create a secret

    Follow these steps to create a secret:

    1. Navigate to your key vault in the Azure portal.

    2. On the Key Vault left-hand sidebar, select Objects then select Secrets.

    3. Select + Generate/Import.

    Prerequisites

    Azure user with the following permission on the Key Vault:

    • For an Azure Key Vault configured with the 'Azure role-based access control' permission model, grant the user the Key Vault Secrets Officer role.

    • For an Azure Key Vault configured with the 'access policy' permission model, create and grant the user an access policy with the following secret permissions (Secret Management Operations):

      • Get

      • Set

    • Terraformarrow-up-right command-line interface


    Create a secret

    Use the following configuration to create a secret from the Terraform CLI.


    Configure Integration to Use The Azure Secret

    From your integration configuration page, expand Secret Store, click the Azure tab, and enter the required secret key vault URL and secret name.

  • Secret Manager APIarrow-up-right (enabled once per project)

  • gcloudarrow-up-right command-line interface


  • Create a secret

    Run the following commands to create a secret from the gcloud CLI.

    Prerequisites

    • GCP user with the Secret Manager Admin (roles/secretmanager.admin) role.

    • Secret Manager APIarrow-up-right (enabled once per project)


    Create a secret

    Follow these steps to create a secret:

    1. Go to the Secret Manager pagearrow-up-right in the Google Cloud console.

    2. On the Secret Manager page, click Create Secret.

    3. On the Create secret page, under Name, enter my-secret.

    Prerequisites

    • GCP user with the Secret Manager Admin (roles/secretmanager.admin) role.

    • Secret Manager APIarrow-up-right (enabled once per project)

    • Terraformarrow-up-right command-line interface


    Create a secret

    Use the following configuration to create a secret from the Terraform CLI.


    Configure Integration to Use The GCP Secret

    From your integration configuration page, expand Secret Store, click the GCP tab, and enter the required secret Project and secret ID.

    Vault command-linearrow-up-right

  • HashiCorp Vault token

    • Create token using:

      • token create commandarrow-up-right


  • Create Secret in HashiCorp Vault

    You can use one of the following methods to create a secret in HashiCorp Vault to use in your integration.

    Enable Secret Engine

    If you did not set the VAULT_ADDR, VAULT_NAMESPACE, and VAULT_TOKEN environment variables, refer to the steps in the Create a Vault Cluster on HCParrow-up-right tutorial.

    1. Verify that the VAULT_NAMESPACE environment variable is set to admin.

      If not, be sure to set it before you continue.

    2. Enable key/value v2 secrets engine (kv-v2) at secret/.

    Create New Secret

    1. Store api-key with value ABC0DEFG9876 at the path secret/test/webapp.

      Example output:

    2. To verify, read back the secret at secret/test/webapp.
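The CLI flow above can be sketched end to end as follows. This is a sketch, assuming VAULT_ADDR, VAULT_NAMESPACE, and VAULT_TOKEN are already exported as described earlier.

```shell
# Example secret path from the steps above.
SECRET_PATH="secret/test/webapp"

if command -v vault >/dev/null; then
  # Enable the key/value v2 secrets engine at secret/.
  vault secrets enable -path=secret kv-v2

  # Store api-key with the example value at secret/test/webapp.
  vault kv put "$SECRET_PATH" api-key=ABC0DEFG9876

  # Read the secret back to verify it was written.
  vault kv get "$SECRET_PATH"
fi
```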

    Enable Secret Engine

    1. In the Vault UI, set the current namespace to admin/.

    2. Select Secrets engines.

    Update Apono Connector Configuration to Integrate with HashiCorp Vault

    Define vault in your connector using:

    • environment variable: export HASHICORP_VAULT_CONFIG='[{"address":"http://HASHICORP_VAULT_URL","token":"HASHICORP_VAULT_TOKEN"}]'

    • Read from file (docker secrets/secret file mount into the container): export HASHICORP_VAULT_CONFIG_FILE_PATH="/path/to/vault/config.json"

    circle-info

    To authenticate to HashiCorp Vault over SSL/TLS with a custom CA certificate, you can use the following environment variable:

    [{"address":"http://HASHICORP_VAULT_URL","token":"HASHICORP_VAULT_TOKEN", "ca_cert_base64": "BASE64_HASHICORP_VAULT"}]

    To skip certificate verification, use the following environment variable:

    [{"address":"http://HASHICORP_VAULT_URL","token":"HASHICORP_VAULT_TOKEN", "skip_verify": "true"}]

    Fetch the HashiCorp Vault Secret Definition from a Secret Manager

    You can configure HashiCorp Vault to fetch its secret definition from the AWS, GCP, Azure, or Kubernetes secret managers using the following environment variable:


    Configure Integration to Use The HashiCorp Vault Secret

    From your integration configuration page, expand Secret Store, click the HashiCorp tab, and enter the required Secret engine and Secret path.


    Under Organization details, copy your OrganizationID.

    In the Organizational structure section, copy the ID from the Root folder. This is the parent organizational unit for all accounts in your organization.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "organizations:*",
            "ssm:*",
            "cloudformation:*",
            "resource-explorer-2:*",
            "ram:*",
            "iam:PassRole"
          ],
          "Resource": "*"
        }
      ]
    }
    aws rds describe-db-instances \
      --filters "Name=engine,Values=mysql" \
      --query "*[].[DBInstanceIdentifier,Endpoint.Address,Endpoint.Port]"
    mysql -h [Endpoint.Address] -P [Endpoint.Port] -u USER_NAME -p
    CREATE USER 'USER_NAME'@'%' IDENTIFIED BY 'PASSWORD';
    GRANT SHOW DATABASES ON *.* TO 'USER_NAME'@'%';
    GRANT CREATE USER ON *.* TO 'USER_NAME'@'%';  
    GRANT UPDATE ON mysql.* TO 'USER_NAME'@'%';
    GRANT PROCESS ON *.* TO 'USER_NAME'@'%';
    GRANT SELECT ON *.* TO 'USER_NAME'@'%';
    GRANT GRANT OPTION ON *.* TO 'USER_NAME'@'%';
    GRANT EXECUTE,DROP,SELECT,ALTER,ALTER ROUTINE,CREATE,CREATE ROUTINE,CREATE TEMPORARY TABLES,CREATE VIEW,DELETE,INDEX,INSERT,TRIGGER,UPDATE ON *.* TO 'USER_NAME'@'%';  
    GRANT GRANT OPTION ON *.* TO 'USER_NAME'@'%';
    GRANT ROLE_ADMIN ON *.* TO 'USER_NAME'@'%';
    aws rds describe-db-instances \
      --filters "Name=engine,Values=mysql" \
      --query "*[].[DBInstanceIdentifier,Endpoint.Address,Endpoint.Port]"
    aws rds describe-db-instances \
      --filters "Name=engine,Values=mysql" \
      --query "*[].[Endpoint.Address,Endpoint.Port]"
    
    mysql -h [Endpoint.Address] -P [Endpoint.Port] -u USER_NAME -p
    aws rds modify-db-instance \
        --db-instance-identifier DBInstanceIdentifier \
        --apply-immediately \
        --enable-iam-database-authentication
    mysql -h [Endpoint.Address] -P [Endpoint.Port] -u USER_NAME -p
    CREATE USER USER_NAME IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';
    GRANT SHOW DATABASES ON *.* TO 'USER_NAME'@'%';
    GRANT CREATE USER ON *.* TO 'USER_NAME'@'%';  
    GRANT UPDATE ON mysql.* TO 'USER_NAME'@'%';
    GRANT PROCESS ON *.* TO 'USER_NAME'@'%';
    GRANT SELECT ON *.* TO 'USER_NAME'@'%';
    GRANT EXECUTE,DROP,SELECT,ALTER,ALTER ROUTINE,CREATE,CREATE ROUTINE,CREATE TEMPORARY TABLES,CREATE VIEW,DELETE,INDEX,INSERT,TRIGGER,UPDATE ON *.* TO 'USER_NAME'@'%';  
    GRANT GRANT OPTION ON *.* TO 'USER_NAME'@'%';
    { "Version": "2012-10-17", "Statement": [ { "Action": "rds-db:connect", "Resource": "arn:aws:rds-db:::dbuser:*/USER_NAME", "Effect": "Allow" } ] }
    aws iam create-policy --policy-name RDSConnectPolicy --policy-document '{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "rds-db:connect"
                ],
                "Resource": [
                    "arn:aws:rds-db:*:*:dbuser:*/${SAML:sub}"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "rds:DescribeDBInstances"
                ],
                "Resource": [
                    "arn:aws:rds:*:*:db:*"
                ]
            }
        ]
    }'
    mysql -h [Endpoint.Address] -P [Endpoint.Port] -u USER_NAME -p
    CREATE USER 'USER_NAME'@'%' IDENTIFIED BY 'PASSWORD';
    GRANT SHOW DATABASES ON *.* TO 'USER_NAME'@'%';
    GRANT CREATE USER ON *.* TO 'USER_NAME'@'%';  
    GRANT UPDATE ON mysql.* TO 'USER_NAME'@'%';
    GRANT PROCESS ON *.* TO 'USER_NAME'@'%';
    GRANT SELECT ON *.* TO 'USER_NAME'@'%';
    GRANT EXECUTE,DROP,SELECT,ALTER,ALTER ROUTINE,CREATE,CREATE ROUTINE,CREATE TEMPORARY TABLES,CREATE VIEW,DELETE,INDEX,INSERT,TRIGGER,UPDATE ON *.* TO 'USER_NAME'@'%';  
    GRANT GRANT OPTION ON *.* TO 'USER_NAME'@'%';
    GRANT ROLE_ADMIN ON *.* TO 'USER_NAME'@'%';
    CREATE USER 'USER_NAME'@'%' IDENTIFIED BY 'PASSWORD';
    GRANT SHOW DATABASES ON *.* TO 'USER_NAME'@'%';
    GRANT CREATE USER ON *.* TO 'USER_NAME'@'%';  
    GRANT UPDATE ON mysql.* TO 'USER_NAME'@'%';
    GRANT PROCESS ON *.* TO 'USER_NAME'@'%';
    GRANT SELECT ON *.* TO 'USER_NAME'@'%';
    GRANT EXECUTE,DROP,SELECT,ALTER,ALTER ROUTINE,CREATE,CREATE ROUTINE,CREATE TEMPORARY TABLES,CREATE VIEW,DELETE,INDEX,INSERT,TRIGGER,UPDATE ON *.* TO 'USER_NAME'@'%';  
    GRANT GRANT OPTION ON *.* TO 'USER_NAME'@'%';
    GRANT ROLE_ADMIN ON *.* TO 'USER_NAME'@'%';
    $ echo $VAULT_NAMESPACE
    admin
    kubectl create secret generic <SECRET_NAME> --from-literal=<KEY1>=<VALUE1> --from-literal=<KEY2>=<VALUE2>
    kubectl label secret <SECRET_NAME> "apono-connector-read=true"
    helm upgrade apono-connector apono-connector --repo https://apono-io.github.io/apono-helm-charts \
        --set-string apono.token=<APONO_TOKEN> \
        --set-string apono.connectorId=<CONNECTOR_NAME> \
        --set serviceAccount.manageClusterRoles=true \
        --set allowedSecretsToRead={secret1\,secret2\,secret3} \
        --namespace apono-connector 
    aws secretsmanager create-secret \
    --name "<SECRET_NAME>" \
    --tags '[{"Key":"apono-connector-read","Value":"true"}]' \
    --region <REGION> \
    --secret-string '{"KEY1":"VALUE1","KEY2":"VALUE2"}'
    az keyvault secret set \
    --vault-name "<KEYVAULT_NAME>" \
    --name "<SECRET_NAME>" \
    --value '{"<KEY1>": "<VALUE1>", "<KEY2>": "<VALUE2>"}'
    printf '{"KEY1":"VALUE1","KEY2":"VALUE2"}' | gcloud secrets create <SECRET_NAME> \
        --replication-policy="<REPLICATION-POLICY>" \
        --data-file=-
    
    gcloud secrets versions access 1 --secret="<SECRET_NAME>"
    HASHICORP_VAULT_CONFIG='[{"address":"http://HASHICORP_VAULT_URL","token":"HASHICORP_VAULT_TOKEN"},
    {"from_secret_store": "AWS", "region": "AWS_REGION", "secret_id": "AWS_SECRET_ID",},
    {"from_secret_store": "GCP", "project": "GCP_PROJECT_ID", "secret_id": "GCP_SECRET_ID"},
    {"from_secret_store": "AZURE", "AZURE_KEY_VAULT_URL": "vault_url", "name": "SECRET_NAME"},
    {"from_secret_store": "KUBERNETES", "NAMESPACE": "namespace", "name": "SECRET_NAME"}
    ]'
    aws rds modify-db-instance \
        --db-instance-identifier DBInstanceIdentifier \
        --apply-immediately \
        --enable-iam-database-authentication
    mysql -h [Endpoint.Address] -P [Endpoint.Port] -u USER_NAME -p
    CREATE USER USER_NAME IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';
    GRANT SHOW DATABASES ON *.* TO 'USER_NAME'@'%';
    GRANT CREATE USER ON *.* TO 'USER_NAME'@'%';  
    GRANT UPDATE ON mysql.* TO 'USER_NAME'@'%';
    GRANT PROCESS ON *.* TO 'USER_NAME'@'%';
    GRANT SELECT ON *.* TO 'USER_NAME'@'%';
    GRANT EXECUTE,DROP,SELECT,ALTER,ALTER ROUTINE,CREATE,CREATE ROUTINE,CREATE TEMPORARY TABLES,CREATE VIEW,DELETE,INDEX,INSERT,TRIGGER,UPDATE ON *.* TO 'USER_NAME'@'%';  
    GRANT GRANT OPTION ON *.* TO 'USER_NAME'@'%';
    { "Version": "2012-10-17", "Statement": [ { "Action": "rds-db:connect", "Resource": "arn:aws:rds-db:::dbuser:*/USER_NAME", "Effect": "Allow" } ] }
    aws iam create-policy --policy-name RDSConnectPolicy --policy-document '{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "rds-db:connect"
                ],
                "Resource": [
                    "arn:aws:rds-db:*:*:dbuser:*/${SAML:sub}"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "rds:DescribeDBInstances"
                ],
                "Resource": [
                    "arn:aws:rds:*:*:db:*"
                ]
            }
        ]
    }'
    CREATE USER USER_NAME IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';
    GRANT SHOW DATABASES ON *.* TO 'USER_NAME'@'%';
    GRANT CREATE USER ON *.* TO 'USER_NAME'@'%';  
    GRANT UPDATE ON mysql.* TO 'USER_NAME'@'%';
    GRANT PROCESS ON *.* TO 'USER_NAME'@'%';
    GRANT SELECT ON *.* TO 'USER_NAME'@'%';
    GRANT EXECUTE,DROP,SELECT,ALTER,ALTER ROUTINE,CREATE,CREATE ROUTINE,CREATE TEMPORARY TABLES,CREATE VIEW,DELETE,INDEX,INSERT,TRIGGER,UPDATE ON *.* TO 'USER_NAME'@'%';  
    GRANT GRANT OPTION ON *.* TO 'USER_NAME'@'%';
    { "Version": "2012-10-17", "Statement": [ { "Action": "rds-db:connect", "Resource": "arn:aws:rds-db:::dbuser:*/USER_NAME", "Effect": "Allow" } ] }
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
            "Action": [
              "rds-db:connect"
            ],
          "Resource": [
            "arn:aws:rds-db:*:*:dbuser:*/${SAML:sub}"
          ]
        },
        {
          "Effect": "Allow",
            "Action": [
              "rds:DescribeDBInstances"
            ],
            "Resource": [
              "arn:aws:rds:*:*:db:*"
            ]
          }
      ]
    }
    On the Create a secret screen, choose the following values:
    • Upload options: Manual.

    • Name: Type a name for the secret. The secret name must be unique within a Key Vault. The name must be a 1-127 character string, starting with a letter and containing only 0-9, a-z, A-Z, and -. For more information on naming, see Key Vault objects, identifiers, and versioningarrow-up-right

    • Value: Type a value for the secret. Key Vault APIs accept and return secret values as strings.

    • Leave the other values at their defaults. Select Create.
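    The naming rule above (1-127 characters, starting with a letter, containing only 0-9, a-z, A-Z, and -) can be checked before a secret is created. A minimal sketch that encodes the rule exactly as stated above; the function name is illustrative, not part of the Azure SDK:

    ```python
    import re

    # Validates a Key Vault secret name against the rule stated above:
    # 1-127 characters, starts with a letter, only 0-9, a-z, A-Z, and "-".
    # Hypothetical helper, not an Azure SDK function.
    _NAME_RE = re.compile(r"^[A-Za-z][0-9A-Za-z-]{0,126}$")

    def is_valid_secret_name(name: str) -> bool:
        return bool(_NAME_RE.fullmatch(name))

    print(is_valid_secret_name("apono-db-creds"))  # conforming name
    print(is_valid_secret_name("1-bad-name"))      # starts with a digit
    ```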

    In the Secret value field, enter my super secret data.

  • Click the Create secret button.

  • Example output:

    Click Enable new engine.

  • Select KV from the list, and then click Next.

    1. Enter secret in the Path field.

    2. Click Enable Engine to complete.

    Now that you have a secret engine enabled, you will create a new secret.

    Create New Secret

    1. Click Create secret. Enter test/webapp in the Path for this secret field.

    2. Under the Secret data section, enter api-key in the key field, and ABC0DEFG9876 in the value field. You can click on the sensitive information toggle to show or hide the entered secret values.

    Create secret page
    terraform {
      required_providers {
        kubernetes = {
          source = "hashicorp/kubernetes"
          version = "2.32.0"
        }
        helm = {
          source = "hashicorp/helm"
          version = "2.15.0"
        }
      }
    }
    
    provider "helm" {
      kubernetes {
        config_path = "~/.kube/config"
      }
    }
    
    resource "kubernetes_secret" "apono-k8s-secret" {
      metadata {
        name = "<SECRET_NAME>"
        namespace = "<NAMESPACE>"
        labels = {
          "apono-connector-read" = "true"
        }
      }
    
      data = {
        <KEY1> = "<VALUE1>"
        <KEY2> = "<VALUE2>"
      }
      
      type = "Opaque"
    }
    
    resource "helm_release" "apono-helm" {
      name       = "apono-connector"
      repository = "https://apono-io.github.io/apono-helm-charts"
      chart      = "apono-connector"
      namespace  = "<NAMESPACE>"
    
      set {
        name  = "apono.token"
        value = "<APONO_TOKEN>"
        type  = "string"
      }
    
      set {
        name  = "apono.connectorId"
        value = "<CONNECTOR_NAME>"
        type  = "string"
      }
    
      set {
        name  = "serviceAccount.manageClusterRoles"
        value = "true"
      }
      
      set {
        name  = "allowedSecretsToRead"
        value = "{secret1\\,secret2\\,secret3}"
      }
    }
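    In the `allowedSecretsToRead` value above, the commas are backslash-escaped because Helm's `--set` syntax treats an unescaped comma as a value separator (and in a Terraform string literal the backslash itself must be written as `\\`). A quick sketch (hypothetical helper) that builds a value in that format from a Python list:

    ```python
    def helm_set_list(values: list[str]) -> str:
        # Helm's --set parser splits on unescaped commas, so literal
        # commas inside a single {a,b,c} value are escaped as "\,".
        return "{" + "\\,".join(values) + "}"

    print(helm_set_list(["secret1", "secret2", "secret3"]))
    ```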
    resource "aws_secretsmanager_secret" "<SECRET_NAME>" {
      name = "<SECRET_NAME>"
      // This tag allows the Apono connector role to read the secret with predefined policy 
      tags = {
        "apono-connector-read" = "true"
      }
    }
    
    resource "aws_secretsmanager_secret_version" "<SECRET_NAME>" {
      secret_id     = aws_secretsmanager_secret.<SECRET_NAME>.id
      secret_string = jsonencode({
        KEY1 = "VALUE1",
        KEY2 = "VALUE2"
      })
    }
    data "azurerm_key_vault" "<KEY_VAULT>" {
      name                = "<KEY_VAULT_NAME>"
      resource_group_name = "<KEY_VAULT_RESOURCE_GROUP_NAME>"
    }
    
    resource "azurerm_key_vault_secret" "<SECRET_NAME>" {
      name         = "<SECRET_NAME>"
      value        = jsonencode({ "<KEY1>" = "<VALUE1>", "<KEY2>" = "<VALUE2>" })
      key_vault_id = data.azurerm_key_vault.<KEY_VAULT>.id
    }
    resource "google_secret_manager_secret" "<SECRET_NAME>" {
      secret_id = "<SECRET_NAME>"
    
      replication {
        <REPLICATION-POLICY>
      }
    }
    
    resource "google_secret_manager_secret_version" "<SECRET_NAME>-version" {
      secret = google_secret_manager_secret.<SECRET_NAME>.id
    
      secret_data = jsonencode({ KEY1 = "VALUE1", KEY2 = "VALUE2" })
    }
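    Across the AWS, Azure, and GCP examples above, the stored secret value is a flat JSON object of string keys and string values (the shape `jsonencode` produces). A small sketch of building and sanity-checking that payload; the key names are placeholders:

    ```python
    import json

    def build_secret_payload(pairs: dict[str, str]) -> str:
        # The stored value is a flat JSON object with string keys and
        # string values, matching jsonencode({...}) in the Terraform above.
        if not all(isinstance(v, str) for v in pairs.values()):
            raise ValueError("secret values must be strings")
        return json.dumps(pairs, separators=(",", ":"))

    payload = build_secret_payload({"KEY1": "VALUE1", "KEY2": "VALUE2"})
    print(payload)
    ```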
    $ export VAULT_NAMESPACE=admin
    $ vault secrets enable -path=secret kv-v2
    Success! Enabled the kv-v2 secrets engine at: secret/
    $ vault kv put secret/test/webapp api-key="ABC0DEFG9876"
    Key              Value
    ---              -----
    created_time     2021-06-17T02:48:51.643350733Z
    deletion_time    n/a
    destroyed        false
    version          1
    $ vault kv get secret/test/webapp
    ====== Metadata ======
    Key              Value
    ---              -----
    created_time     2021-06-17T02:48:51.643350733Z
    deletion_time    n/a
    destroyed        false
    version          1
    
    ===== Data =====
    Key        Value
    ---        -----
    api-key    ABC0DEFG9876
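    The `vault kv` commands above are convenience wrappers over the KV version 2 HTTP API, where read and write paths gain a `data/` segment after the mount point (so `secret/test/webapp` is served at `secret/data/test/webapp`, while version metadata lives under `secret/metadata/...`). A minimal sketch of that path mapping; the function is illustrative, not part of any Vault client library:

    ```python
    def kv2_read_path(mount: str, secret_path: str) -> str:
        # KV v2 inserts "data/" between the mount point and the secret
        # path for reads and writes (metadata lives under "metadata/").
        return f"{mount.strip('/')}/data/{secret_path.strip('/')}"

    print(kv2_read_path("secret", "test/webapp"))
    ```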