Introduction

The OpsRamp Batch Export integration enables you to extract and store performance data from your IT environment. These metrics help you track the health, availability, and efficiency of critical infrastructure components such as servers, applications, networks, and cloud resources.

The data is exported to external storage solutions such as Amazon S3 and Azure Blob Storage, enabling you to monitor performance trends over time, optimize resource utilization through data-driven insights, and integrate with external analytics tools for deeper analysis.

Install Batch Export integration

Here is how to configure a batch export for the Metrics category type.

  • To configure other export category types, refer to the Export Categories document for more details.
  1. Navigate to Setup → Account. The Account Details screen is displayed.

  2. Select the Integrations tile. The Installed Integrations screen is displayed, with all the installed applications.

  3. If you do not have any installed applications, you are navigated to the Available Integrations screen, which displays all available applications, including newly created applications with their versions.
    Note: Search for the Batch Export application using the search option. Alternatively, search for Exports from the All Categories option and select it.

  4. Click +ADD on the Batch Export tile. The ADD BATCH EXPORT screen is displayed.


  5. Enter the following basic details:

    Mandatory fields
    GENERAL DETAILS
    Field Name       Field Type    Description
    Name             String        Unique name of the export.
    Category Type    Dropdown      Type of data to export: Metrics.
    Client           Dropdown      Select the client for whom you want to export the data.
    Export to        Dropdown      Select the integration: AWS S3 or Blob Storage.
    If the integration is not available, you can create one:
    1. Click anywhere in the dropdown and click +ADD.
      The ADD INTEGRATION window is displayed.


    2. Select an Integration type from the dropdown and enter the required information in the fields.
    • For AWS S3:
      1. Name: Name of the Integration.
      2. Access Key ID: Access key ID of the AWS credentials used to access the bucket.
      3. Secret access key: Secret access key paired with the access key ID, generated from the portal.
      4. Confirm Secret access key: Re-enter the secret access key.
      5. Bucket Name: Name of the AWS S3 bucket for the export data.
      6. Base URI: Data location in the AWS S3 bucket.
        Example: https://s3.regionName.amazonaws.com.
      7. Click ADD. The integration is added.
    • For Blob Storage:
      1. Name: Enter the integration name.
      2. Storage account name: Azure Blob account name.
      3. Secret access key: Access key generated from the portal.
      4. Confirm Secret access key: Re-enter the secret access key.
      5. Container name: Name of the Azure container for the export data.
      6. Base URI: Data location in the container.
        Example: https://portal.azure.com
      7. Click ADD. The integration is added.
    New JSON/Old JSON              Radio button    The data is exported in the selected format.
    Failure Export Notification    Checkbox        If you enable this option, you receive a notification when the export fails.

    SCHEDULE: Metric data will be exported every hour.

  6. Click FINISH. The integration is installed.

If the provided information is correct, the integration is saved without errors.
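As a quick sanity check for the AWS S3 Base URI field above, the value follows the regional endpoint pattern shown in the field description (https://s3.regionName.amazonaws.com). A minimal sketch, with placeholder region, bucket, and key names:

```python
# Build an AWS S3 Base URI following the regional endpoint pattern
# https://s3.regionName.amazonaws.com shown in the field description.
def s3_base_uri(region: str) -> str:
    return f"https://s3.{region}.amazonaws.com"

# Locate an exported object under that endpoint (bucket and key are
# illustrative placeholders, not values from this document).
def s3_object_uri(region: str, bucket: str, key: str) -> str:
    return f"{s3_base_uri(region)}/{bucket}/{key}"

print(s3_base_uri("us-east-1"))
# https://s3.us-east-1.amazonaws.com
print(s3_object_uri("us-east-1", "ops-exports", "metrics/part-0.json"))
# https://s3.us-east-1.amazonaws.com/ops-exports/metrics/part-0.json
```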

Note: Metrics data becomes available in the configured bucket (AWS S3 or Azure Blob Storage) starting from the next hour.
Example: If the request was made at 13:00 GMT, 13:20 GMT, or 13:40 GMT, the data is available on AWS S3 or Azure Blob Storage only after 14:00 GMT.
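The availability rule above can be sketched as a small helper: given a request time, exported data becomes available at the top of the next hour. This is an illustration of the schedule only, not OpsRamp code:

```python
from datetime import datetime, timedelta

def available_after(request_time: datetime) -> datetime:
    """Return the top of the next hour, when exported data becomes available."""
    top_of_hour = request_time.replace(minute=0, second=0, microsecond=0)
    return top_of_hour + timedelta(hours=1)

# Requests at 13:00, 13:20, or 13:40 GMT are all available only after 14:00 GMT.
for minute in (0, 20, 40):
    t = datetime(2024, 1, 1, 13, minute)
    assert available_after(t) == datetime(2024, 1, 1, 14, 0)
```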

The file name format has changed. See View Metric Type Batch Export for more information.

View Metric Type Batch Export

You can view the latest file format for AWS S3 metric batch export. The export file name has the following encoding:


  • (A) schedule of batch export, recurring or on-demand
  • (B) batch export type
  • (C) unique client ID
  • (D) schedule starting timestamp
  • (E) unique ID of the file
  • (F) unique timestamp of the file
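As an illustration only, the six components could be split out of a file name like this. The underscore-delimited layout and the sample file name below are assumptions for the sketch; the actual layout may differ, so check a real exported file in your bucket:

```python
from typing import NamedTuple

class ExportFileName(NamedTuple):
    schedule: str         # (A) schedule of batch export: recurring or on-demand
    export_type: str      # (B) batch export type
    client_id: str        # (C) unique client ID
    schedule_start: str   # (D) schedule starting timestamp
    file_id: str          # (E) unique ID of the file
    file_timestamp: str   # (F) unique timestamp of the file

def parse_export_file_name(name: str) -> ExportFileName:
    """Split a hypothetical underscore-delimited export file name into its six components."""
    stem = name.rsplit(".", 1)[0]   # drop the extension, e.g. .json
    parts = stem.split("_")
    if len(parts) != 6:
        raise ValueError(f"expected 6 components, got {len(parts)}: {name!r}")
    return ExportFileName(*parts)

# Hypothetical file name, purely for illustration.
parsed = parse_export_file_name(
    "recurring_metrics_client123_20240101T1300Z_f001_20240101T1305Z.json")
print(parsed.client_id)  # client123
```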