Automation Reference Guide

Learn how to use automation recipes to manage tasks such as creating, deploying, or updating containers.

See also: Automation Examples, Installing the Akana API Platform 2018.0.x

Supported Platforms: 8.4.x and later

Table of Contents

  1. Overview
  2. Benefits of using automation recipes
  3. Recipe Structure
    1. Includes
    2. Phases
    3. Environment
    4. Repositories
    5. Features
    6. Bundles
    7. Configurations
    8. Tasks
  4. Property Substitution
  5. Running Recipes: Configuring Containers
    1. Recipe Properties
    2. Phases
    3. Logging
    4. Simple Logger
  6. Running Recipes Remotely

Overview

The availability of a full set of container administration services enables the use of declarative configuration recipes. Recipes are simply JSON documents that describe the features, bundles, and configurations to apply to a container, and the tasks to perform on it. Recipes are interpreted by a recipe execution script that is part of the Akana Platform installation ZIP file and can be used to create an instance from scratch or to modify an existing one. By using recipes, you can automate complex configurations without resorting to custom scripting. Container customizations can also be captured in recipes to facilitate repeatable deployment of non-standard configurations or features.

The recipe approach is declarative: recipes state what is to be done, not how it is done.

All modules necessary to process recipes are installed with the Akana Platform. For download location, see Installing the Akana API Platform 2018.0.x (Step 1).

Benefits of using automation recipes

Some of the benefits of using automation recipes:

  • Automation is much faster than manual installation.
  • Using recipes makes it easier to eliminate human error from the installation and configuration process.
  • Recipes make it possible to capture custom configurations and re-use them.

Recipe Structure

A recipe is a JSON document as described by the recipe-schema.json file. By default this file is included in the <installation>/recipes folder.

Includes

To encourage reuse, and to allow configurations to be captured in a repeatable way, recipes can include other recipes. You can reference included recipes with a relative path to the originating file, or by using an absolute URL. Some recipes are simply a set of inclusions, and perform no work on their own. For example, you could use the following recipe, which references four recipes within it, to deploy a PM instance.

{
  "name":"pm-complete",
  "description":"Complete PM Container Recipe on MySQL",
  "phases":[
    "create",
    "deploy"
  ],
  "includes":[
    {
      "location":"bootstrap.json"
    },
    {
      "location":"pm.json"
    },
    {
      "location":"container-identity.json"
    },
    {
      "location":"mysql.json"
    }
  ]
}

In this example, the bootstrap.json recipe is used to create a basic runtime instance. This would be common across almost all instances. The pm.json recipe would deploy PM features, and any PM-specific configuration. As this example shows, recipes can be as fine-grained as necessary. The two remaining recipes perform all configuration necessary to configure a container's identity, and to deploy a MySQL configuration. Both of these would be reusable in many other deployments.

Order of includes does not matter. There is a default limit of 10 on the depth of includes.

The schema for the includes section is shown below.

"includes":{
  "type":"array",
  "description":"Included recipes",
  "uniqueItems":true,
  "items":{
    "type":"object",
    "required":[
      "location"
    ],
    "properties":{
      "name":{
        "description":"A name for the included recipe",
        "type":"string"
      },
      "location":{
        "description":"The included recipe URL",
        "type":"string",
        "format":"uri"
      },
      "username":{
        "description":"The optional user name to use to access the recipe URL",
        "type":"string"
      },
      "password":{
        "description":"The optional password to use to access the recipe URL",
        "type":"string"
      }
    }
  }
}
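
For example, an includes section might mix a relative path with an absolute URL protected by basic authentication. The URL and credentials below are illustrative placeholders, not part of the standard recipes:

"includes":[
  {
    "location":"container-identity.json"
  },
  {
    "name":"custom-config",
    "location":"https://repo.example.com/recipes/custom-config.json",
    "username":"deploy",
    "password":"${REPO_PASSWORD}"
  }
]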

Phases

Certain recipes should only be executed when the instance under configuration is in a certain state. By using phases, recipes can declare when they should and should not execute, based on the runtime state or other conditions.

The pre-defined phases depend on how the recipe is executed, but can include create, deploy, update, finalize, or any user-defined phases. Recipes that specify the create phase are executed when the instance is being created; recipes that specify the deploy phase execute once the container is running. You can also define custom phases, which are useful for controlling the order of execution when necessary. Phases are simply listed as an array of strings.

If any value in the array is an asterisk (*), the recipe applies to all phases.

The schema for the phases section is quite simple:

"phases":{
  "description":"The execution phases for the recipe",
  "type":"array",
  "items":{
    "type":"string"
  }
}
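
For example, a recipe that should run in every phase can simply declare the wildcard:

"phases":[
  "*"
]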

Create phase

Since the create phase must create the required filesystem structure, it can only be executed on the same system as the target instance.

This phase is only supported by the akana.container module (see Running Recipes: Configuring Containers below).

Environment

The Environment section allows execution of a recipe to be controlled based on the presence or absence of particular bundles or features. The required section lists the bundles that must be present for the recipe to execute, while the unless section lists bundles whose presence causes the recipe to be skipped. As the schema below shows, both the required and unless sections can specify a bundle or feature symbolic name, and optionally a version to test for. Alternatively, a filter can be used to match against any bundle headers.

"environment":{
  "type":"object",
  "properties":{
    "required":{
      "type":"array",
      "description":"Any bundles required to be deployed for this recipe to run.",
      "items":{
        "type":"object",
        "required":[
          "symbolicName"
        ],
        "properties":{
          "symbolicName":{
            "type":"string"
          },
          "version":{
            "description":"A version or version range",
            "type":"string"
          },
          "filter":{
            "description":"A filter to use against bundle headers.  An alternative to the symbolicName and version fields.",
            "type":"string"
          }
        }
      }
    },
    "unless":{
      "type":"array",
      "description":"Any bundles required to not be deployed for this recipe to run.",
      "items":{
        "type":"object",
        "required":[
          "symbolicName"
        ],
        "properties":{
          "symbolicName":{
            "type":"string"
          },
          "version":{
            "description":"A version or version range",
            "type":"string"
          },
          "filter":{
            "description":"A filter to use against bundle headers.  An alternative to the symbolicName and version fields.",
            "type":"string"
          }
        }
      }
    }
  }
}
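
As an illustration, the environment section below would let the recipe run only when one bundle is present and another is absent. The symbolic names and version range are placeholders, not actual platform bundles:

"environment":{
  "required":[
    {
      "symbolicName":"com.example.required.feature",
      "version":"[1.0,2.0)"
    }
  ],
  "unless":[
    {
      "symbolicName":"com.example.optional.feature"
    }
  ]
}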

Repositories

Any number of repositories can be configured for use by the recipe. Each repository is specified using a location, and an optional username and password to support basic authentication. The location is expected to point to a repository descriptor.

If a recipe is being executed on the same system as the instance being configured, the repository location can point to a ZIP file (the file name ends with .zip). The recipe downloads and uncompresses the file, and searches the contents for a repository.xml file. If this file is found, the repository is added to the system for the duration of recipe execution.

"repositories":{
  "type":"array",
  "description":"Repositories to examine for bundles and features",
  "uniqueItems":true,
  "items":{
    "type":"object",
    "required":[
      "location"
    ],
    "properties":{
      "location":{
        "description":"The repository URL",
        "type":"string",
        "format":"uri"
      },
      "username":{
        "description":"The optional user name to use to access the repository",
        "type":"string"
      },
      "password":{
        "description":"The optional password to use to access the repository",
        "type":"string"
      }
    }
  }
}
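
A hypothetical repositories section might reference both a hosted repository descriptor and a local ZIP file; the locations and credentials below are placeholders:

"repositories":[
  {
    "location":"https://repo.example.com/akana/repository.xml",
    "username":"reader",
    "password":"${REPO_PASSWORD}"
  },
  {
    "location":"file:///opt/downloads/extra-features.zip"
  }
]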

Features

Features can be installed, uninstalled, or updated. Features are always installed from the configured repositories, and a feature and its dependencies are always resolved before installation. As the schema below shows, features are referenced simply by their symbolic name.

"features":{
  "type":"object",
  "description":"Feature actions",
  "properties":{
    "install":{
      "description":"The features to install",
      "type":"array",
      "uniqueItems":true,
      "items":{
        "type":"string"
      }
    },
    "uninstall":{
      "description":"The features to uninstall",
      "type":"array",
      "uniqueItems":true,
      "items":{
        "type":"string"
      }
    },
    "update":{
      "description":"The features to update",
      "type":"array",
      "uniqueItems":true,
      "items":{
        "type":"string"
      }
    }
  }
}
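
For example, a features section that installs one feature and uninstalls another might look like the following; the feature names are placeholders:

"features":{
  "install":[
    "com.example.feature.gateway"
  ],
  "uninstall":[
    "com.example.feature.legacy"
  ]
}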

Bundles

As with Features, bundles can be installed, uninstalled, or updated. Bundles can be installed from a URL, which can be a file, HTTP, HTTPS, or OBR URL. OBR URLs identify a specific bundle within the configured repositories using the format obr://<symbolic name>[/<version>]. If an OBR URL is used, the specified bundle is resolved and its dependencies are also installed. For other URL formats, such as HTTP and HTTPS, the bundle specification can include a convert attribute, which indicates that the target is a regular JAR file that should be converted to an OSGi bundle before deployment. In the schema below, bundles to uninstall or update are identified by their IDs. This is not the bundle ID number from the framework, but rather the symbolic name and an optional version in the format <symbolic name>[/<version>].

"bundles":{
  "type":"object",
  "description":"Bundle actions",
  "properties":{
    "install":{
      "description":"The bundles to install",
      "type":"array",
      "uniqueItems":true,
      "items":{
        "type":"object",
        "required":[
          "location"
        ],
        "properties":{
          "location":{
            "type":"string",
            "format":"uri"
          },
          "username":{
            "type":"string"
          },
          "password":{
            "type":"string"
          },
          "convert":{
            "type":"boolean"
          }
        }
      }
    },
    "uninstall":{
      "description":"The IDs of any bundles to uninstall",
      "type":"array",
      "uniqueItems":true,
      "items":{
        "type":"string"
      }
    },
    "update":{
      "description":"The IDs of any bundles to update",
      "type":"array",
      "uniqueItems":true,
      "items":{
        "type":"string"
      }
    }
  }
}
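
The sketch below installs one bundle by OBR URL (so its dependencies are also resolved), installs a plain JAR over HTTPS and converts it to an OSGi bundle, and uninstalls a bundle by its ID. All names and URLs are placeholders:

"bundles":{
  "install":[
    {
      "location":"obr://com.example.bundle.extension/1.0.0"
    },
    {
      "location":"https://repo.example.com/libs/plain-library.jar",
      "convert":true
    }
  ],
  "uninstall":[
    "com.example.bundle.legacy/2.1.0"
  ]
}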

Configurations

The Configurations section deals with modifications to the Configuration Admin. Configurations can be added, modified, or removed. Configurations are identified using the pid property, or in the case of a factory configuration, using the factoryPid property. To update or delete factory configurations, you can combine the factory PID with a selection filter, which is a regular OSGi (LDAP) filter that can be used to isolate a specific factory configuration.

Two special PIDs are supported in recipes:

  • system—Indicates that the configuration is for system properties.
  • framework—Indicates that the configuration is for framework properties.

The schema for the configurations section is shown below.

"configurations":{
  "type":"array",
  "description":"Configuration actions",
  "uniqueItems":true,
  "items":{
    "type":"object",
    "properties":{
      "pid":{
        "description":"The configuration PID,  The special value 'system' refers to system properties.",
        "type":"string"
      },
      "factoryPid":{
        "description":"The configuration factory PID.  If specified, a selector should be used to identify the correct configuration.",
        "type":"string"
      },
      "selector":{
        "description":"A normal OSGi filter used to identify the correct configuration when a factory PID is specified.",
        "type":"string"
      },
      "location":{
        "description":"The location of a properties file that will overwrite the specified PID.",
        "type":"string",
        "format":"uri"
      },
      "add":{
        "description":"Properties to add.  If the property already exists, no action will be taken.",
        "type":"object"
      },
      "delete":{
        "description":"Properties to remove.  If there is a value of '*' the PID will be removed.",
        "type":"array",
        "items":{
          "type":"string"
        },
        "uniqueItems":true
      },
      "update":{
        "description":"Properties to update",
        "type":"object"
      }
    }
  }
}

Recipes also support deploying a regular Java properties file to the Configuration Admin by specifying the file's location. As the example below shows, the contents of the file can also be manipulated before the configuration is deployed.

"configurations":[
  {
    "pid":"system",
    "location":"${PRODUCT_HOME}/config/container-system.properties",
    "add":{
      "org.osgi.service.http.port":"${DEFAULT_PORT|9900}",
      "com.soa.http.host":"${DEFAULT_HOST|0.0.0.0}",
      "container.name":"${CONTAINER_NAME}",
      "org.eclipse.jetty.servlet.SessionCookie":"JSESSIONID_${nospace:CONTAINER_NAME}"
    }
  }
]

Tasks

A recipe's Tasks section controls the execution of workflow tasks in an unattended way. Workflow tasks are generally associated with a wizard-style user interface. Tasks are modeled as a series of steps, each of which is executed by providing it with a set of properties.

Tasks and steps provide metadata through the Tasks Service, which is part of the Akana Platform REST API and defines the properties required to complete each step of a task.

Because recipes list the properties to be provided to individual steps, you can use them to execute tasks without knowing the details of the work to be done.

"tasks":{
  "type":"array",
  "description":"Task workflow actions",
  "uniqueItems":true,
  "items":{
    "type":"object",
    "required":[
      "name"
    ],
    "properties":{
      "name":{
        "description":"The task name.",
        "type":"string"
      },
      "presentationName":{
        "description":"The task name.",
        "type":"string"
      },
      "condition":{
        "description":"An optional condition to evaluate that should yield true only if the task is to be executed.",
        "type":"string"
      },
      "steps":{
        "description":"A set of step configurations.",
        "type":"array",
        "items":{
          "type":"object",
          "required":[
            "name"
          ],
          "properties":{
            "name":{
              "description":"The step name",
              "type":"string"
            },
            "presentationName":{
              "description":"The step name",
              "type":"string"
            },
            "properties":{
              "description":"Property values for this step",
              "type":"object"
            }
          }
        }
      }
    }
  }
}

The example below illustrates how database configuration for a MySQL server might be done using a task definition. Note that the ordering of steps in the recipe is not relevant; ordering is controlled by the Tasks Service and by the dependencies declared by each task definition. Also, any defined defaults for property values are used, so it is valid to omit a property wherever the default is sufficient, or even to specify an empty properties section if all the defaults are acceptable.

"tasks":[
  {
    "name":"com.soa.database",
    "steps":[
      {
        "name":"db.config.action",
        "properties":{
          "db.config.action":"create.new.database"
        }
      },
      {
        "name":"select.database",
        "properties":{
          "database.selected":"mysql"
        }
      },
      {
        "name":"set.database.options",
        "properties":{
          "admin.username":"${DB_USERNAME|root}",
          "admin.password":"${DB_PASSWORD|password}",
          "user":"${DB_USERNAME|user123}",
          "password":"${DB_PASSWORD|password}",
          "server":"${DB_SERVER|localhost}",
          "port":"${DB_PORT|3306}",
          "database":"${DB_NAME|mydb}"
        }
      }
    ]
  }
]

Property Substitution

As some of the examples have shown, in many places recipes can accept variables defined using an Ant-like syntax. Variables can be passed into recipes in several ways, including:

  • By using system properties
  • By defining environment variables
  • From a file, when using the platform recipe script

The sections below provide more detail about property substitution.

Simple Usage
As with Ant, variables are referenced using the standard pattern ${<variable name>}.
Defaults
Defaults can be specified using a pipe between the variable name and the default value, as ${<variable name>|<default value>}. For example ${PM_PORT|9900}.
Directives
You can use directives to condition values. The currently defined directives are:
  • nospace—Replaces all spaces with an underscore to make the value safe in situations that cannot tolerate spaces in values.
  • hash—From the value, generates a hash that is compatible with the general password hashing algorithm used by the platform.
  • url—Ensures that the value is a URL. This is particularly useful to translate file names into URLs.
  • abs—Interprets the value as a path and converts it to an absolute pathname.
Directives precede the variable name, and are separated from the name using a colon (:). For example ${nospace:CONTAINER_NAME}. Directives are applied after any defaults have been resolved.
Macros
Macros are used to calculate defaults rather than using a static value. The macros currently defined are:
  • uuid()—Generates a UUID, for example ${NODE_ID|uuid()}.
  • uuid32()—Generates a UUID with only 32 characters. This is a constraint on key lengths imposed by PM. For example ${CONTAINER_KEY|uuid32()}.
  • now()—The current time is available using the now() macro. The value is in milliseconds since epoch form.
  • hostname()—Returns the hostname of the machine running the recipe.
You can also use macros on their own, without an associated property name. For example, to get the current timestamp, you can use the following:
${now()}
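
These forms can be combined. A few illustrative substitutions (the variable names are placeholders):

  • ${CONTAINER_NAME|pm01} resolves CONTAINER_NAME, falling back to the static default pm01.
  • ${CONTAINER_KEY|uuid32()} uses a macro as the default value.
  • ${nospace:CONTAINER_NAME|My Container} resolves the default first, then applies the nospace directive to the result.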

Running Recipes: Configuring Containers

The most common way to use recipes is through the new akana.container Python module. This is an updated version of the legacy soa.container module that has traditionally been used to create container instances. The new module also allows containers to be created, but all logic is encapsulated in recipes rather than embedded in Java or Python code.

The platform also includes a basic set of recipes in the $INSTALL_DIR/recipes directory. These recipes allow for the creation of a basic container and are often used by higher-level recipes to establish the foundation for other product features. Generally, additional recipes are placed into $INSTALL_DIR/recipes when the deployment ZIP files for individual products are expanded.

Using the akana.container module is straightforward. From the $INSTALL_DIR/bin directory, run:

> ./jython.sh -m akana.container --recipe <recipe name>

This is the most basic usage. The -m option tells Jython to run the akana.container module. In general, however, recipes require properties to control the configuration; these can be passed in using several mechanisms.

Recipe Properties

As mentioned earlier, all properties in a recipe are identified using an Ant-like syntax: ${<property name>}. Recipes can extract properties from a number of places. The system searches for a match for property definitions in these locations, in order:

  1. In any configured properties files.
  2. In system properties.
  3. In environment variables.

To use a properties file, use the --props command-line option:

> ./jython.sh -m akana.container --recipe <recipe name> --props recipe.properties

Properties files are quite powerful, because any properties go through variable substitution. In other words, values in a properties file can include references to other properties from the properties file, from system properties, or from environment variables. This allows properties files to be a little more general. It also allows properties files to map system or environment variables to the property names expected by any executed recipes. For example, the following properties file could be used to create a basic container:

INSTALL_DIR=${user.dir}/..
CONTAINER_NAME=${container.name}
DEFAULT_PORT=${default.port}

If the recipe is executed from the installation's bin directory, INSTALL_DIR is the top-level installation folder. In the same way, you can define the container.name and default.port properties as system properties or environment variables. You could even hard-code the values in the recipe.
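
For instance, with the properties file above, the remaining values could be supplied as system properties on the command line. The container name and port shown are illustrative:

> ./jython.sh -Dcontainer.name=pm01 -Ddefault.port=9900 -m akana.container --recipe <recipe name> --props recipe.properties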

Phases

By default, the akana.container module runs through the phases: create, deploy, update, and finalize. As mentioned earlier, the create phase has special meaning to the akana.container module. If a recipe is executed in the create phase and the container does not already exist, the akana.container module tries to create the basic file structure for the instance using the config wizard logic. It then runs the create recipes and tries to start the container. If the instance already exists, any create phase recipes are ignored.

You can control what phases execute using the --phases or -p command-line options. For example, the command below executes only the deploy phase.

> ./jython.sh -m akana.container --recipe <recipe name> --props recipe.properties --phases deploy

You can also use this to create custom phases and execute them in any sequence.

Logging

When running a recipe, logging is very helpful for tracking down problems or monitoring progress. The recipe scripts use SLF4J, and because no default implementation is shipped with the scripts, it is sometimes necessary to provide one. To deploy a logger implementation, simply drop the JAR/bundle into $INSTALL_DIR/lib/script.

Simple Logger

The simplest SLF4J logger implementation is slf4j-simple, which logs everything to stderr. The JAR for this logger implementation is slf4j-simple-1.7.19.jar, available in the $INSTALL_DIR/lib/ext folder of your installation. If you deploy this file to the $INSTALL_DIR/lib/script folder, you will usually want to set the following system properties to control the logger's behavior:

  • org.slf4j.simpleLogger.logFile=System.out

    By default, all output goes to stderr, which might not be useful. This option directs all output to stdout.

  • org.slf4j.simpleLogger.defaultLogLevel=info

    The default log level is info, but you can set it to the usual levels (trace, debug, warn, error) using this system property.

To execute a recipe using the simple logger, the command line might look like the following:

> ./jython.sh -Dorg.slf4j.simpleLogger.logFile=System.out -m akana.container --recipe <recipe name> --props <properties file>
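
To raise the log level at the same time, the second system property can be added in the same way:

> ./jython.sh -Dorg.slf4j.simpleLogger.logFile=System.out -Dorg.slf4j.simpleLogger.defaultLogLevel=debug -m akana.container --recipe <recipe name> --props <properties file>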

Running Recipes Remotely

The akana.container module runs in the same deployment as the target container. This allows it to create and control instances. However, recipes can be executed remotely against a container using only the container URL (and the appropriate credentials). The container must already be running, since it cannot be created or started remotely. The create phase is ignored when executing remotely.

To run a recipe against a remote instance, use the akana.recipe module. For example:

> ./jython.sh -m akana.recipe --recipe <recipe name> --url http://localhost:9900 --user administrator --password password

The parameters are:

  • --recipe The location of the recipe.
  • --url The URL of the instance to run the recipe against. Only the scheme, host, and port are relevant.
  • --user The username to use when running the recipe.
  • --password The password corresponding to the username.
  • --props The location of a properties file to use.
  • --phases A colon-separated list of phases to execute. By default, this is deploy:update:finalize.
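
A fuller invocation that supplies a properties file and restricts execution to specific phases might look like the following; the host name and file names are placeholders:

> ./jython.sh -m akana.recipe --recipe pm.json --url http://pm01.example.com:9900 --user administrator --password password --props recipe.properties --phases deploy:update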
