
Solution development

Now that we have covered the relevant concepts, such as schema components, structure entities, properties and instances, we will build on the project we defined earlier.

Considerations

Before building the technical solution, we will step through some important related considerations.

Offline datasources

Normally ELARA operates on live streaming data; in some cases, however, such as when developing proofs-of-concept or examples, it is useful to build offline solutions. For this reason, and so that the whole solution source can be provided, the following implementation will assume that all external systems are static file streams; in this case the incremental changes propagated from each datasource will occur as a single batch change containing all data.

Expressions

It is worth explaining the basics of the ELARA Expression language prior to development. An Expression is either a data Value or a Function based on an abstract syntax tree of type EastType (ELARA Abstract Syntax Tree Type); in other words, data in ELARA can be thought of as strongly typed, functional (streaming) data expressions. As described previously when introducing the concept of properties, any one of, or any combination of, the following types is valid:

Primitive types:
  • string
  • datetime
  • boolean
  • float
  • integer

Collection types:
  • dict
  • set
  • array
  • struct

As an example, a valid expression might be a Variable representing a sale of some goods based on some complex nested type such as:

let expression = Variable(
    // the variable name
    'a variable name',
    // the variable type
    StructType({
        // the variable is an object with some
        //  primitive types
        identifier: 'string',
        date: 'datetime',
        amount: 'float',
        qty: 'integer',
        isActivated: 'boolean',
        // as well as a set
        categories: 'set',
        // and also an array of objects
        lineItems: ArrayType(
            StructType({
                // containing a dictionary
                prices: DictType('integer'),
                // another object
                lineAmount: StructType({
                    identifier: 'string',
                    date: 'datetime',
                    amount: 'float',
                }),
                // and another set
                tags: 'set',
            }),
        )
    }));

A Variable is a named Function type Expression. The full syntax, with over seventy Function type examples, can be found in the expression module reference; below is a selection:

  • Print: Print a primitive value to a string (with an optional format string)
  • Let: Defines a new variable in scope that can be accessed by the inner `ast`
  • IfElse: Return x1 if `predicate` is true, or x2 otherwise
  • Convert: Convert `from` to the specified `type`
  • Base64ToAscii: Convert a Base64 string to an ASCII string.
  • ToJson: Return a string containing a JSON encoding the East value
  • Intersect: Return a set which is the intersection of `first` and `second`
  • Insert: Return an array Expression where `value` has been inserted at the end of `collection`
  • Get: Get a value from a struct Expression corresponding to the given `key`. If `key` is not found, returns null.
  • ToDict: Return a dictionary where each entry is calculated from elements of the input dictionary Expression.
  • Filter: Return a dictionary containing only values where the `predicate` was `true`.
  • Sort: Sort an array Expression in place, according to an ordering defined by the comparison function `isless`.
  • Reduce: Loop over the values in an array Expression in order, performing a reduction.
  • MapDict: Return a dictionary mapping the values of a dictionary Expression through a function.
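To make the composition of Function type Expressions concrete, here is a minimal, hypothetical sketch of an expression tree and evaluator in plain TypeScript. The Expr type, the evaluate function and the chosen node kinds are illustrative simplifications only, not the real East API:

```typescript
// Illustrative sketch: an expression is either a constant value or a
// named function over sub-expressions, much like the East AST above.
type Expr =
    | { kind: "Const"; value: number | string | boolean }
    | { kind: "IfElse"; predicate: Expr; x1: Expr; x2: Expr }
    | { kind: "Add"; first: Expr; second: Expr }
    | { kind: "Print"; from: Expr };

function evaluate(e: Expr): number | string | boolean {
    switch (e.kind) {
        case "Const": return e.value;
        case "IfElse": return evaluate(e.predicate) ? evaluate(e.x1) : evaluate(e.x2);
        case "Add": return (evaluate(e.first) as number) + (evaluate(e.second) as number);
        case "Print": return String(evaluate(e.from));
    }
}

// Print(Add(2, 3)) evaluates to "5"
const example: Expr = {
    kind: "Print",
    from: {
        kind: "Add",
        first: { kind: "Const", value: 2 },
        second: { kind: "Const", value: 3 },
    },
};
```

The point is only that function-type expressions nest: an outer Print consumes the value produced by an inner Add, exactly as ELARA expressions compose.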

The EDK lib package also provides a StdLib module including functions that return commonly used Expression patterns, below is a selection:

  • Floor: Round datetime `value` down to a whole time `unit` ("year", "month", "week", "day", "hour", "minute", "second").
  • ConvertDict: Convert a float DictType to an integer DictType
  • PrintTruncatedCurrency: Return a comma separated and rounded currency string from a float or integer
  • DayNameShort: Return a Print function to convert a datetime Expression to an abbreviated day of week name
  • AddAll: Add all collection values into a single float or integer Expression.
  • ToEntries: Return an ArrayType EastFunction to convert a dictionary into an array of key & value pairs
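As an illustration of the Floor pattern above, a plain-TypeScript sketch of rounding a datetime down to a whole unit might look like the following; floorDate and its Unit type are assumptions made for the sketch, not StdLib exports:

```typescript
// Illustrative only: round a Date down to a whole "minute", "hour" or
// "day", analogous to the StdLib Floor pattern described above.
type Unit = "minute" | "hour" | "day";

function floorDate(value: Date, unit: Unit): Date {
    const d = new Date(value.getTime()); // copy, don't mutate the input
    d.setUTCSeconds(0, 0);               // zero seconds and milliseconds
    if (unit === "minute") return d;
    d.setUTCMinutes(0);
    if (unit === "hour") return d;
    d.setUTCHours(0);
    return d;
}

const t = new Date(Date.UTC(2021, 5, 14, 13, 45, 30));
const hour = floorDate(t, "hour"); // 2021-06-14T13:00:00.000Z
```

The real Floor additionally supports "year", "month", "week" and "second" units and returns an Expression rather than a plain value.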

Plugins

Previously, when describing the enterprise schema, we briefly mentioned the relevance of a plugin. As a recap, while an ELARASchema is a partial or full description of a solution, each component within the ELARASchema is really a structured object that contains Expression properties. For example, a RestApiDataSource is a structured object where properties such as the OAuth2 authentication token, endpoint urls and response bodies are all defined as Expressions.

Given the modular nature of an ELARASchema, it is common practice to apply the mergeSchemas function to merge multiple ELARASchema objects into a single one. The plugin module reference details the commonly used partial ELARASchema generation functions which can be merged into a solution; below is a selection of available plugins:

  • DataSourcePlugin: Utility plugin to simplify creating diagnostic content for DataSources.
  • TablesPlugin: Create a diagnostic page showing the values within one or more Tables.
  • OptionsPlugin: Create a page to facilitate manual and sensitivity Option interaction in the UI
  • XeroPlugin: Create a streaming DataSource for the Xero API with supporting diagnostic pages.
  • MLFunctionPlugin: Create results pages for diagnosing an MLFunction.
  • SimulationPlugin: Construct a pipeline combining simulation results for the specified structure entity.
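Conceptually, merging partial schemas can be sketched as concatenating their component lists. The PartialSchema shape and mergePartialSchemas function below are hypothetical simplifications of what mergeSchemas does, not the real API:

```typescript
// Illustrative sketch: each partial schema carries lists of components,
// and merging concatenates them into one schema. The real mergeSchemas
// handles many component kinds and conflict checks.
interface PartialSchema {
    datasources: string[];
    pipelines: string[];
}

function mergePartialSchemas(...schemas: PartialSchema[]): PartialSchema {
    return schemas.reduce(
        (merged, s) => ({
            datasources: [...merged.datasources, ...s.datasources],
            pipelines: [...merged.pipelines, ...s.pipelines],
        }),
        { datasources: [], pipelines: [] },
    );
}

const merged = mergePartialSchemas(
    { datasources: ["sales"], pipelines: [] },
    { datasources: ["covid"], pipelines: ["weekly_sales"] },
);
// merged.datasources → ["sales", "covid"]
```

This is why plugins compose so well: each plugin returns a partial schema, and the solution is simply the merge of all of them.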

Create deployment

To start, in preparation for later steps, we need to create a deployment configuration in the project using the edk add deployment command. Replace 'YOUR_SERVER_URL' with the url provided to you by support@elara.ai:

$ edk add deployment --server YOUR_SERVER_URL --name demo --warn
✔ add deployment succeeded

Now we can start to define the project components to build a solution.

Create application

Now that we have run through the difference between an Expression and a Plugin, we can use the edk add plugin command to add a plugin asset, allowing us to define some important administrative components: a UI application for viewing results, a user, and registration of environment variables.

$ edk add plugin --name "Application" --def_dir src/plugin
✔ add plugin succeeded

We can add these to the plugin definition, leveraging one of the EDK lib plugins:

import * as ELARA from "@elaraai/edk/lib"
import { ApplicationPlugin, Environment, EnvironmentVariable, SuperUser } from "@elaraai/edk/lib"

export default ELARA.Schema(
    // an ApplicationPlugin is a function that generates a predefined partial
    //  ELARASchema that creates a secure Application (web app) component
    //  so that we can inspect all the tables in our solution.
    ApplicationPlugin({
        name: "Gift Shop",
        users: [
            // SuperUser is a function to generate an ELARASchema User
            //  component, with predefined permissions
            SuperUser({
                email: 'support@elara.ai',
                name: 'Admin',
                // an Environment Expression returns the value of
                //  a registered environment variable
                password: Environment('ADMIN_PASSWORD'),
            })
        ],
        environments: [
            // an EnvironmentVariable registers a named environment variable
            //  in the context of the solution
            EnvironmentVariable({ name: 'ADMIN_PASSWORD' })
        ],
    })
)

As defined in the Enterprise schema overview, a plugin is a project asset that produces a whole ELARASchema.

Since we registered the ADMIN_PASSWORD environment variable, in order to use our application we will also add a super-secret password to our system, along with the EDK server client id and secret provided by support@elara.ai:

$ EDK_CLIENT_ID=...CSpZaTV98dNDd...
$ EDK_CLIENT_SECRET=...amhHxAydSE27g...
$ ADMIN_PASSWORD=...601a9e88da8ee99cf082...

We now have a secure web app defined, along with credentials based on an environment variable storing a super-secret admin password.

Create datasources

In typical solutions you should expect to integrate dynamic datasources, such as databases and APIs. As communicated earlier, for the sake of simplicity we will build a static solution based on .csv files. We will therefore assume that the data is available in the following files:

  • sales.csv: Individual gift box sale line items.
  • purchases.csv: Individual supply purchase line items.
  • covid.csv: Daily cumulative covid cases for the surrounding area.
  • rosters.csv: Individual roster entries (i.e. shift plans).
  • shifts.csv: Individual timesheet entries (i.e. shift execution).

Initially we will base our solution on the sales and procurement processes for the Gift Shop as defined in the structure instance overview. Later, once we have some initial results, we can add more complexity to the solution by adding the payroll process, as well as the labour aspect of sales (labour use).

Add datasources assets

Assuming these files are in a project directory we created called files, we can use the edk add datasource command to let the EDK create a datasource for each file in the directory:

$ edk add datasource csv --file files --def_dir src/source
✔ add csv datasource succeeded

The above command will have created five datasources; for example, one will refer to the sales data:

import * as ELARA from "@elaraai/edk/lib"
export default ELARA.CsvSourceSchema({
    // the edk generated the name 'Sales' for our datasource
    name: "Sales",
    // the edk also defined the local file location; on deployment this
    //  file will be packaged in the deployment artefact
    path: ELARA.Const("files/sales.csv"),
})

Detect datasource expressions

At the moment the datasources have not been fully defined, and require further definition of the primary_key expression as well as the output expressions. Fortunately, if we like, we can automate this process with the edk detect command, which will discover the required content:

$ edk detect --defaults --assets covid.source purchases.source rosters.source sales.source shifts.source
✔ detect datasource succeeded

Now each datasource also defines expressions for the outputs and a valid primary_key; for example, the purchases datasource now looks like this:

import * as ELARA from "@elaraai/edk/lib"
export default ELARA.CsvSourceSchema({
    name: "Purchases",
    path: ELARA.Const("files/purchases.csv"),
    // the edk has detected a valid primary key as the SupplyID; this
    //  needs to be a string type Expression
    primary_key: ELARA.Variable("SupplyID", 'string'),
    // the edk has detected the Expression for each column in the csv; notice the EDK
    //  implemented each as a Parse expression in case, for example, some values in the
    //  csv are poorly formed (such as dates).
    selections: {
        SupplyID: ELARA.Parse(ELARA.Variable("SupplyID", 'string')),
        Item: ELARA.Parse(ELARA.Variable("Item", 'string')),
        Qty: ELARA.Parse(ELARA.Variable("Qty", 'integer')),
        Date: ELARA.Parse(ELARA.Variable("Date", 'datetime')),
        Price: ELARA.Parse(ELARA.Variable("Price", 'float')),
    },
})

Notice that even though the daily covid cases file had no identifier column, the primary_key expression was detected as a Print expression of the Date; Print converts an expression of any other type to a string expression.

import * as ELARA from "@elaraai/edk/lib"
export default ELARA.CsvSourceSchema({
    name: "Covid",
    path: ELARA.Const("files/covid.csv"),
    // the edk couldn't find a string value to use as the key; given that the date
    //  is unique for each row in the csv, the EDK chose to instead use the Print expression
    //  to convert the date to a string for the key.
    primary_key: ELARA.Print(ELARA.Variable("Date", 'datetime')),
    selections: {
        Date: ELARA.Parse(ELARA.Variable("Date", 'datetime')),
        CovidCases: ELARA.Parse(ELARA.Variable("CovidCases", 'integer')),
    },
})

Update datasource components

Now that we have a collection of assets, we can use the update command to generate the related gen files, which will create usable component files in the gen directory:

$ edk update
✔ update succeeded

The CLI reference contains a detailed explanation of the relevance of gen files.

Add datasource plugin

Now that we have datasources and generated components, we can add a diagnostic page to the UI to view the outputs of the datasources. We can do this by adding a DataSourcePlugin to the application that was created earlier:

import * as ELARA from "@elaraai/edk/lib"
import { ApplicationPlugin, DataSourcePlugin } from "@elaraai/edk/lib"

import covid from '../../gen/covid.source';
import purchases from '../../gen/purchases.source';
import rosters from '../../gen/rosters.source';
import sales from '../../gen/sales.source';
import shifts from '../../gen/shifts.source';
export default ELARA.Schema(
    ApplicationPlugin({
        name: "Gift Shop",
        schemas: {
            // the DataSourcePlugin is a function that creates a partial solution
            //  containing an application page showing the data associated with each
            //  datasource input.
            DataSources: DataSourcePlugin({
                datasources: [covid, purchases, rosters, sales, shifts]
            })
        },
        // ...
    })
)
Note

The DataSourcePlugin is inserted into the schemas property of the ApplicationPlugin; this will create a menu item and diagnostic page for each datasource, and insert them into the application. All other underlying ELARASchema components created by the DataSourcePlugin will also be embedded into the output and need no further definition. Also note that we have imported some datasources from the gen directory; these are the generated components that were created when we ran edk update.

If we wanted, at this point we could run edk build, followed by edk deploy demo to deploy the application in its current state. At this stage the application would allow us to view the data from our datasources:

Create pipelines

Now that we have some data for the business, we would like to define the business processes, resources and agents to optimise. But first we will post-process our data to ensure it fits well into our defined structure and a suitable discretisation. Rather than creating individual pipelines to process the data, we will leverage a TimeAggregatePlugin to create the following:

  • WeeklyPurchases: aggregated (per supply) purchases and amount per week
  • WeeklyCovid: aggregated covid cases per week
  • WeeklySales: aggregated (per product) sales qty per week
  • HourlySales: aggregated (per product) sales qty per hour
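The aggregations above follow a bucket-and-sum pattern that can be sketched in plain TypeScript: bucket rows by the start of their time interval, then sum a value per key within each bucket. The weekStart and weeklyQtyPerProduct names and row shape below are illustrative only, not plugin APIs:

```typescript
// Illustrative analogue of TimeAggregation + CollectDictSum: group sale
// rows by the Monday that starts their week, summing qty per product.
interface SaleRow { date: Date; product: string; qty: number }

function weekStart(d: Date): string {
    const day = new Date(Date.UTC(d.getUTCFullYear(), d.getUTCMonth(), d.getUTCDate()));
    const offset = (day.getUTCDay() + 6) % 7; // Monday-based week
    day.setUTCDate(day.getUTCDate() - offset);
    return day.toISOString().slice(0, 10);
}

function weeklyQtyPerProduct(rows: SaleRow[]): Map<string, Map<string, number>> {
    const out = new Map<string, Map<string, number>>();
    for (const row of rows) {
        const week = weekStart(row.date);
        const dict = out.get(week) ?? new Map<string, number>();
        dict.set(row.product, (dict.get(row.product) ?? 0) + row.qty);
        out.set(week, dict);
    }
    return out;
}

const totals = weeklyQtyPerProduct([
    { date: new Date(Date.UTC(2021, 5, 14)), product: "GiftBox", qty: 2 }, // Monday
    { date: new Date(Date.UTC(2021, 5, 16)), product: "GiftBox", qty: 1 }, // Wednesday, same week
]);
```

The real plugin expresses the same idea as streaming pipelines and tables rather than in-memory maps.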

Add timeaggregate plugin

Our data defines individual records occurring over time (such as sales and purchases); in order to achieve consistent discretisation within processes, we should aggregate the data into appropriate time intervals. We can use a plugin for this:

$ edk add plugin --name "Time Aggregate" --def_dir src/plugin
✔ add plugin succeeded

And now we can define the intended time aggregations within the TimeAggregatePlugin:

import * as ELARA from "@elaraai/edk/lib"
import { CollectDictSum, Maximum, Multiply, Sum, TimeAggregation, TimeAggregatePlugin } from "@elaraai/edk/lib"
import covid from '../../gen/covid.source';
import purchases from '../../gen/purchases.source';
import sales from '../../gen/sales.source';
import time_intersection from '../../gen/time_intersection.plugin';
export default ELARA.Schema(
    // the TimeAggregatePlugin is a function that creates a partial solution
    //  containing pipelines and tables with aggregated data.
    TimeAggregatePlugin({
        inputs: {
            // collect the sales table records into a collection of
            //  total sales per product per week
            TotalWeeklySales: TimeAggregation({
                table: sales.output,
                value: sales.output.fields.Date,
                unit: 'week',
                aggregations: {
                    TotalQty: CollectDictSum(sales.output.fields.Product, 1n),
                }
            }),
            // collect the sales table records into a collection of total
            //  sales per product per hour
            TotalHourlySales: TimeAggregation({
                table: sales.output,
                value: sales.output.fields.Date,
                unit: 'hour',
                aggregations: {
                    TotalQty: CollectDictSum(sales.output.fields.Product, 1n),
                }
            }),
            // collect the purchases table records into a collection of total
            //  purchases per supply per week
            TotalWeeklyPurchases: TimeAggregation({
                table: purchases.output,
                value: purchases.output.fields.Date,
                unit: 'week',
                aggregations: {
                    TotalQty: CollectDictSum(purchases.output.fields.Item, purchases.output.fields.Qty),
                    TotalAmount: CollectDictSum(purchases.output.fields.Item, Multiply(
                        purchases.output.fields.Qty,
                        purchases.output.fields.Cost
                    ))
                }
            }),
            // collect the covid table records into the total covid cases per week
            TotalWeeklyCovid: TimeAggregation({
                table: covid.output,
                value: covid.output.fields.Date,
                unit: 'week',
                aggregations: {
                    TotalCovidCases: Sum(covid.output.fields.CovidCases),
                }
            }),
            // collect the covid table records into the maximum covid cases per day
            TotalDailyCovid: TimeAggregation({
                table: covid.output,
                value: covid.output.fields.Date,
                unit: 'day',
                aggregations: {
                    TotalCovidCases: Maximum(covid.output.fields.CovidCases),
                }
            })
        }
    })
)

Update pipeline components

Before we use the time aggregate plugin, we need to update the project to generate the related gen files:

$ edk update
✔ update succeeded

Add process pipelines

Now that we have all the aggregated data we can create specific data tables for each process in the business. For simplicity, rather than creating multiple pipeline assets we will use a plugin:

$ edk add plugin --name "Structure Pipelines" --def_dir src/plugin
✔ add plugin succeeded

And now we can define the pipelines to prepare the data for the structure entities. Each pipeline will be created with the PipelineSchema function, which creates a partial ELARASchema containing a pipeline and associated tables. We will merge the partial pipeline ELARASchema outputs together with the mergeSchemas function:

import * as ELARA from "@elaraai/edk/lib"
import { AggregateOperation, DateKey, JoinOperation, Maximum, Mean, Minimum, PipelineSchema, mergeSchemas } from "@elaraai/edk/lib"
import purchases_source from '../../gen/purchases.source';
import sales_source from '../../gen/sales.source';
import time_aggregate from '../../gen/time_aggregate.plugin';
// for convenience get the tables produced from the plugin
const total_daily_covid = time_aggregate.pipeline.TotalDailyCovid.output_table
const total_hourly_sales = time_aggregate.pipeline.TotalHourlySales.output_table
const total_weekly_covid = time_aggregate.pipeline.TotalWeeklyCovid.output_table
const total_weekly_purchases = time_aggregate.pipeline.TotalWeeklyPurchases.output_table
const total_weekly_sales = time_aggregate.pipeline.TotalWeeklySales.output_table
// same as above, get the datasource tables
const sales = sales_source.output
const purchases = purchases_source.output
export default ELARA.Schema(
    // the mergeSchemas function merges one or more ELARASchemas together into a single ELARASchema
    mergeSchemas(
        // the PipelineSchema returns a partial ELARASchema containing a Pipeline and associated Tables
        PipelineSchema({
            name: 'Products',
            input_table: sales,
            operations: [
                // make a table of products, by aggregating the sales with the key being
                //  the product name; we will also get some price statistics
                AggregateOperation({
                    group_field: sales.fields.Product,
                    group_value: sales.fields.Product,
                    aggregations: {
                        Price: Mean(sales.fields.Price),
                        MinPrice: Minimum(sales.fields.Price),
                        MaxPrice: Maximum(sales.fields.Price),
                    }
                }),
            ]
        }),
        // make a table of purchase items, by aggregating the purchases with the key being
        //  the item name; we will also get some cost statistics
        PipelineSchema({
            name: 'Supplies',
            input_table: purchases,
            operations: [
                AggregateOperation({
                    group_field: purchases.fields.Item,
                    group_value: purchases.fields.Item,
                    aggregations: {
                        Cost: Mean(purchases.fields.Cost),
                        MinCost: Minimum(purchases.fields.Cost),
                        MaxCost: Maximum(purchases.fields.Cost),
                    }
                }),
            ]
        }),
        // make a table of sales, by joining daily
        //  covid cases into the hourly sales
        PipelineSchema({
            name: 'Sales',
            input_table: total_hourly_sales,
            operations: [
                // the JoinOperation performs a relational join based on key Expressions; in
                //  this case we will use the StdLib function to create a DateKey for the relevant
                //  interval unit
                JoinOperation({
                    source_table: total_daily_covid,
                    source_key: DateKey(total_daily_covid.fields.Date, 'day'),
                    target_key: DateKey(total_hourly_sales.fields.Date, 'day'),
                    join_type: 'Left'
                })
            ]
        }),
        // make a table of purchases, by joining weekly sales and weekly
        //  covid cases into the weekly purchases
        PipelineSchema({
            name: 'Purchases',
            input_table: total_weekly_purchases,
            operations: [
                JoinOperation({
                    source_table: total_weekly_covid,
                    source_key: DateKey(total_weekly_covid.fields.Date, 'week'),
                    target_key: DateKey(total_weekly_purchases.fields.Date, 'week'),
                    join_type: 'Left'
                }),
                JoinOperation({
                    source_table: total_weekly_sales,
                    source_key: DateKey(total_weekly_sales.fields.Date, 'week'),
                    target_key: DateKey(total_weekly_purchases.fields.Date, 'week'),
                    join_type: 'Left'
                })
            ]
        }),
    )
)
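The left joins above key both tables by a date truncated to a common unit, so that, for example, hourly sales rows can pick up the matching daily covid value. A minimal plain-TypeScript sketch of that behaviour, with assumed row shapes and names, is:

```typescript
// Illustrative left join keyed on the day portion of a date string;
// unmatched target rows keep a null for the joined column, as with a
// relational left join.
interface HourlySale { date: string; qty: number }  // e.g. "2021-06-14T09"
interface DailyCovid { day: string; cases: number } // e.g. "2021-06-14"

function leftJoinOnDay(sales: HourlySale[], covid: DailyCovid[]) {
    const byDay = new Map<string, number>(
        covid.map((c): [string, number] => [c.day, c.cases]),
    );
    return sales.map(s => ({
        ...s,
        cases: byDay.get(s.date.slice(0, 10)) ?? null, // truncate hour key to day key
    }));
}

const joined = leftJoinOnDay(
    [{ date: "2021-06-14T09", qty: 5 }, { date: "2021-06-15T10", qty: 2 }],
    [{ day: "2021-06-14", cases: 7 }],
);
// joined[0].cases → 7, joined[1].cases → null
```

DateKey plays the role of the `slice(0, 10)` truncation here: it produces comparable keys at the chosen interval unit on both sides of the join.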

Update pipeline components

As before, we need to update the project to generate the related gen files before using the process pipelines:

$ edk update
✔ update succeeded
Important

For reusable solutions we recommend that datasource and pipeline steps are applied as individual projects per datasource. The advantage of separation is the creation of partial ELARASchema artefacts containing datasources along with cleansing, processing and preparation for structure definition, as well as diagnostic pages in a UI.

Add diagnostic pages

Now that we have everything ready for structure definition, we can add diagnostic pages to the UI to view the outputs of our pipelines. We can do this by adding a PipelinePlugin to the application that was created earlier:

// ...
import structure_pipelines from '../../gen/structure_pipelines.plugin';
import time_aggregate from '../../gen/time_aggregate.plugin';
export default ELARA.Schema(
    ApplicationPlugin({
        name: "Gift Shop",
        schemas: {
            // ...
            // the PipelinePlugin is a function that creates a partial solution
            //  containing an application page showing the data associated with each
            //  pipeline input.
            TimeAggregates: PipelinePlugin({
                pipelines: time_aggregate.pipeline
            }),
            StructurePipelines: PipelinePlugin({
                pipelines: structure_pipelines.pipeline
            })
        },
        // ...
    })
)

If we now ran edk build, followed by edk deploy demo, to deploy the application in its current state, we would see more content; note particularly that the visible data now includes our collection type expression values:

Create structure

Now that we have some data for the structure, we can define the business processes, resources and agents to optimise either as instances or mappings as described in the structure instances overview.

Create scenarios

Before we create some structure entities, we should create some scenarios as described in the structure properties overview. We will create two scenarios:

$ edk add scenario --name "Baseline" --def_dir src/scenario
✔ add scenario succeeded
$ edk add scenario --name "Optimised" --def_dir src/scenario
✔ add scenario succeeded

This will create two scenario definitions; we will assume they are independent and therefore won't change the definition:

import * as ELARA from "@elaraai/edk/lib"
export default ELARA.ScenarioSchema({
    name: "baseline",
})

Create money resource

First we will create a resource for the money in the business, using the following command:

$ edk add structure resource --concept "Money" --def_dir src/structure
✔ add resource succeeded

Now that we have an empty money resource, assuming the enterprise has a single account called 'Cheque', we will add a property for the balance over time.

import * as ELARA from "@elaraai/edk/lib"
import { Temporal } from "@elaraai/edk/lib"
// the ResourceStructureSchema creates a partial schema containing a resource component and associated tables
export default ELARA.ResourceStructureSchema({
    // a structure entity always needs a concept
    concept: "Money",
    instances: {
        // the instance name - or in other words the marker
        Balance: {
            properties: {
                // the balance will change over time; we want the resolution
                //  of the output to be hourly
                Balance: Temporal({
                    initial_value: 0,
                    sampling_unit: 'hour',
                })
            }
        }
    }
})
Important

As noted in the properties overview ELARA uses the structure entities and properties to simulate the distribution over outcomes of properties. One aspect of this is the probabilistic nature of a Temporal property, which is really capturing the probability distribution of the balance of money over time. While most often for reporting purposes we only care about the mean value of the distribution, it's important to note what's behind the value.

Create product resource

Next we will create a resource for products:

$ edk add structure resource --concept "Products" --def_dir src/structure
✔ add resource succeeded

Given that we want a product per row in the products table, we will define this resource with a mapping:

import * as ELARA from "@elaraai/edk/lib"
import { Multiply, Option } from "@elaraai/edk/lib"
import baseline from "../../gen/baseline.scenario"
import structure_pipeline_plugin from "../../gen/structure_pipelines.plugin"
const products = structure_pipeline_plugin.pipeline.Products.output_table
export default ELARA.ResourceStructureSchema({
    concept: "Products",
    mapping: {
        input_table: products,
        properties: {
            Price: Option({
                default_value: products.fields.Price,
                manual: [{
                    scenario: baseline,
                    min: Multiply(products.fields.MinPrice, 0.9),
                    max: Multiply(products.fields.MaxPrice, 1.1),
                }],
                sensitivity: [{
                    scenario: baseline,
                    min: Multiply(products.fields.MinPrice, 0.9),
                    max: Multiply(products.fields.MaxPrice, 1.1),
                }]
            })
        }
    }
})
Important

As noted in the properties overview, ELARA uses an Option to describe the decisions in a business, with behaviour determined by the kinds of an Option.

For example, expressing the Option as a manual kind will enable the price for each product to be manually changed by the user or an external system. In other words users may manually explore how changing the prices would change likely business outcomes.

Expressing the Option as a sensitivity kind tells ELARA to generate a Table describing how much each product price affects the business outcomes. In other words, ELARA will tell the user which prices should be changed in order to achieve optimal business outcomes.

Later we will explore an automatic option, which would tell us not only which prices to change for which products, but what each price should be at a particular time in order to achieve optimal business outcomes.

Create supply resource

Next we will create a resource for supplies:

$ edk add structure resource --concept "Supplies" --def_dir src/structure
✔ add resource succeeded

As before for the products, we will define this resource with a mapping:

import * as ELARA from "@elaraai/edk/lib"
import { Multiply, Option } from "@elaraai/edk/lib"
import baseline from "../../gen/baseline.scenario"
import structure_pipeline_plugin from "../../gen/structure_pipelines.plugin"
const supplies = structure_pipeline_plugin.pipeline.Supplies.output_table
export default ELARA.ResourceStructureSchema({
    concept: "Supplies",
    mapping: {
        input_table: supplies,
        properties: {
            Cost: Option({
                default_value: supplies.fields.Cost,
                manual: [{
                    scenario: baseline,
                    min: Multiply(supplies.fields.MinCost, 0.9),
                    max: Multiply(supplies.fields.MaxCost, 1.1),
                }],
                sensitivity: [{
                    scenario: baseline,
                    min: Multiply(supplies.fields.MinCost, 0.9),
                    max: Multiply(supplies.fields.MaxCost, 1.1),
                }]
            })
        }
    }
})

Create sales process

Now that we have a money resource, we will build on the structure by adding the sales process:

$ edk add structure process --concept "Sales" --def_dir src/structure
✔ add process succeeded

We can define the sales process as a mapping from the plugin sales output:

import * as ELARA from "@elaraai/edk/lib"
import {
    Add, AddAll, AddDict, DayOfWeek, DictType, GetProperties, GetProperty, Hour,
    IsNotNull, IsNull, MLFunction, Month, MultiplyDict, Null,
    ProcessMapping, Property, Replace, WeekOfMonth
} from "@elaraai/edk/lib"
import metrics from "../../gen/metrics.structure"
import money from "../../gen/money.structure"
import products from "../../gen/products.structure"
import structure_pipeline_plugin from "../../gen/structure_pipelines.plugin"
const sales = structure_pipeline_plugin.pipeline.Sales.output_table
```typescript
export default ELARA.ProcessStructureSchema({
    concept: "Sales",
    mapping: ProcessMapping({
        input_table: sales,
        date: sales.fields.Date,
        properties: {
            // we should create some features that might be relevant
            //  to demand, for example sales might vary throughout the day
            HourOfDay: Hour(sales.fields.Date),
            DayOfWeek: DayOfWeek(sales.fields.Date),
            WeekOfMonth: WeekOfMonth(sales.fields.Date),
            MonthOfYear: Month(sales.fields.Date),
            // get the current account balance - GetProperty will get
            //  a number value at the time of the sale
            CurrentMoneyBalance: GetProperty({ property: money.properties.Balance }),
            // get the current weekly sales qtys
            CurrentWeeklySalesQty: GetProperty({ property: metrics.properties.WeeklySalesQty }),
            // get the product prices - GetProperties will get the price
            //  from all product instances
            ProductPrices: GetProperties({ property: products.properties.Price }),
            // set the covid cases
            TotalCovidCases: Replace(sales.fields.TotalCovidCases, Null('integer'), 0n),
            // we don't know what drives the number of sales, so we will let ELARA estimate
            //  based on historic values
            TotalQty: MLFunction({
                value: sales.fields.TotalSalesQty,
                train: IsNotNull(sales.fields.TotalSalesQty),
                predict: IsNull(sales.fields.TotalSalesQty),
                features: {
                    HourOfDay: Property("HourOfDay", "integer"),
                    DayOfWeek: Property("DayOfWeek", "integer"),
                    WeekOfMonth: Property("WeekOfMonth", "integer"),
                    MonthOfYear: Property("MonthOfYear", "integer"),
                    TotalCovidCases: Property("TotalCovidCases", "integer"),
                    ProductPrices: Property("ProductPrices", DictType("float")),
                }
            }),
            // the total amount is just each price * qty
            TotalAmount: MultiplyDict(
                Property("TotalQty", DictType("integer")),
                Property("ProductPrices", DictType("float"))
            )
        },
        // the events will set the values of external properties
        events: {
            // add the sum of the TotalAmounts per product to the
            //  current money balance
            AdjustMoney: {
                property: money.properties.Balance,
                value: Add(
                    Property("CurrentMoneyBalance", "float"),
                    AddAll(Property("TotalAmount", DictType("float")))
                ),
            },
            // add the amount per product to the week amount per product
            AdjustWeeklySalesQty: {
                property: metrics.properties.WeeklySalesQty,
                value: AddDict(
                    Property("CurrentWeeklySalesQty", DictType("float")),
                    Property("TotalAmount", DictType("float"))
                ),
            }
        }
    })
})
```
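The dict expressions in the sales process combine per-product values key by key. As a rough illustration of the intended semantics only, here is a plain TypeScript sketch (the lower-case helper names mirror the ELARA expression names but are local stand-ins, not the library implementation):

```typescript
// Illustrative semantics only: local sketches of the dict expressions
// used in the sales process, not the ELARA runtime implementation.
type Dict = Record<string, number>;

// MultiplyDict: key-wise product of two dicts (e.g. qty * price per product)
function multiplyDict(a: Dict, b: Dict): Dict {
    const out: Dict = {};
    for (const key of Object.keys(a)) {
        if (key in b) out[key] = a[key] * b[key];
    }
    return out;
}

// AddAll: sum of all values in a dict (e.g. total amount across products)
function addAll(d: Dict): number {
    return Object.values(d).reduce((sum, v) => sum + v, 0);
}

// AddDict: key-wise sum of two dicts (e.g. accumulating weekly sales)
function addDict(a: Dict, b: Dict): Dict {
    const out: Dict = { ...a };
    for (const key of Object.keys(b)) {
        out[key] = (out[key] ?? 0) + b[key];
    }
    return out;
}

const qty = { mug: 3, shirt: 1 };
const price = { mug: 10.0, shirt: 25.0 };
const amount = multiplyDict(qty, price); // per-product sale amounts
const total = addAll(amount);            // total added to the money balance
```

This mirrors how `TotalAmount`, `AdjustMoney`, and `AdjustWeeklySalesQty` above compose: multiply quantity by price per product, sum across products for the balance event, and accumulate per product for the weekly metric.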
Important

As noted in the properties overview, ELARA uses an MLFunction to simulate the distribution over the values of a property from its historic values and input features. Under the hood, ELARA creates automatically tuned machine learning functions based on the Property (or output) type.

Given that an MLFunction can predict values based on any feature type, they become an important part of rapid prototyping, since a business activity can be modelled without needing to understand the detail of what is actually undertaken. To avoid creating a black box, creating an MLFunction also creates related diagnostic information such as feature ranks, estimation error, and samples.

Conveniently, MLFunction properties behave just like any other Property, so it is common to chain them together or mix them with other kinds of property such as OptionProperty, ValueProperty, or other FunctionProperty variants, any of which may also relate to a TemporalProperty.
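The `train` and `predict` conditions in the sales MLFunction partition the rows: records with an observed `TotalSalesQty` train the model, and records where it is null receive predictions. A minimal plain-TypeScript sketch of that partitioning (illustrative only; the row shape is a hypothetical stand-in, not an ELARA type):

```typescript
// Illustrative: split rows by whether the target value is present,
// mirroring the IsNotNull / IsNull train/predict conditions above.
interface SaleRow {
    features: Record<string, number>;
    totalSalesQty: number | null; // null means "not yet observed"
}

function partition(rows: SaleRow[]): { train: SaleRow[]; predict: SaleRow[] } {
    return {
        // observed rows are used to fit the model
        train: rows.filter(r => r.totalSalesQty !== null),
        // unobserved rows are the ones the model must estimate
        predict: rows.filter(r => r.totalSalesQty === null),
    };
}
```

The same historic/unknown split is what lets a single property definition serve both as training data and as the forecast target.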

Create an objective#

In order to let ELARA do its job, we will also define the overall objective for the business by making a small change to the existing Money resource:

```typescript
import * as ELARA from "@elaraai/edk/lib"
import { Temporal } from "@elaraai/edk/lib"

// the ResourceStructureSchema creates a partial schema containing
//  a resource component and associated tables
export default ELARA.ResourceStructureSchema({
    // a structure entity always needs a concept
    concept: "Money",
    instances: {
        // the instance name - or in other words the marker
        Balance: {
            properties: {
                // the balance will change over time, we want the resolution
                //  of the output to be hourly
                Balance: Temporal({
                    initial_value: 0,
                    sampling_unit: 'hour',
                    objective: ELARA.Property("Balance", "float")
                })
            }
        }
    }
})
```

Add diagnostic pages#

Now that we have a structure definition, we can add diagnostic pages to the UI to view the outputs. To get sufficient visibility of the structure, we will make the following changes:

  • Resources: add a page that allows us to manually change the product price, view its sensitivity, and show the resulting money balance.
  • Processes: add a diagnostic page for the `MLFunction` in the sales process to understand the accuracy of predictions.

The changes will be made to the existing ApplicationPlugin using the OptionsPlugin and the MLFunctionPlugin:

```typescript
// ...
import sales_structure from '../../gen/sales.structure'
import money_structure from '../../gen/money.structure'
import products_structure from '../../gen/products.structure'
import baseline from '../../gen/baseline.scenario'

export default ELARA.Schema(
    ApplicationPlugin({
        name: "Gift Shop",
        schemas: {
            // ...
            // the MLFunctionPlugin is a function that creates a partial solution
            //  with a page showing diagnostic information for MLFunctions, including
            //  the feature importance and sample accuracy.
            MLFunctions: MLFunctionPlugin({
                func: sales_structure.properties.TotalQty.function,
                prepend: 'MLFunction'
            }),
            // the OptionsPlugin is a function that creates a partial solution
            //  with a page allowing manual option changes, showing sensitivity
            //  as well as optionally an output temporal resource value over time.
            Options: OptionsPlugin({
                name: "Price Option",
                manual: baseline,
                sensitivity: baseline,
                output: money_structure.properties.Balance,
                options: {
                    ProductPrices: products_structure.properties.Price
                }
            })
        },
        // ...
    })
)
```

Summary#

Now that we have run through how to develop a solution, more detailed usage of the EDK can be found in the API reference.