With the release of Enate AI's latest offering - AI Analyst - we're taking a significant step forward to let you seamlessly integrate AI-driven activities throughout your business process.
We're partnering with Microsoft on this to use the power of their very latest OpenAI technology right at the heart of things. So if you can ask OpenAI to perform a task, with EnateAI Analyst you can embed that to run automatically as part of your business process flow.
You can add AI Analyst Actions throughout your cases, asking the AI to analyse documents which you supply. This can massively reduce the time spent wading through huge data files performing intricate analysis, freeing up time for more valuable work.
The possibilities here are almost endless, and the power you've got at your fingertips is matched only by how simple it is to set up. There's no coding and you don't have to change a thing - just tell the system what the business rules are to run an analysis task and it will get on with it.
An important thing to note here: for the moment, this feature is being released in BETA only. As such, it should not yet be used for full production purposes. You can, however, start to test it out with real scenarios.
Here's how you can get started setting up AI Analyst
Adding AI Analyst into your business processes is very simple to set up. Once you've switched on the 'AI Analyst' integration in Builder's Marketplace section, any time you want to create a new AI Analyst action to perform a specialist analysis activity, the steps are as follows:
Create a new AI Policy in the AI Analyst Configuration section of System Settings in Builder
Test this policy with sample data until you're happy with the output, then Set Live.
Add 'AI Analyst' actions into your case process, linking this to your new AI Policy. (Note: You will need to add a manual action directly after the AI Analyst action)
Creating a new AI Policy is simple - no code is required. You can simply write out the business rules / logic / policy for the activity in normal business language and the AI will understand it. You can easily get started by porting the details of your business policy directly into an Enate AI Policy.
Take a look at some sample policy prompts to see what a policy might look like.
Go to the Marketplace section of Builder and filter down to 'AI Analyst'. Activate the EnateAI - AI Analyst Integration
Go to the 'AI Analyst Configuration' section of System Settings and click 'Create a Policy'. This will display a new AI Policy for you to start filling in with details of the analysis activity you want the AI to undertake for you. Remember, you can write this in normal business terms (see the prompts section for examples of this).
Here is the information you can define when setting up a new AI Policy:
Name - give your Policy a sensible name so it can easily be identified in a list of other Policies, e.g. 'Invoice / Credit Note Reconciliation'.
Input File Tags - At runtime the AI will analyse one or more documents as its input. You can test with sample files while you build, but at runtime you need to tell the system which files to grab. Setting the file Tags here tells the AI: 'at runtime, grab the files in the Action which have these tags, and use them as your source for analysis'. Examples might be: 'Bank File', 'HR Update', 'State Tax Rules'.
Output File Tag - If your policy instructions ask for output to be provided in a file, you may want to tag that output file too, for easier use by other systems downstream. Example: 'AI Reconciled'.
AI Persona - For best results when creating a policy with instruction prompts, it's good to give the AI as much context as you can. One important way to do this is to say what kind of person it should act as, e.g. 'Do this analysis activity as if you were a bank clerk', or an HR executive, or an Accounts Payable expert. You should either define a new persona here for your policy, or pick from the existing list if the relevant persona has already been defined.
Instructions for AI - This is where the details of your instructions to the AI will go. This can simply be a copy/paste of your company policy for carrying out the activity, the rules and regulations for what to do, and how you'd like to receive the output.
AI Creativity Level - This will produce subtly different output depending on the setting. You can choose to play around with it depending on what type of analysis you're asking for. It defaults to a 'Balanced' setting, but there are options to make the responses more creative or more precision-focused.
A well-defined persona for your AI Analyst activity helps the AI do a better job when analysing and returning data to you. If the persona you're looking for isn't in the list to choose from, you should define one for this policy. At runtime, the AI will use this as input along with the more detailed instructions when determining what to do.
Here's where the main part of the input instructions to the AI gets defined. Remember, you don't need to write this as code; in fact it works much more effectively if you don't. If you've got existing rules and regulations which define the task, paste them in here and test your output.
When you're writing instructions that make heavy reference to, for example, Excel sheet columns, you'll obviously have to be detailed and precise enough to refer to them accurately. A good guide is still to write it the way you would explain it to someone you wanted to carry out the activity - the example below mixes detailed column references with a more human "it won't be a perfect match but it should appear in there somewhere".
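As an illustration (the column letters and names here are invented purely for the example), such an instruction might read: "Take the value in column D ('Transaction Reference') of the bank file and look for it in column F ('Description') of the master file - it won't be a perfect match but it should appear in there somewhere. Flag any row where no such match can be found."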
Be clear about exactly what you want the AI to do, and how you'd like to receive your output. For examples and notes on how to write good AI prompts for activities such as this, check out this section.
While there are no fixed rules on how you format your instructions, if you want to make explicit reference to any of your input documents, you can do so using a {{FileTag:NAME}} format. For example, if you've created a tag called 'Bank', you can refer to this document in your instructions as {{FileTag:Bank}}.
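For instance, if your two input files were tagged 'Bank' and 'Masterfile' (tag names used here purely for illustration), an instruction might read: "Compare each transaction row in {{FileTag:Bank}} against the entries in {{FileTag:Masterfile}} and list any rows which do not match."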
For more information and samples on how to write instructions, check out the link below:
Once you're happy with all your policy input settings, the next step is to test it.
You'll be asked to upload a sample document for each input file tag you've specified. Once you've uploaded these you can run your test. Depending on the size of the files or the complexity of the prompt you've written, it could take a few minutes before you get a response, but once you do, you can analyse the results.
If you've requested the output in a certain file format you should see that file as part of your output; otherwise you'll see a text response from the AI. If the results show that some tweaking is needed, you can go back to your policy settings, make some adjustments and test again. Once you're happy, you can set the policy Live.
Once you've set your new AI policy Live, all you need to do now is add an AI Analyst action into your case flow.
As part of the configuration, set your new AI Policy as the one which this action should use.
Additional Requirement: When adding an AI Analyst Action into a Case flow, you MUST also add a further action immediately after it in your flow which would allow an Agent to review the output of the AI Action. This can be an action of type 'Manual', 'Manual with Peer Review' or 'Approval'. If you do not add an action like this immediately downstream of the AI Action, you will see a validation message when saving the Case process.
While the AI Analyst feature is released in Beta only, it should not be used for full production purposes, although it can of course be used to test the functionality. For now, the feature can be used with the following known limitations, which will reduce over time as the underlying AI technology beds in:
Multiple output files cannot currently be generated
If Azure functions time out, the AI Analyst action's status will remain set as 'In Progress' because the Azure function terminates abruptly (this should not be a problem in a production environment)
The AI currently reads a maximum of 100 rows, and is dependent on server availability (files with more than 100 rows of data are currently not allowed)
If the AI fails to make a decision or to complete a task defined in the policy, it will provide an error file (only if you have specified this in the AI policy prompt)
The following file formats are currently supported: ['c', 'cpp', 'csv', 'docx', 'html', 'java', 'json', 'md', 'pdf', 'php', 'pptx', 'py', 'rb', 'tex', 'txt', 'css', 'jpeg', 'jpg', 'js', 'gif', 'png', 'tar', 'ts', 'xlsx', 'xml', 'zip']
Here are some specific sample business scenarios where EnateAI's AI Analyst can be used. For each, the business scenario is given, along with the sample input and the details to add into your AI Policy:

Sample Policy - Hotel Investment Case
Scenario: Create content for an investment case document which contains useful information about building a new hotel in a given location.
Input File: Hotel Performance File
Input File Tag: Hotel Performance
Output File Tag: AI Output Hotel File
AI Persona: Business Analyst - 'You are an experienced business analyst working in the hotel industry. You write detailed assessments and provide recommendations.'
AI Creativity Level: Balanced

Sample Policy - Expense Report / Credit Card Reconciliation
Scenario: Perform data reconciliation between two input Excel documents, one an expenses report and the other a credit card statement. Create a new file as output with a summary of data from each of the input files.
Input Files: Expense Report File, Credit Card Statement
Input File Tags: Expense Report, Credit Card Statement
Output File Tag: Expense Report AI Output
AI Persona: Data Analyst - 'You are an experienced data analyst working in a company finance department. You help handle expense reports filed by company employees.'
AI Creativity Level: Balanced
Many more sample prompts will be added over the coming weeks and months
Here are some more general examples of AI Prompts from OpenAI to explore. These give a much wider view of the possibilities available with AI prompts beyond focused business situations, and may well be useful for exploring 'the art of the possible'.
For detailed guides to best practice for prompt engineering with OpenAI, check out these resources:
Here are some recommendations for creating more effective prompts to get the output you want
Write Clear Instructions - Include details to get more relevant answers
Put work into defining an accurate persona to adopt
Use delimiters to clearly indicate distinct parts of the input
Split Complex tasks into Simpler subtasks - Specify the steps required to complete the task
Specify the desired length of the output
In order to get a highly relevant response, make sure that requests provide any important details or context. Otherwise you are leaving it up to the model to guess what you mean.
Less effective: How do I add numbers in Excel?
More effective: How do I add up a row of dollar amounts in Excel? I want to do this automatically for a whole sheet of rows with all the totals ending up on the right in a column called "Total".
Less effective: Who’s president?
More effective: Who was the president of Mexico in 2021, and how frequently are elections held?
Less effective: Summarize the meeting notes.
More effective: Summarize the meeting notes in a single paragraph. Then write a markdown list of the speakers and each of their key points. Finally, list the next steps or action items suggested by the speakers, if any.
The persona definition goes a long way to helping give context and suggested style to what the AI model will come up with. Time spent adding extra layers to the persona is time well spent.
Delimiters like triple quotation marks, section titles, etc. can help demarcate sections of text to be treated differently. Example:
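For instance (an illustrative prompt only, not taken from the product): Summarize the text delimited by triple quotes in a single paragraph. """<paste the source text here>"""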
Just as it is good practice in software engineering to decompose a complex system into a set of modular components, the same is true of tasks submitted to a language model. Complex tasks tend to have higher error rates than simpler tasks. Furthermore, complex tasks can often be re-defined as a workflow of simpler tasks in which the outputs of earlier tasks are used to construct the inputs to later tasks. Example:
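For instance (purely illustrative), rather than asking in one go to 'reconcile the bank statement and write a summary report', the instructions could be broken into steps: first extract the transactions from the bank file; then match each transaction against the master file; finally list any unmatched rows and summarise them in the output file.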
You can ask the model to produce outputs that are of a given target length. The targeted output length can be specified in terms of the count of words, sentences, paragraphs, bullet points, etc. Note however that instructing the model to generate a specific number of words does not work with high precision. The model can more reliably generate outputs with a specific number of paragraphs or bullet points. Examples:
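For instance (illustrative only), 'Summarize the findings in about 3 bullet points' or 'Provide your recommendation in two short paragraphs' will be followed more reliably than 'Summarize the findings in exactly 87 words'.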
Sample Policy - Bank Statement Reconciliation
Scenario: Match entries from two different Excel documents - bank statement transactions from one system against entries in a Master file from another system. The output should be a list of the transactions (rows in the Excel files) which do not match.
Input Files: Bank Transaction File, Master File
Input File Tags: Transactions, Masterfile
Output File Tag: AI Output Bank File
AI Persona: Bank Clerk - 'You are a bank clerk who works on file reconciliation queries.'
AI Creativity Level: Balanced