Microsoft Teams Bot - How to achieve the impossible? - Part 2

March 13, 2018 / Bots, AI, Cognition

Implementation of the bot

The previous blog entry gave an overall description of the bot’s functionality, and now we can move on to the implementation. To achieve that, we need to make multiple parts of the puzzle work together, and the first step is logging in to Office.

Logging in to Office

To log in to Office 365 we need an authorization endpoint on our side to get a token, and an Azure Active Directory application that defines what kind of permissions the application will get when the user grants it consent. There is no need to explain the process once again – everything is explained right there.

What is important, however, is that the application type is supposed to be “Web app/API”.

When the application is created, it is important to write down the Application ID and to create a Key – both will be required for the application to work. Also, you need to define your authentication endpoint URL in the app, to make sure that using your AD application a user can log in only to your web (bot) application.

When you trigger an action that starts the login mechanism, you need to send some data to the Azure authorization endpoint to get a code, and then – an authorization token. The flow is explained right there, and an example implementation is nicely presented here.
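To make the first step of that flow concrete, here is a minimal sketch of building the authorization URL for the Azure AD v1 endpoint. This is an illustration only – the client id, redirect URI and resource are placeholders, and you would substitute the values from your own AD application registration:

```csharp
// A minimal, illustrative sketch of building the Azure AD (v1 endpoint)
// authorization URL. The client id, redirect URI and resource below are
// placeholders – substitute the values from your own AD application.
using System.Net;

public static class AuthUrlBuilder
{
    public static string BuildAuthorizeUrl(string clientId, string redirectUri)
    {
        // The user is redirected to this URL; after consenting, Azure AD calls
        // redirectUri back with a ?code=... parameter, which the bot then
        // exchanges for an access token at the /token endpoint.
        return "https://login.microsoftonline.com/common/oauth2/authorize"
            + "?response_type=code"
            + "&client_id=" + WebUtility.UrlEncode(clientId)
            + "&redirect_uri=" + WebUtility.UrlEncode(redirectUri)
            + "&resource=" + WebUtility.UrlEncode("https://graph.microsoft.com");
    }
}
```

The redirect URI passed here must match the authentication endpoint URL registered on the AD application, otherwise Azure AD will refuse to redirect back with the code.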


Understanding user messages with LUIS

How should you greet the bot? Should you say just “hi”? Or maybe “hello”? And what if you want to trigger some particular action? Should you rely on keywords? Regular expressions, or maybe a simple “contains” check on the string? But what if the user means something quite different, yet those keywords are present as well?

To help with that comes LUIS, which is the Language Understanding Intelligent Service. Based on the provided utterances, it (he?) can guess, with some level of certainty, what the intention of a message was.

Let’s take a look at an example intent – in this case, the one for starting the whole dialog about notifications. That is the moment when the user has to log in to Office, etc. At the start, LUIS asks for some examples of what a user might say to trigger that particular task.

But one thing at a time. First, you need to create a new intent for the particular action, simply by typing its name:

When the intent is created, you need to type some example utterances that may trigger this action:

In this example they are as follows:

When the utterances are added, the application needs to be trained before it will be able to understand anything. This can be done by simply clicking the “Train” button in the upper right corner of the screen:

When training is complete, the application needs to be published; that can be done from the “Publish” tab.

So, without any surprise, when the user types “log time on teamwork”, this action is triggered:

Let’s try something more difficult. What about “Dear Bot, could you please notify me when events in my calendar ends, so I could with your assistance log time on Teamwork?”:

In this case the confidence is even greater. Someone might say that some phrases from the base utterances are repeated in my question, and that the intent was selected based on them. That’s most certainly true! So let’s try asking without using any of the words included in the base utterances.

What about “dear bot, let me know, when happening from my calendar ends, so I could save that information on time tracking service”.

It still works! How cool is that? In some cases the bot might understand something incorrectly, but that’s not a problem – as you can see in the screenshots above, the opened dropdown is for assigning an utterance to an intent. So we can always point out that a particular utterance should be connected with some intent. LUIS will train on that, and will remember 🙂

The possibilities of LUIS are even greater. It can recognize important parts of a message (like numbers, locations, etc.), which can then be used as parameters – but that’s not important in our case right now.

And how do you make use of that? It’s surprisingly easy. You just need two values – the LUIS App ID (it can be found in the application settings) and the API key (it can be found in the profile settings, or you can simply take it from the endpoint URL – it’s that GUID).

Then, just create a class that derives from the LuisDialog class, add some attributes, and you are good to go!

Now you can create methods with the LuisIntent attribute, whose value has to be the same as the intent name in the LUIS web panel:

When the bot receives a message that LUIS recognizes, the corresponding action will be triggered.
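To show the shape of such a dialog, here is a minimal sketch, assuming the Bot Framework v3 SDK (Microsoft.Bot.Builder). The GUID placeholders and the intent name are illustrative, not taken from the original application:

```csharp
// A sketch of a LUIS-backed dialog, assuming the Bot Framework v3 SDK.
// "<LUIS App Id>", "<API Key>" and the intent name are placeholders.
using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;

[Serializable]
[LuisModel("<LUIS App Id>", "<API Key>")]
public class NotificationsDialog : LuisDialog<object>
{
    // The attribute value must match the intent name defined in the LUIS portal.
    [LuisIntent("NotifyAboutCalendarEvents")]
    public async Task NotifyAboutCalendarEvents(IDialogContext context, LuisResult result)
    {
        await context.PostAsync($"Starting the notifications dialog (query: {result.Query}).");
        context.Wait(this.MessageReceived);
    }

    // Fallback for messages LUIS could not match to any intent.
    [LuisIntent("")]
    public async Task None(IDialogContext context, LuisResult result)
    {
        await context.PostAsync("Sorry, I didn’t understand that.");
        context.Wait(this.MessageReceived);
    }
}
```

The empty-string LuisIntent attribute marks the handler for the built-in “None” intent, so the bot always has somewhere to land when recognition fails.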


Gathering data with FormFlow

So we now have a token and a bot that responds to particular messages; next, we need the bot to gather the information that is important from the time-logging perspective. In this case we need to get info about the project, the task list and the task itself. It surely could be done manually, by parsing strings, but why implement something from scratch when you can just use a ready (well, almost ready) solution!

FormFlow is a really interesting tool for such things. It enables the bot to gather the information defined in a model, and basically that model is all we need – FormFlow will take care of everything else. So, for example, let’s say we want to ask the user about their name, surname, gender and birth date. It’s a really simple example, but it will also be a great one to explain the idea.

First, we need a model. It differs from any other model by the Serializable attribute and the BuildForm method.

public enum Gender
{
    Male = 1,
    Female = 2
}

[Serializable]
public class UserDetails
{
    public string Name { get; set; }
    public string Surname { get; set; }
    public Gender Gender { get; set; }
    public DateTime BirthDate { get; set; }

    public static IForm<UserDetails> BuildForm()
    {
        return new FormBuilder<UserDetails>()
            .Message("Please provide some details about yourself.")
            .Build();
    }
}

And that’s all. The model is extremely simple. Then you need to use that model in a dialog:

var enrollmentForm = new FormDialog<UserDetails>(new UserDetails(), UserDetails.BuildForm, FormOptions.PromptInStart);
context.Call(enrollmentForm, Callback);

So let’s see how the bot will react to that:

Isn’t that cool? All we made was a simple model, and that’s all! We didn’t create any buttons, we didn’t write any text other than the welcome message, and FormFlow handled everything for us. What’s more, it also parsed and validated the values (“10th of july 1980” isn’t the most typical date format, is it?). But that’s not all! Let’s type “help” during the conversation and see what happens:

We can select values in different ways (click a button, type the value, or give its index), we can go back or quit, or see the current status.

Just great! At the end of the previous conversation I typed that everything was OK. What if I said that something was wrong after all?

And all of that using only a simple model.

So much for the example – let’s get back to our case.

For our purposes we can’t just use enums, as projects and tasks differ over time and between users. So we need to load those values dynamically, and what’s more – each next value depends on the previous ones. Luckily, the FormBuilder is quite flexible.

We can call our remote endpoint when a field is being defined, and add the received values as options:

return new FormBuilder<LogTimeModel>()
    .Field(new FieldReflector<LogTimeModel>(nameof(LogTimeModel.ProjectName))
        .SetActive(state => true)
        .SetDefine(async (state, field) =>
        {
            if (state.ProjectName == null)
            {
                var projects = await this.TeamworkConnector.GetAllProjects();
                foreach (var project in projects)
                {
                    field.AddDescription(project.ProjectId, project.ProjectName)
                         .AddTerms(project.ProjectId, project.ProjectName);
                }
            }
            return true;
        }))
    .Build();

The state object carries the context with the currently selected values. Thanks to that, we can download the required data at the right moment. When everything is completed, we can process the resulting values – that’s the place where the data is saved to Teamwork.
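That final processing step can be attached to the form itself. A hedged sketch, again assuming the v3 FormFlow API – TeamworkConnector.LogTime and the TaskName/Time fields are hypothetical stand-ins for the actual model and Teamwork call:

```csharp
// A sketch of hooking the completion step into the form, assuming the
// Bot Framework v3 FormFlow API. TeamworkConnector.LogTime and the
// TaskName/Time properties are hypothetical placeholders.
return new FormBuilder<LogTimeModel>()
    // ... dynamic field definitions as above ...
    .OnCompletion(async (context, state) =>
    {
        // All fields are filled in at this point – push the entry to Teamwork.
        await this.TeamworkConnector.LogTime(state.ProjectName, state.TaskName, state.Time);
        await context.PostAsync("Your time has been logged on Teamwork.");
    })
    .Build();
```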

And that’s all – case closed.

In part 3 of this blog series there will be an example of application monitoring, as well as solutions for some (not so) common problems I’ve come across during bot application development, which may come in handy.

Microsoft Teams Bot – How to achieve the impossible? – Part 1
Microsoft Teams Bot – How to achieve the impossible? – Part 3
Microsoft Teams Bot – How to achieve the impossible? – Part 4