Azure Data & AI MVP Challenge - Part 4

This week, after covering the basics of artificial intelligence, the MVP Challenge took me into slightly more advanced Data & AI terrain, exploring the different cognitive services that can be used through Azure. Here is a summary of what I learned!

The first part is available here: Part 1
The second part is available here: Part 2
The third part is available here: Part 3
The fourth part: you are here
The fifth part is available here: Part 5

Azure Content Moderator

It all started with a service I had not yet seen, which allows moderation of images, text and video, assisted by AI of course. This service can detect profanity or explicit content, for example, and can even return a classification of the explicit elements, indicating whether a human review of the content is recommended. In a world where moderation teams are often overwhelmed, this kind of service can ease the burden by letting people review only what has been flagged as potentially offensive. Since everything can be used via an API, it is fairly simple to set up; a very complete example to try it out is also available here.
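To give an idea of what a call looks like, here is a minimal sketch of the text screening operation using Python and the requests library; the resource endpoint, key and sample text are placeholders, and the response fields shown reflect my understanding of what the API returns:

```python
import requests

# Placeholders: replace with your Content Moderator resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<subscription-key>"

text = "Some user-generated text that may contain crude language."

# Screen the text: classify it and flag whether a human review is recommended.
response = requests.post(
    f"{ENDPOINT}/contentmoderator/moderate/v1.0/ProcessText/Screen",
    params={"classify": "True", "language": "eng"},
    headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "text/plain"},
    data=text.encode("utf-8"),
)
result = response.json()

classification = result.get("Classification", {})
print("Review recommended:", classification.get("ReviewRecommended"))
print("Flagged terms:", result.get("Terms"))
```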

Sentiment analysis

Another great discovery was sentiment analysis using the Text Analytics API. I had briefly seen how it works in a previous lesson, but this time I saw it in a much more concrete example: putting a queueing system in place with Azure Queue Storage and an Azure Function that receives messages, determines whether they are positive, negative or neutral, and then transfers them to the appropriate queue in Queue Storage depending on the result.
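As a rough illustration of that flow, here is a minimal sketch of a queue-triggered Azure Function in Python; the queue names, application settings and the use of the azure-ai-textanalytics and azure-storage-queue packages are my assumptions, not the exact code from the lesson:

```python
import os

import azure.functions as func
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential
from azure.storage.queue import QueueClient

# Placeholders: Text Analytics endpoint/key and the Storage connection string
# are expected in the Function App's application settings.
text_client = TextAnalyticsClient(
    endpoint=os.environ["TEXT_ANALYTICS_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["TEXT_ANALYTICS_KEY"]),
)
STORAGE_CONNECTION = os.environ["AzureWebJobsStorage"]


def main(msg: func.QueueMessage) -> None:
    """Triggered by the incoming queue; routes each message by sentiment."""
    text = msg.get_body().decode("utf-8")

    # Ask Text Analytics whether the message is positive, negative or neutral.
    sentiment = text_client.analyze_sentiment(documents=[text])[0].sentiment
    if sentiment not in ("positive", "negative"):
        sentiment = "neutral"

    # Hypothetical target queues: messages-positive, messages-negative, messages-neutral.
    queue = QueueClient.from_connection_string(STORAGE_CONNECTION, f"messages-{sentiment}")
    queue.send_message(text)
```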

Speech Service

Again, something I had seen earlier in the month, but this time more concretely, using the different APIs to consume the services. Whether converting audio to text or translating, API usage is much the same: we build the object we need (a SpeechRecognizer for speech-to-text, for example) from a SpeechConfig to connect to the service on Azure and an AudioConfig to choose, say, an audio file instead of the microphone. All that remains is to consume the result of the method we call, which takes care of invoking the cognitive service that does the work.
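For example, speech-to-text from an audio file with the Python Speech SDK looks roughly like this (the key, region and file name are placeholders):

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders: replace with your Speech resource key and region, and your own file.
speech_config = speechsdk.SpeechConfig(subscription="<speech-key>", region="<region>")
audio_config = speechsdk.AudioConfig(filename="recording.wav")

# The SpeechRecognizer ties the service connection and the audio source together.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

# recognize_once() sends the audio to the cognitive service and returns the result.
result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized:", result.text)
else:
    print("No speech recognized:", result.reason)
```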

Image and video analysis

I also returned to image analysis from several angles, among others with the Face API. Seeing that it can all be done through SDKs makes it much more accessible. For example, a few lines of code like the sketch below make it possible to detect emotions, glasses and smiles, as well as the different identification points (landmarks) on an image:
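Here is a minimal sketch of that idea with the Python Face SDK, assuming a Face resource endpoint and key and a publicly reachable image URL (all placeholders):

```python
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

# Placeholders: replace with your Face resource endpoint, key and an image URL.
face_client = FaceClient(
    "https://<your-resource>.cognitiveservices.azure.com",
    CognitiveServicesCredentials("<face-key>"),
)

# Ask for landmarks plus the emotion, glasses and smile attributes.
faces = face_client.face.detect_with_url(
    url="https://example.com/portrait.jpg",
    return_face_landmarks=True,
    return_face_attributes=["emotion", "glasses", "smile"],
)

for face in faces:
    attrs = face.face_attributes
    print("Smile:", attrs.smile)
    print("Glasses:", attrs.glasses)
    print("Happiness:", attrs.emotion.happiness)
    print("Left pupil:", face.face_landmarks.pupil_left.x, face.face_landmarks.pupil_left.y)
```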

Video analysis is something I hadn't seen before, and it brings together many of the concepts behind the cognitive services. In short, the Azure Video Indexer service lets you take videos and extract scenes, identify faces, recognize on-screen text, and even transcribe the audio and analyze the emotions in it. In the end, we end up with a ton of information that would allow us, for example, to search videos by their content.
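For illustration, here is a rough sketch of the upload-and-index flow through the Video Indexer REST API; the account id, location, API key and video URL are placeholders, and the exact parameters and response fields reflect my understanding of the v2 API:

```python
import requests

# Placeholders from the Video Indexer portal / API key page.
LOCATION = "trial"
ACCOUNT_ID = "<account-id>"
API_KEY = "<api-key>"

# 1. Get an account access token.
token = requests.get(
    f"https://api.videoindexer.ai/Auth/{LOCATION}/Accounts/{ACCOUNT_ID}/AccessToken",
    params={"allowEdit": "true"},
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
).json()

# 2. Upload a video by URL; indexing starts automatically.
video = requests.post(
    f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos",
    params={"accessToken": token, "name": "demo", "videoUrl": "https://example.com/demo.mp4"},
).json()

# 3. Once processing is done, fetch the insights (scenes, faces, OCR, transcript, emotions...).
insights = requests.get(
    f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos/{video['id']}/Index",
    params={"accessToken": token},
).json()
print("Indexing state:", insights.get("state"))
```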

Language Understanding

On this side, I had the chance to see a few more concepts than what I wrote about last week. For example, there are prebuilt domains that we can add to our LUIS projects with a few clicks, which takes care of adding a large selection of intents and entities to our model without us having to create them ourselves.

I also saw that it is easy to export a LUIS application to a Docker container, with an option available directly in the export menu. An important thing to know, though, is that there are some limitations to deploying a LUIS app in a container, all of which are listed in the documentation.

Conclusion

This week was much busier in terms of using the different APIs, but extremely interesting because, beyond understanding the concepts seen over the last few weeks, I was able to see how easy they are to use with the different SDKs available. I don't have a new learning collection this week; instead I updated the ones I already had, so the best way to see the added items is to check them out:

Collection on Data & AI: See the collection
Collection on Machine Learning: See the collection
Collection on image analysis: See the collection
Word Analysis Collection: See the collection

Bruno

Author: Bruno

By day, I am a developer, team leader and co-host of the Bracket Show. By evening, I am a husband and the father of two wonderful children. The time I have left after all of this I spend trying to get moving, playing and writing video game reviews, as well as preparing material for the Bracket Show recordings and for this blog. Through it all, I am also passionate about music and microbreweries.