This post is the eleventh and final post documenting the steps I went through on my journey to build an autonomous, voice-controlled, face-recognizing drone. You can find the other 10 posts in the series at the end of this post.
Focus of this post
In this post I will share a video of the complete end-to-end demo and walk through the architecture behind it. I will also list, in a single place, the hardware I bought and all of the software, services and node packages used to bring this together.
Pulling It All Together
A lot of what we have been doing with this project is humanizing the way we communicate with machines. That means talking and observing to drive intelligent interaction, rather than using a mouse, keyboard or touch screen.
Our autonomous, voice-controlled, face-recognizing drone is a smart drone which showcases, albeit crudely, how interaction with intelligent services is going to evolve. It highlights how important cognitive services will be to the success of organizations in the future.
So with that said, take a look at the entire end-to-end demo in the video below.