Can help with:
- so much data
- so many devices
- not feasible to save to Elasticsearch first (real time!)
Stream analysis with reactive programming
In a data-driven architecture, the processors for the high-performance message bus benefit from being written in Rx.
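To make that concrete, here is a minimal sketch of what such a processor could look like with RxJS. The event shape and the windowed error count are made up for illustration; this is not a specific Mantis or Netflix API:

```typescript
import { Subject } from "rxjs";
import { bufferTime, filter, map } from "rxjs/operators";

// Hypothetical event shape; the real bus would define its own schema.
interface DeviceEvent {
  deviceId: string;
  status: "ok" | "error";
  latencyMs: number;
}

// The message bus pushes events into this subject.
const events$ = new Subject<DeviceEvent>();

// A processor: count errors per device over 1-second windows,
// without ever persisting the raw stream to Elasticsearch.
events$
  .pipe(
    filter((e) => e.status === "error"),
    bufferTime(1000),
    map((errors) => {
      const perDevice = new Map<string, number>();
      for (const e of errors) {
        perDevice.set(e.deviceId, (perDevice.get(e.deviceId) ?? 0) + 1);
      }
      return perDevice;
    })
  )
  .subscribe((counts) => console.log("errors/sec per device:", counts));
```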
Who uses it?
- Mantis is designed for operational use cases where message guarantee levels vary by job, so some jobs can choose at-most-once guarantees while others choose at-least-once guarantees via Kafka. We are able to saturate the NIC on the servers for operational use cases with very little CPU usage.
- Built-in back pressure that allows Mantis to seamlessly switch between push, pull, or mixed modes based on the type of data source (see the sketch after this list)
- Support for a mix of long-running perpetual analysis jobs along with user-triggered short-lived queries in a common cluster
- Since the volume of data to be processed at Netflix varies tremendously by time of day, being able to autoscale workers in a job based on resource consumption, plus the ability to scale the cluster as a whole, was a key requirement. None of the existing streaming frameworks provided such support.
- We wanted more control over how we schedule the resources so we can make smarter allocations like bin packing etc. (that also allows us to scale the jobs)
- Deep integration with the Netflix ecosystem that allows filtering the event stream at the source of the data.
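Mantis's own API is not shown here, so the following toy sketch only illustrates the push vs. pull distinction the back-pressure bullet refers to: a push source sets the pace itself, a pull source is drained at the consumer's pace, and a buffer can bridge the two. All names are made up:

```typescript
// Push mode: a hot source calls us whenever it has data; the producer sets the pace.
function pushSource(onData: (n: number) => void): void {
  setInterval(() => onData(Date.now()), 10); // ~100 events/sec
}

// Pull mode: a cold source hands out data only when the consumer asks.
function* pullSource(): Generator<number> {
  while (true) yield Date.now();
}
const puller = pullSource();
console.log("pulled on demand:", puller.next().value); // consumer sets the pace

// Mixed mode: buffer pushed events and drain them at the consumer's own speed.
const buffer: number[] = [];
pushSource((n) => {
  buffer.push(n);
  if (buffer.length > 1000) buffer.shift(); // a real system would drop or sample under load
});
setInterval(() => {
  const n = buffer.shift(); // consume ~10 events/sec
  if (n !== undefined) console.log("processed", n);
}, 100);
```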
Currently I use the following IIB tools to automate as much as possible and to be able to scale in (multiple 😉) seconds; if you have tools I should have a look at, please share them with me:
Today the cognitive hackathon at IBM IoT ended, and my personal result was: bad. I did not make it into the top 3 of 8 teams. That was frustrating and I was disappointed with myself. So …
The task was to build a cognitive car concierge service. Our result was a chatbot that had
– an active voice interface (you could have a dialog without clicking all the time to speak),
– predictive notifications (about your fuel) based on your destination and real-time vehicle data,
– it would also find the nearest Drive Now vehicle,
– it could turn on your car (simulated by a Hue light, but this could be a call to the CAN bus of a real car),
– it could interact with Google Maps to find the best route,
– it had JWT-based authentication in every microservice (see the sketch after this list),
– it could schedule calls with your personal call center agent (you don’t need to call; based on your problem, your customer relationship data, and the agent’s calendar, the best slot would be found), as well as reschedule them.
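The JWT check mentioned above is easy to sketch. Here is a minimal, hypothetical version using Express and the jsonwebtoken package; the route, secret handling, and payload are placeholders, not our actual hackathon code:

```typescript
import express, { Request, Response, NextFunction } from "express";
import jwt from "jsonwebtoken";

const SECRET = process.env.JWT_SECRET ?? "dev-only-secret"; // placeholder

// Middleware every microservice mounts: reject requests without a valid token.
function requireJwt(req: Request, res: Response, next: NextFunction): void {
  const header = req.headers.authorization ?? "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : "";
  try {
    (req as any).user = jwt.verify(token, SECRET); // attach verified claims
    next();
  } catch {
    res.status(401).json({ error: "invalid or missing token" });
  }
}

const app = express();
app.get("/vehicle/fuel", requireJwt, (_req, res) => {
  res.json({ fuelLevel: 0.42 }); // would come from real-time vehicle data
});
app.listen(3000);
```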
Well, in the end I did not perform up to my own expectations, but at least I learned some things:
– reality check: I am better than 1 year ago, but still mediocre compared to the best in the field. I will have to work harder.
– frontend is important: My failure was missing a WOW frontend; I should keep practicing at least some frontend in the future.
– I don’t present very well: I had not done this for some time and I seemed totally off my game, like a 4-year-old. I have to fix this fast. I have to do more public vlogs and learning videos, and go to hackathons where I present.
– it is all show: Technology does not count; as long as you have a great mockup that seems interactive, it is enough. If you’re forced to use technologies you don’t like, just change the game and present a mockup of how you would do it while focusing on the tech you like.
– starter kits matter: I definitely have to improve my starter kit to be more extensible and easier on the eye; the analytics/data component is also essential.
– only consider jobs where you have people better than you: Even though I failed miserably overall, it seems from a tech perspective I was one of the better ones. In my job I focus on learning, so I need to be on a team where most of the people are better than me.
– Now I will just put on some EDM and work hard, and next time, which will be my second hackathon, I will be better.
Let’s get down to implementing the persistent high-performance message bus integration (based on the work Blizzard has done) into the starter kit.
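Assuming “the work Blizzard has done” refers to their node-rdkafka client and the bus is Kafka, a first sketch of the integration could look like this; the broker address and topic name are placeholders:

```typescript
import * as Kafka from "node-rdkafka";

// Producer side: messages are persisted by the Kafka brokers.
const producer = new Kafka.Producer({
  "metadata.broker.list": "localhost:9092", // placeholder broker address
});
producer.connect();
producer.on("ready", () => {
  producer.produce(
    "starter-kit-events", // placeholder topic
    null,                 // let Kafka choose the partition
    Buffer.from(JSON.stringify({ type: "hello" })),
    null,                 // no message key
    Date.now()
  );
});

// Consumer side: subscribe and process each persisted message.
const consumer = new Kafka.KafkaConsumer(
  { "group.id": "starter-kit", "metadata.broker.list": "localhost:9092" },
  {}
);
consumer.connect();
consumer.on("ready", () => {
  consumer.subscribe(["starter-kit-events"]);
  consumer.consume();
});
consumer.on("data", (msg) => {
  console.log("received", msg.value?.toString());
});
```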
Situation: More and more trolls impact free speech online by creating an artificial public opinion, enraging the crowd with hate comments, or misleading us with fake news.
This must stop, as it is currently the biggest threat to free speech and the free internet. If we cannot prevent artificial bots from manipulating public opinion, the only way out is to restrict the free, open, and anonymous internet. To me that would be a horrible result, as that openness is the foundation of tremendous innovation in so many areas.
Solution: Use an AI that checks all comments/posts on major social network platforms like Facebook. The AI does not prevent you from posting your opinion, but identifies troll behavior as well as “fake stories” and displays a troll and fake probability KPI beside every post. This way we can still share everything without restricting free speech, while giving readers an indication of how trustworthy the source is. Furthermore, when one clicks on the KPI number, it shows why the number is what it is, e.g. a list of news sources for and against the “fake story”.
How: Trolls, who use inflammatory statements to provoke a reaction, can be difficult for a human to detect. Solutions like IBM Personality Insights can indicate, based on the commenter’s personality traits, whether the sentiment is being judged accurately, and so help determine whether the message is ironic, sarcastic, or deceptive.
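A sketch of how that personality signal could be pulled in, assuming the ibm-watson Node SDK for Personality Insights; the troll heuristic at the end is entirely made up for illustration:

```typescript
import PersonalityInsightsV3 from "ibm-watson/personality-insights/v3";
import { IamAuthenticator } from "ibm-watson/auth";

const service = new PersonalityInsightsV3({
  version: "2017-10-13",
  authenticator: new IamAuthenticator({ apikey: process.env.WATSON_APIKEY ?? "" }),
  serviceUrl: process.env.WATSON_URL ?? "", // placeholder, set to your instance URL
});

// Hypothetical heuristic: treat low agreeableness in a commenter's
// aggregated comment history as one input to the troll probability KPI.
async function trollSignal(commentHistory: string): Promise<number> {
  const { result } = await service.profile({
    content: commentHistory,
    contentType: "text/plain",
  });
  const agreeableness = result.personality.find(
    (t) => t.trait_id === "big5_agreeableness"
  );
  return agreeableness ? 1 - agreeableness.percentile : 0.5;
}
```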
Idea: This could actually be built as a Chrome plugin for everyone to download, even if Facebook itself does not introduce such a KPI.
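The plugin itself could be little more than a content script that finds posts and injects the KPI badge. A rough sketch, where the scoring endpoint and the DOM selector are placeholders:

```typescript
// content-script.ts: runs on facebook.com pages (declared in manifest.json).
// The scoring endpoint is hypothetical; it would wrap the AI described above.
const SCORING_API = "https://example.com/score"; // placeholder

async function annotatePost(post: HTMLElement): Promise<void> {
  const text = post.innerText.slice(0, 2000);
  const res = await fetch(SCORING_API, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  const { trollProbability } = await res.json();

  // Display the KPI beside the post.
  const badge = document.createElement("span");
  badge.textContent = `troll risk: ${(trollProbability * 100).toFixed(0)}%`;
  badge.style.cssText =
    "margin-left:8px;padding:2px 6px;background:#fdd;border-radius:4px;";
  post.prepend(badge);
}

// Selector is a placeholder; Facebook's DOM changes frequently.
document.querySelectorAll<HTMLElement>('[role="article"]').forEach(annotatePost);
```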
What do you think, is this a good idea?