- Forget about follower numbers: don’t follow anyone back you don’t want to collaborate with.
- Use lists: there are interesting/informative accounts out there – don’t follow them, put them on a themed list.
- Follow: only people I want to work/collaborate with, or who are part of my regional ecosystem.
- Spammy: tweeting more than 5 times a day.
- Trivial: I don’t care about the menu on your flight.
- Selling: 80% of your stuff is just about how great you are.
- Non-interactive: you are not interested in conversation.
- Non-doing: you talk too much but do too little.
- Interesting: Work on something interesting and talk about it.
- Collaborative: work on it with others (me?).
- Fun: From time to time it is ok to be trivial and tweet about fun things.
I have not followed this myself so far, but I will hold myself to this standard and hope to contribute to a better experience for everyone. The first thing I did: I deleted my automated Industry 4.0 summary on paper.li that spammed my timeline and unfollowed accounts that were spamming me – and now I’m hoping to have more meaningful conversations.
Feedback always welcome!
Can help with:
- log everything
- so much data
- so many devices
- not feasible to save to Elasticsearch first (real time!)
Stream analysis with reactive programming
In a data-driven architecture, the processors on the high-performance message bus benefit from being written in Rx.
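To make this concrete, here is a minimal sketch (in Python, since the post names no language) of the push-based Observable model that Rx libraries like RxJava or RxJS provide. The `Observable` class and the log-event pipeline are hypothetical illustrations, not a real Rx API – a bus processor would compose `filter`/`map` operators over a pushed event stream in exactly this shape.

```python
# Hypothetical sketch of the Rx push model: a source pushes events to a
# subscriber callback, and operators like map/filter compose into a
# processing pipeline, as a message-bus processor would.

class Observable:
    def __init__(self, subscribe):
        self._subscribe = subscribe  # function taking an on_next callback

    def subscribe(self, on_next):
        self._subscribe(on_next)

    def map(self, fn):
        # Wrap the downstream callback so every item is transformed first.
        return Observable(lambda on_next: self._subscribe(
            lambda x: on_next(fn(x))))

    def filter(self, pred):
        # Only forward items that satisfy the predicate.
        return Observable(lambda on_next: self._subscribe(
            lambda x: on_next(x) if pred(x) else None))

def from_iterable(events):
    # Source observable that pushes each event synchronously on subscribe.
    return Observable(lambda on_next: [on_next(e) for e in events])

# Pipeline: keep only ERROR log events and extract their messages.
events = [
    {"level": "INFO", "msg": "started"},
    {"level": "ERROR", "msg": "disk full"},
    {"level": "ERROR", "msg": "timeout"},
]
errors = []
(from_iterable(events)
    .filter(lambda e: e["level"] == "ERROR")
    .map(lambda e: e["msg"])
    .subscribe(errors.append))
print(errors)  # → ['disk full', 'timeout']
```

The point of the shape: operators build a new pipeline stage without buffering the stream, which is why this style suits "log everything, too much data to store first" workloads.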
Who uses it?
- Mantis is designed for operational use cases where message-guarantee levels vary by job: some jobs choose at-most-once guarantees while others choose at-least-once guarantees via Kafka. We are able to saturate the NIC on the servers for operational use cases with very little CPU usage.
- Built-in back pressure allows Mantis to seamlessly switch between push, pull, or mixed modes based on the type of data source.
- Supports a mix of long-running perpetual analysis jobs and user-triggered short-lived queries in a common cluster.
- Since the volume of data to be processed at Netflix varies tremendously by time of day, being able to autoscale workers in a job based on resource consumption, and to scale the cluster as a whole, was a key requirement. None of the existing streaming frameworks provided such support.
- We wanted more control over how we schedule the resources so we can do smarter allocations like bin packing (which also allows us to scale the jobs).
- Deep integration with the Netflix ecosystem allows filtering the event stream at the source of the data.
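The "pull mode" half of the back-pressure point above can be sketched as well. This is a hypothetical illustration of the Reactive Streams `request(n)` idea (not Mantis code): the subscriber signals how many items it can handle, and the producer emits at most that many instead of pushing unconditionally.

```python
# Hypothetical sketch of pull-based back pressure (Reactive Streams style):
# the subscriber signals demand with request(n); the producer never emits
# more than the outstanding demand, so a slow consumer is never flooded.

class Subscription:
    def __init__(self, source, on_next):
        self._it = iter(source)
        self._on_next = on_next

    def request(self, n):
        # Emit at most n items; a slow consumer simply requests less often.
        for _ in range(n):
            try:
                self._on_next(next(self._it))
            except StopIteration:
                return

received = []
sub = Subscription(range(100), received.append)
sub.request(3)   # consumer is ready for 3 items
sub.request(2)   # ... then asks for 2 more
print(received)  # → [0, 1, 2, 3, 4]
```

A fast in-memory source can stay in push mode; switching to this demand-driven mode when the consumer lags is what "seamlessly switch between push, pull or mixed modes" means in practice.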
Currently I use the following IIB tools to automate as much as possible and to be scalable in (multiple 😉) seconds. If you have tools I should have a look at, please share them with me:
- Unit/Integration Testing: irontest, anst-framework (not open source)
- Build: SBB/maven-plugin v9 / SBB/maven-plugin v10
- Code Coverage: IAM2: WebSphere Message Broker Toolkit – ESQL Code Coverage, JUnit for Java
- Docker: iib-bestpractice-runtimes
- Static code analysis: sonar-esql-plugin / sonar-msgflow-plugin
Today the cognitive hackathon at IBM IoT ended, and my personal result was bad: I did not make it into the top 3 of 8 teams. That was frustrating, and I was disappointed in myself. So …
The task was to build a cognitive car concierge service. Our result was a chatbot that had:
– an active voice interface (you could have a dialog without clicking all the time to speak),
– predictive notifications (about your fuel) based on your driving target and the real time vehicle data,
– it would also find the nearest DriveNow vehicle,
– it could turn on your car (simulated by a Hue light, but this could be a call to the CAN bus of a real car),
– it could interact with Google Maps to find the best route,
– it had JWT-based authentication in every microservice,
– it could schedule calls with your personal call-center agent (you don’t need to call: based on your problem, your customer-relationship data, and the agent’s calendar, the best slot would be found), as well as reschedule them.
Well, in the end I did not perform (to my own expectations), but at least I learned some things:
– reality check: I am better than 1 year ago, but still mediocre compared to the best in the field. I will have to work harder.
– frontend is important: my failure was missing a WOW frontend; I should keep practicing at least some frontend in the future.
– I don’t present very well: I had not done this for some time and I seemed totally off my game, like a 4-year-old. I have to fix this fast. I have to do more public vlogs and learning videos, and go to hackathons where I present.
– it is all show: technology does not count; as long as you have a great mockup that seems interactive, it is enough. If you’re forced to use technologies you don’t like, just change the game: present a mockup of how you would do it, but focus on the tech you like.
– starter kits matter: I definitely have to improve my starter kit to be more extensible and easier on the eye; the analytics/data component is also essential.
– only consider jobs where you have people better than you: even though I failed miserably overall, it seems from a tech perspective I was one of the better ones. Since in my job I focus on learning, I need to be in a team where most of the people are better than me.
- Now I will just put some EDM in my ears and work hard, and next time, which will then be my second hackathon, I will be better.
Let’s get down to implementing the persistent high-performance message bus integration (based on the work Blizzard has done) into the starter kit.
- Problem: REST is a standard way to communicate with servers to retrieve data. It provides a specification based on the entities present in our database. When done correctly, it can be more than adequate; when done wrong, it can be a living hell.
- Implement HATEOAS [what is HATEOAS?] (Hypermedia as the Engine of Application State) and get a nice system that is flexible and easy enough to work with
- simpler alternatives like GraphQL
- GraphQL is an open-source project from Facebook that presents power with a simple idea: instead of the application server defining the structure of responses, the client is given the power and flexibility to request exactly the data that it needs. GraphQL responses are tailored to the specific use case the client is implementing, eliminating wasteful data transfer and future-proofing your API for use cases that your application hasn’t even encountered yet.
- Is GraphQL a flash in the pan? Technology is hard to predict, but GitHub’s recent preview launch of their GraphQL API is super encouraging, and an interesting study.
- Open Questions:
- How to combine GraphQL with an event-driven backend?
- One query actually might indicate multiple events …
- GraphQL might be a good candidate for implementing the Event Sourcing/CQRS pattern. The fact that GraphQL offers two different operation types – queries and mutations – maps directly onto the Event Sourcing basics of separating reads and writes, which gives a good foundation to explore this pattern, alongside other advantages. [link]
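The queries/mutations ↔ reads/writes mapping can be sketched in a few lines. This is a hypothetical, language-agnostic illustration (names like `mutate_add_item` are invented, and no real GraphQL library is used): a mutation resolver appends an event to the log (write side), while a query resolver folds the log into a read model (read side).

```python
# Hypothetical sketch: GraphQL's query/mutation split mapped onto
# Event Sourcing/CQRS. A "mutation" appends an event to an append-only
# log (write side); a "query" reads a projection folded from that log
# (read side). One GraphQL mutation may well emit multiple events.

event_log = []  # append-only write side

def mutate_add_item(name, qty):
    # What a GraphQL mutation resolver would do: record intent as an event.
    event_log.append({"type": "ItemAdded", "name": name, "qty": qty})

def query_inventory():
    # What a GraphQL query resolver would do: fold events into a read model.
    inventory = {}
    for event in event_log:
        if event["type"] == "ItemAdded":
            inventory[event["name"]] = inventory.get(event["name"], 0) + event["qty"]
    return inventory

mutate_add_item("widget", 2)
mutate_add_item("widget", 3)
mutate_add_item("gadget", 1)
print(query_inventory())  # → {'widget': 5, 'gadget': 1}
```

In a real system the projection would be materialized incrementally rather than recomputed per query, but the split itself is the point: the GraphQL schema cleanly separates the two sides.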