Data Visualization in its art form. Know the future that only god knew.

Listen to the Abstract of the session or read the text below.

With the exponential growth of data, a new kind of problem is also on the rise: large volumes of useless data residing within useful data. Meaningful data needs to be filtered out before any analysis. An independent study suggests that core information residing within voluminous data can even change the course of lives. Health technology, for instance, was a forerunner of innovation in the past; the data stored within those mega systems contains core information that can predict outcomes, if extracted in its pure form. The sad part of the story is that such core information is hidden within impure, unwanted or useless data, and so analysis becomes difficult.
The second part of the story is that once this data is filtered out, it resides as a huge ocean with no regularities. Here comes visualization in its art form.
This 45-minute session attempts to discuss this by throwing a base idea on the wall and drilling down further.
Imagination is more important than knowledge!
Register by tweeting me @sunnymenon. Only 25 seats available. The intended outcome is a design and a move towards an open-source framework, and thereafter diffusing the innovations across the technical, data science and visualization communities.


Protect your code, whether open source or not; DevOps, continuous integration and what not.

If you are battling to keep up with trends in the software industry by continuously releasing enterprise products, adding more functionality and improving user experiences, then someone needs to focus on protecting the code. Somebody needs to look into the process and methodology of how this code gets built and rolled out quickly while still being protected.
Welcome to the world of coding, coders and a whole million lines of code.

“Oh, I am not a monster, it's just that I am ahead of the curve” ~ The Joker in Batman. 🙂

Into the space of what is usually done by the operations team within large enterprises, a term called “DevOps” has now set its large foot. Code is not only protected, but deployments are happening continuously. What is being seen, though, is that there are no huge benefits to a two-week roll-out by itself. According to an independent study, if those two weeks of roll-out are not carefully planned, you are bound to constraints that will extend the life cycle of the project in general. The end result that you would have attained within, say, X amount of time will cost you X+N amount of time. So check the plans. A small tab sheet for the DevOps work is helpful anyway.

Do you have that tab?

When Georgi from a startup company came to me asking for help in safeguarding a huge code base, I didn't realize that the damage had already been done. There were things these guys had to do, and do real quick, before the situation spun out of hand. You see, Georgi was smart enough to approach me with his little doubts about issues he thought might pop up. It was a wise move. So here is the use case and the solution provided. The total cost involved, number of people, timelines and the methodology adopted for implementation and delivery have been highlighted. Read on.

Scenario:-

  1. 16 developers, with an additional 7 developers working remotely.
  2. 5 testers, one onsite and the rest remote.
  3. Four different components: the front-end UI, an event broker/messaging layer serving as the transport between the front end and the back-end platform, the database access layer, and finally the platform itself. 32 integration checkpoints, including connectors to search repos and analytics repos, and posting of data to 30 other internal and external systems.
  4. A two- to three-week delivery cycle. Technically, something was rolled out every two weeks, as per Georgi, who was managing all these activities and was the direct report.
  5. Backups were not taken on a schedule but on demand.
  6. SVN and Git local repos. However, the code was not synced up, as multiple developers were checking out code from the same branch and the same class files.
  7. The development environment used Java, JavaScript, Node.js, RabbitMQ, MySQL, Solr, Lucene, ZooKeeper, Spark, Spring Boot, Hibernate, Redis, NGINX and Tomcat. Containerization was done using CoreOS and Docker; eventually they were planning to consolidate on one. I am evaluating CoreOS and Docker for them to see which suits their environment better.
  8. Currently their performance is assessed through custom-built performance tools, and network access and speeds are tested through common tools and simulated users originating from outside the firewalls.
  9. The build tool is Maven combined with Gradle, with Chef recipes, Bamboo and Jenkins interacting with Git through hooks. There is also a bug-reporting tool for which integration has been requested. SVN stands alone, with all JavaScript being posted there; the reason given was, “We began version control and management with SVN.”
  10. Cloud enabled.

Solution Visual


Dockerization – scalable architecture for high-volume web applications, AND an open statement to the enterprises.

With containerization gaining momentum and Kubernetes promising better deployment models, a question arises first: what kind of model should one support when it comes to large-scale streaming applications?
High-volume use cases are understood by all; many have talked about them, and others have been able to deploy high-volume, highly scalable architecture models.
Dockerization, or containerization, is much needed within the healthcare and financial domains for sure, not to mention manufacturing and retail. However, feeble and shabby manual installations still prevail within such organizations.
It is sometimes sad, pitiable and annoying to see such haphazard and immature deployments. People who have deployed such models would have been better off without them, and better off investing in learning about containerization. Nothing but words and BS flow from supposed architecture discussions, while consultants, engineers and vendor companies struggle through, spending half the time battling compliance and policy issues and, above all, jurisdiction problems such as who will do what and so forth.

Reminds me of a quote :-
“People who think they know everything are a great annoyance to those of us who do” ~ Isaac Asimov.

It's OK not to know; but it is completely stupid to oppose a budding model and kill it instantly.
As technology shifts and evolves into something beautiful, it is important for people to embrace aggressive technical discussions and set aside the emotional disturbances, personal agendas and issues that can interfere. Let there be emotional intelligence rather than personal emotional turbulence. Not many come to the enterprise to marry, and hey, nobody is there to fight with people either.
Writing it off by saying such stupidity will always exist in this world has now come to a point where one really needs to “think differently”. Isn't it time to “think differently”?

A search on the internet for scalable models or large-scale deployments gives architecture descriptions and YouTube-like places spending time on the details of containerization internals and so forth. But not much exists in the form of diagrams or architecture models that can simply be adopted and used. Individuals with eagerness and enthusiasm have been shot down during these discussions and have lost their flame. Let me stop before the subject diverts.

See if the model below makes sense. Please feel free to give your opinions. If you have questions, do write them here. Collaborate for the sake of knowledge, THEN COLLABORATE FOR THE SAKE OF MONEY.

<Please leave a comment or question. Feel free to give your opinion.>

******Architecture diagram below******

Thanks,
Sincerely,
Sunny

A few points to note.
1. This is a single unit within a cluster (I don't want to call it a worker, because there are multiple workers within multiple containers here).
2. The number of boxes DOES NOT represent the number of components such as HAProxy or NGINX. Containerization here takes Docker into perspective; changing the box to CoreOS may not be as efficient a model.
3. The number of nodes is provided as an approximation (125 nodes).
4. Within a Swarm environment, the number of nodes can be reduced. This model depicts a single-unit cluster (a worker in generic terms, though I do not want to call it a WORKER node).
5. This is to be viewed from an application standpoint and not from an INFRASTRUCTURE standpoint.

Leave your comments, whatever they may be. It is from the many mistakes that one learns a lot. If you ask, I will reply.

Thanks again.


Eight things to consider in the design of an Apache Spark Hadoop ecosystem.

There are 8 things to keep in mind while designing an Apache Spark-enabled application. Porting to a Spark Hadoop ecosystem is an important step, dictated by the need for streaming capabilities and extreme speed of execution. Apache Spark distributes its work across a cluster and can be used with HDFS, making it a composite architecture. Unless you understand the business process and the incoming data, it would be inefficient to build such an architecture. Remember, from big data volumes comes value, NOT traditional reports.

1. Spark relies on in-memory execution of tasks and in-memory storage. Because of this, it is important that you design your system with this thought in mind; processes need to be built with this in view.
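As a minimal sketch of this point, written in plain Java (which also speaks to point 2 below): cache only the datasets that more than one action actually reuses. The input path and filter terms here are illustrative assumptions, not taken from any real system.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.storage.StorageLevel;

public class CacheAwareJob {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("cache-aware-job");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Illustrative input path only.
        JavaRDD<String> events = sc.textFile("hdfs:///data/events.log");

        // The filtered set feeds two separate actions, so keep it in memory
        // and spill to disk only if it does not fit, instead of recomputing it.
        JavaRDD<String> errors = events
                .filter(line -> line.contains("ERROR"))
                .persist(StorageLevel.MEMORY_AND_DISK());

        long total = errors.count();
        long timeouts = errors.filter(line -> line.contains("timeout")).count();

        System.out.println(total + " errors, " + timeouts + " of them timeouts");
        sc.stop();
    }
}
```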

2. These days, writing in Java can be more efficient from a resource standpoint, and Java handles its own concurrency well. Just because several APIs are built in Scala does not necessarily mean they will speed up your execution. Therefore it is worthwhile to think about writing in Java (the sketches in this post are written in plain Java for that reason).

3. Since Spark, whether in the cloud or standalone, uses in-memory space for data and executors, think about the heap size. Continuously increasing heap sizes just to get a job to run may reduce efficiency.
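A hedged configuration sketch for this point; the sizes are assumptions for a mid-sized job, not recommendations, and the overhead key shown is the Spark 2.3+ name (older YARN deployments use spark.yarn.executor.memoryOverhead).

```java
import org.apache.spark.SparkConf;

public class HeapSizedJobConf {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("heap-sized-job")
                // Per-executor JVM heap; measure before raising it further.
                .set("spark.executor.memory", "4g")
                // Off-heap headroom for the executor container.
                .set("spark.executor.memoryOverhead", "512m");
        System.out.println(conf.toDebugString());
    }
}
```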

4. Using user memory is not recommended unless your architecture really demands it for some core, extremely high-speed streaming need, such as detecting fraudulent activity across a huge segment, OR a failure of a system within your APPLICATION cluster.

5. Take advantage of unified memory management (Spark 1.6.x and above is needed). This type of management uses memory more dynamically: execution and storage can push into each other's share when needed, rather than failing.
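A minimal sketch of the two properties behind unified memory management; the values are roughly the Spark 2.x defaults and are meant as a starting point to tune, not a prescription. Whatever the fraction does not cover is the user memory referred to in point 4.

```java
import org.apache.spark.SparkConf;

public class UnifiedMemoryConf {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("unified-memory-job")
                // Fraction of the heap shared dynamically by execution and storage;
                // the remainder is user memory plus reserved memory.
                .set("spark.memory.fraction", "0.6")
                // Portion of that shared region protected for cached (storage) data;
                // execution may use it while storage does not need it.
                .set("spark.memory.storageFraction", "0.5");
        System.out.println(conf.toDebugString());
    }
}
```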

6. Consider nodes as individual machines. This will help in your infrastructure planning, because every Spark executor in an application has the same fixed number of cores and the same fixed heap size.
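To make the fixed-cores, fixed-heap shape concrete, a hedged sizing sketch; the 10 executors x 4 cores x 4g shape is purely illustrative, and point 7 shows up only in the comment about submitting to YARN.

```java
import org.apache.spark.SparkConf;

public class ExecutorSizingConf {
    public static void main(String[] args) {
        // Illustrative shape only: 10 executors, each with 4 cores and a 4g heap,
        // maps cleanly onto infrastructure planned as individual machines.
        SparkConf conf = new SparkConf()
                .setAppName("executor-sizing")
                .set("spark.executor.instances", "10")
                .set("spark.executor.cores", "4")
                .set("spark.executor.memory", "4g");
        // Per point 7, on a Hadoop cluster this would typically be submitted with
        // `spark-submit --master yarn` rather than a Mesos master URL.
        System.out.println(conf.toDebugString());
    }
}
```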

7. Before using Mesos, consider using Hadoop/YARN.

8. Architecture is an art. So imagine, understand, absorb, design, travel through the design, re-design and architect, test small, test big, implement by deploying it in the cloud (perhaps that is the ideal case), and go live.

Meet me at #DreamForce #df16. Learn how it would benefit you and how to fix up the meeting at .

Thank you.


New Age Modernization, Dev Ops and You.

DevOps initiatives have already reached a turning point within enterprises. Many have set foot in deep waters and don't know it yet. This is mainly because of a lack of involvement from third-party domain experts or external thinkers who can bring fresh thinking into the enterprise. What should have been a change or shift in existing processes to become more agile has ended up as nothing but deployment automation. The way applications are deployed to multiple systems is changing, no doubt, but that alone will not satisfy the need to be agile and lean. This lack of visibility across enterprises is a big problem. Those who can scale well will be the winners in the future; they will be successful. However, to scale you need visibility. One must know that the issue is not a lack of technology or a lack of processes, but the difficulty of scaling well. This can only be addressed by one type of personality: people who are well versed in technology and equally good at understanding the business model and processes within companies; somebody who has vision about that particular domain. For example, a company that sells candy must think about what kids' movies are being released from time to time. For the aforementioned to happen, companies must implement systems or applications, and such applications must foresee the multiple devices coming into the market space. Such is the new global modern marketplace.

Another key factor to look into is the rise of the cloud. It is predicted that in the coming years more and more enterprises will move towards the cloud. Cloud security is being redefined, and this redefinition will give rise to tremendous possibilities for small businesses to move to a cloud environment and do “cloud business”. Given the above scenario, it is very clear that the nature of doing business, and therefore of IT, is evolving into something more complex than developing a few lines of code. Integration, new technological breakthroughs, understanding multiple types of businesses and, most importantly, being able to scale all matter. Within this context, there are a few things companies should do to prepare themselves. One of them, which is being heavily talked about, is how to become agile. For agility, DevOps is being discussed, and so are continuous integration, continuous deployment, and “server deploy”, which pretty much alludes to nothing but deployment to production. But this alone does not trigger change, and change is imperative for all of the above to happen. It has to start somewhere. Here are the top ten things you could think about to ignite the change.

Initiate the change through systems of engagement. The Silicon Valley-based author, advisor and speaker Geoffrey Moore has written about this, defining “systems of engagement”; it touches upon modernization and IT. You can read about it in the wiki, and you can also find videos and presentations on the subject by Geoffrey Moore.
Today, processes are turning into microservices. Traditional web services still exist, but a transformation is going on. This transformation is pushed bottom-up more than anything else: developers are pushing this type of change more than the business is demanding it, so it may often be out of place. While microservices have advantages in themselves, rolling them out sporadically over already existing applications may cause investment drains and operational challenges.
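Purely as an illustration of the kind of service being discussed, here is a minimal sketch assuming a Spring Boot stack (which already appears in the use case earlier on this blog); the service name, endpoint and payload are hypothetical.

```java
import java.util.Map;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// One small, independently deployable service with a single responsibility.
@SpringBootApplication
@RestController
public class OrderStatusService {

    public static void main(String[] args) {
        SpringApplication.run(OrderStatusService.class, args);
    }

    // Hypothetical endpoint: a real service would look the order up in its own store.
    @GetMapping("/orders/{id}/status")
    public Map<String, String> status(@PathVariable String id) {
        return Map.of("orderId", id, "status", "SHIPPED");
    }
}
```

Rolled out this way, each service can be versioned, deployed and scaled on its own, which is exactly where the approach either pays off or, done sporadically, drains investment.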
Bring in new tools and services. The network has become part of the DNA of applications: when you have an application, network participation plays a major part in access and availability. So bring in tools that help in managing several of these components. For example, bring in sophisticated API management tools that are a little different from the existing ones. Check for network awareness and see how well they can detect nodes, and automatically detect APIs and the metadata attached to them. Visualization is another key factor to look for.
Identifying good resources has always been a challenge within software engineering. Who knows the code best is difficult to know at times, and a good coder may not be the appropriate coder for a certain kind of work. Bring in leaders and mentors who can do this task. Inculcate positive feelings within teams; this is important when you are about to make a shift. Remember, a change may be difficult, but the end result of change will be fruitful.
Hold tech-talk-like sessions. When doing so, bring in more people and allow them to talk. Make it interactive; making it interactive requires the organizer to be a good leader. Develop leadership qualities. Follow LinkedIn leaders, influencers and bloggers.
Make meetings a necessity for team members, and make them interesting. A leader must be knowledgeable and must be a leader of leaders, today more than ever, when social media pumps thousands of things to the team members. What you are about to execute may be something they have already seen, and your initiative may be a thing of the past. To keep that from happening, collaborate, discuss, share knowledge. Bring in new vendors and new people, and rub shoulders with giants.
As mentioned above, create a LinkedIn or Twitter account. Add people to your connections or follow them to get their updates. One way to collaborate is by establishing your presence on social media, because collaboration ignites innovation.
Conduct health checks or assessment services. What is the best way to know how good your health is? Simply by a health check, you learn a lot about your hidden treasures and how to utilize them. Constant checking is important. Bring in a third party and get a second opinion or an unbiased view. If you want to do an ONLINE assessment to know where you are, please click here.
Did you know there is a new kind of decision maker? These decision makers don't decide based on facts alone; they make decisions based on the data they see and the future state that data predicts. These kinds of decision makers are in the making, and you should know this as we move up the stack.
A fish inside a fish tank only knows the nicely generated oxygen and the abundant food given to it. What lies in the ocean, its vastness and freedom and the massiveness of its power to splash the water, the fish does not know. I get to know by inviting those who have been there and hearing them talk about the ocean and the blue sky. Whatever I have seen farther out, I have seen by standing on the shoulders of giants. Remember, it is not the end of the world.
For a big data assessment on intuitive visualization, please connect with me. Using various big data visualization tools, applications can be developed that address the data-based, futuristic decision makers. Connect with me.


DevOps Readiness Assessment. Take a three-minute online test.

With the deep rooting of the global economy, business processes are squeezed by the challenge of interacting and handshaking globally. Given this scenario, enterprise applications have become more complex, and composite applications have begun to show cumbersome and shabby loading behaviour, failing without notice even after a rigorous testing process. These failures in production systems happen in complex heterogeneous environments because systems are interacting with external systems like never before. Creating or simulating a typical testing scenario that is global in nature, where third-party applications would run, is one way to test these applications. Yet failures still happen as systems enter a new execution ecosystem where multiple systems race through busy network wires, each competing for its share in fulfilling requests.

In this type of scenario, a new way of activating business processes is desired, wherein constant design, development, continuous testing and deployment to production happen. Teams should be able to collaborate and roll out constantly. Five major accomplishments should be achieved when the aforementioned activity is enabled; the industry is terming this simply DevOps style, or even agile. The five major accomplishments are:-

a. Major or minor errors, glitches or bugs can be resolved instantaneously, without waiting for a new release cycle.

b. Teams should be able to roll out business processes independently without much effort, and the time to roll out such business processes should be considerably reduced.

c. Systems should be adaptable to the changing business dynamics, positioning and strategies of the company; therefore, pushing the business away by giving far-off dates for the activation of such business processes should be avoided.

My upcoming book: subscribe, or please do let me know, to get a copy. The first 25 copies will be free.
EnterpriseDevOpsBook

d. Teams should be able to collaborate with distributed teams. Teams may originate from anywhere across the globe, and this provision must be made. No excuses should be given to the stakeholders.

e. Scrambling to bring up a failed system in production, or keeping it running during a failure, even by operations working with development, MUST be avoided; for that matter, production systems should ONLY be touched for maintenance purposes. For such an environment to be brought up, a slight shift in normal and traditional approaches needs to take place. While deployments do happen in a methodical and process-oriented way within many enterprises, it is important to note that the stable ecosystem mentioned above can be brought up within IT only if you adopt a different culture and are able to accept the CHANGE. This CHANGE, in the way engineering does design, development and deployment today, is rapidly moving towards what is known as a DevOps environment.

The following questionnaire helps you know, perhaps, where in the process you are and whether you are moving in the right direction. A second opinion is what you will get by completing the assessment form below. During the process, you will also get to know the different questions one needs to ask during a DevOps implementation initiative. If the questions and what you read here did not help you in any way, do suggest anything you feel is good and appropriate for the technology community; and if it did help you even at a micro level, please do let me know. I earnestly ask that you send me an email; if you do, I will be ever grateful, and I can provide a fair amount of time to discuss with you over the phone. I will contact you at the email you provided with more details.

Thank you. Please proceed to the form. Click the link below to take the TEST.

https://goo.gl/forms/GrMolzCAKPlWtJny2

PS: For a detailed case study on a DevOps implementation in San Francisco, done at very low cost and in very little time, please let me know by messaging me on LinkedIn.


Big data observation, inference and actionable items, leading to substantial results.

Big data analytics presenting substantial results – infographic. Big data companies must try to attain results from big data analytics. Today, generic reports are being provided by big data companies. Analytics from big data must be futuristic.

bigdatainfographics

Big Data analytics

 


There should be “Intellectual Ratings” for content publishing.

cloudtag

As content creators, small, big and devilish, create and publish content like showers of flame from the mouths of fire monsters, regulators WILL evolve and become more powerful, like the formation of governments at the beginning of the modern era. Regulations are important to begin with. But just as the freeways in many parts were left open for speed, the internet must also open up. Net neutrality must not be the only thing: the provisioning of content must also be opened up much more heavily alongside net neutrality. Yet no common man or individual, through writings or propaganda, can bring regulation to content publishing in the modern digital age, where internet content dominates and more than 40% of knowledge and information sharing is kick-started through the sharing of internet content.

What can be done?
Crappy content, favored content, sponsored content and non-organic content in the form of information (favorably termed “informercial”), rarest-of-rare content, or even absolutely useless unique stunts will proliferate across the internet. Such stunts can be anything from eating era-old bread, to an elephant wearing loose trousers, to girls standing in the rain with no clothes, which to a great extent has already been shown. Societies will slowly start enjoying such useless content and will believe the sham is real. These types of social behavior will transform an intellectual society into nothing but a useless, hollow society which shies away from risk and secludes itself. Selfishness and lethargy will be the driving forces, and innovation will subside to nothing, creating a stagnated world.

Content provisioning through customized filtering for certain regions, geographical filtering, or even personalization (which is the new name for it), if measures are not taken, can confine individuals within those societies and limit human thought. Isn't “the human mind a terrible thing to waste”? Regulations come in many different forms. Philosophical or godly messages told to the common man become more useful than enforcement based on punishments and consequences alone; while the latter must exist, the former can be useful. Regulations, or simply governance, are ONLY there to set and pave the way to a better society, whatever it may be. Therefore, let there be regulations on any type of content provisioning. Today, this can be easy: in order to control the mass, let there be control on the giant enterprises, and these enterprises will enforce regulations on the masses who use the enterprises' tools to publish content. Yes, openness is the ONLY way to have a free world. A ten commandments, or even fewer, that considers the global nature of the internet, where even net neutrality is taken into perspective and evaluated, will help. Let net neutrality not talk about access privileges in terms of speed of access ALONE, but also take into consideration the “visibility of content” to the common man.

May you see what you want to read, and write what others need to see. ~Sunny Menon


World's deadliest animals: what can man do?

Check out who kills the most. Classic representation and great data to share with the society we live in.
Infographic: The World’s Deadliest Animals  | Statista

You will find more statistics at Statista


The word “Love”: the Shakespeare story of #bigdata

  • A #Bigdata spin on the data in Shakespeare's classics provides new insights into his books and concepts. According to queries run over the words Shakespeare used in his books, the famous book of love and romance contains the word “love” only 134 times. While this is a good number, it should also be noted that in “Sonnets” the word love is written 157 times, higher than in “Romeo and Juliet”. “A Midsummer Night's Dream” has it 102 times and “Two Gentlemen of Verona” 147 times. Also note that there is ONLY ONE MORE BOOK, beyond these, where the word is written more than a hundred times: “As You Like It”. According to psychological factors and free-association principles, perhaps, the more a word is used, the less feeling the human mind attaches to it. According to #bigdata interpretations, as you derive #Value from the big data in Shakespeare's books, should you interpret that the book so heavily known as a classic of romance and love is NOT really a book of love? Or should you interpret it as an imaginative mind which stylized and exaggerated the term “love”? Nevertheless, there is data, and now it is time to derive the value. Maybe we will get to know a different Shakespeare who did not always write books of love, and perhaps Romeo and Juliet was not really a book of love. “Wait until dark”, because the blind know more when the light goes off and darkness falls. Now I hear the howling of the jackals and the animals of the jungle; a frog croaked, snakes hissed. Somewhere far off, a distant roar echoed from the dark jungle where leaves shivered and rain fell cold… Enjoy the Halloween.
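For the curious, here is a hedged sketch of the kind of query behind such counts, written for the Spark 2.x Java API; the corpus path is an illustrative assumption, and the numbers quoted above are from the post itself, not from running this code.

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class LovePerPlay {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("shakespeare-love-count");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Illustrative path: one plain-text file per play.
        JavaPairRDD<String, String> plays =
                sc.wholeTextFiles("hdfs:///corpora/shakespeare/");

        // For each play, split the text into words and count occurrences of "love".
        JavaPairRDD<String, Long> lovePerPlay = plays.mapValues(text ->
                Arrays.stream(text.toLowerCase().split("\\W+"))
                      .filter("love"::equals)
                      .count());

        lovePerPlay.collect()
                   .forEach(t -> System.out.println(t._1() + " -> " + t._2()));
        sc.stop();
    }
}
```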
We are such stuff as dreams are made on, and our little life, is rounded with a sleep.
Better three hours too soon than a minute too late.
We know what we are, but know not what we may be. ~ Shakespeare
