
Why Node.js?

Node.js is a JavaScript runtime built on Chrome's V8 JavaScript engine. It uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. I/O here can be anything from reading and writing local files to making an HTTP request to an API. I/O takes time, and in a blocking model it holds up other work, such as printing to the console.





An event is something that has happened in our app that we can respond to. There are two types of events in Node:

  • System events: come from Node's C++ core, via a library called libuv. 
  • Custom events: come from the JavaScript core. 


A Node module is a reusable block of code whose existence does not accidentally impact other code. 


Require does three things: 

  • It loads modules that come bundled with Node.js, like the file system and HTTP modules from the Node.js API. 
  • It loads third-party libraries like Express and Mongoose that you install from npm. 
  • It lets you require your own files and modularize the project. 


These are libraries built by the awesome community that solve most of your generic problems. npm (Node Package Manager) hosts packages you can use in your apps to make development faster and more efficient. 


Here’s a quick step-by-step explanation of how the JavaScript event loop works. 

  • Push main() onto the call stack. 
  • Push console.log onto the call stack. It runs right away and gets popped. 
  • Push setTimeout onto the stack. setTimeout is a Node API.
  • After registering its callback with the API, setTimeout gets popped from the call stack. 
  • The second setTimeout gets registered in the same way. We now have two Node APIs waiting to execute. 
  • After the 0-second wait, the first setTimeout's callback moves to the callback queue, and the same thing happens with the second setTimeout. 
  • The event loop moves callbacks from the queue to the stack only when the stack is empty, so only one statement executes at a time.
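The steps above can be run directly. Even with a 0 ms delay, both timer callbacks wait until the synchronous code has finished:

```javascript
// A runnable version of the walkthrough above.
const order = [];

order.push('main start');                      // main() on the stack
setTimeout(() => order.push('timeout 1'), 0);  // first timer registered with the Node API
setTimeout(() => order.push('timeout 2'), 0);  // second timer registered
order.push('main end');                        // synchronous work continues uninterrupted

// Once the stack is empty, the event loop drains the callback queue in order.
setTimeout(() => console.log(order.join(' -> ')), 10);
// prints: main start -> main end -> timeout 1 -> timeout 2
```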


In the blocking approach, user2's data request is not initiated until user1's data is printed to the screen. With a non-blocking request, by contrast, you can initiate the data request for user2 without waiting for the response to the request for user1; both requests are initiated in parallel. 





Nostalgia Reloaded

Pop culture, be it movies, music or TV series, increasingly and blatantly relies on the past, using our longing for it, our nostalgia, to bait us into watching or liking the content.

The world, according to many people, is heading in a wrong and dangerous direction, and we long for a less complicated life, trying to find it in the past.


Big data Hadoop
  • Ability to store and process huge amounts of any kind of data, quickly. With data volumes and varieties constantly increasing, especially from social media and the Internet of Things (IoT) , that's a key consideration.
  • Computing power. Hadoop's distributed computing model processes big data fast. The more computing nodes you use, the more processing power you have.
  • Fault tolerance. Data and application processing are protected against hardware failure. If a node goes down, jobs are automatically redirected to other nodes to make sure the distributed computing does not fail. Multiple copies of all data are stored automatically.
  • Flexibility. Unlike traditional relational databases, you don’t have to preprocess data before storing it. You can store as much data as you want and decide how to use it later. That includes unstructured data like text, images and videos.
  • Low cost. The open-source framework is free and uses commodity hardware to store large quantities of data.
  • Scalability. You can easily grow your system to handle more data simply by adding nodes. Little administration is required.

MapReduce programming is not a good match for all problems. It’s good for simple information requests and problems that can be divided into independent units, but it's not efficient for iterative and interactive analytic tasks. MapReduce is file-intensive. Because the nodes don’t intercommunicate except through sorts and shuffles, iterative algorithms require multiple map-shuffle/sort-reduce phases to complete. This creates multiple files between MapReduce phases and is inefficient for advanced analytic computing.
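To make the map-shuffle-reduce pattern concrete, here is a toy single-process word count. This is a conceptual sketch only: real Hadoop jobs are written against its Java API and run across distributed nodes, and the sample documents are invented.

```javascript
const docs = ['the quick fox', 'the lazy dog', 'the fox'];

// Map: each document is processed independently, emitting (word, 1) pairs.
const mapped = docs.flatMap((doc) => doc.split(' ').map((w) => [w, 1]));

// Shuffle: group pairs by key — the only step where data crosses units,
// which is why iterative algorithms need repeated map-shuffle-reduce passes.
const shuffled = new Map();
for (const [word, one] of mapped) {
  shuffled.set(word, (shuffled.get(word) || []).concat(one));
}

// Reduce: sum each group's values independently.
const counts = {};
for (const [word, ones] of shuffled) {
  counts[word] = ones.reduce((a, b) => a + b, 0);
}

console.log(counts); // { the: 3, quick: 1, fox: 2, lazy: 1, dog: 1 }
```

Because the map and reduce steps touch each key independently, the work parallelizes cleanly; the cost is that everything must funnel through the shuffle, which is exactly the file-intensive bottleneck described above.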

There’s a widely acknowledged talent gap. It can be difficult to find entry-level programmers with sufficient Java skills to be productive with MapReduce. That's one reason distribution providers are racing to put relational (SQL) technology on top of Hadoop: it is much easier to find programmers with SQL skills than MapReduce skills. Hadoop administration also seems part art and part science, requiring low-level knowledge of operating systems, hardware and Hadoop kernel settings.