Filed under Javascript, Node.js, Work.

I just started working on adding permissions to models in my WIP Prometheus ORM.

The idea is that I, as a developer, want to describe granular access rules for resources. I am going to deal with the most common access types: CRUD (Create, Read, Update, Destroy) + Transfer.

In a typical system there are also the most generic role types: guest, user (someone with an active session), resource owner and administrator.

Specifically, in the Node.js application I am working on now, everyone accessing the web or REST interface is considered a user, and until they sign in with credentials they are assigned the guest role. Once they authenticate in the application, their roles are either loaded from the database (e.g. the administrator role) or dynamically calculated based on the user’s relation to a given resource (e.g. owner). In practice there may be more roles than the above four, therefore we want to be flexible and allow developers to add any number of arbitrary roles and corresponding privilege checkers, e.g. company_user, group_user, the_posse, etc.

Now, every time anyone tries to perform an action on an object, we want to know if their privileges allow them to do so. In the simplest form, we want a function corresponding to each action that returns true or false depending on the result of the user privilege check.
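
In code, such a check can be as simple as this (a sketch only; the function and property names are illustrative, not the final API):

    // Sketch of an action-level privilege check (names are illustrative)
    function canUpdate(session_user, resource) {
        // administrators may update anything
        if (session_user.roles.indexOf('administrator') !== -1) return true;

        // the resource owner may update their own resource
        if (resource.owner_id === session_user.id) return true;

        // everyone else, including guests, may not
        return false;
    }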

Prometheus ORM will focus on the four basic roles and corresponding privilege checkers. In order to use privilege checks on a model, I have added new properties to model options. Now each model description has two new properties: permissions and roles, the latter being optional. The roles property lists additional, custom roles in the same format as they are described in lib/includes/roles.js, which means they must have a check(session_user) function returning a boolean, roughly like in the following sketch:
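
(The company_user role and its fk_param below are made up for illustration; any custom role just needs to follow this shape.)

    // Custom roles in the format of lib/includes/roles.js: each role must
    // expose a check(session_user) function that returns a boolean
    var roles = {
        company_user: {
            fk_param: 'company_id', // illustrative foreign-key parameter
            check: function (session_user) {
                return session_user.roles.indexOf('company_user') !== -1;
            }
        }
    };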

The implementation of privilege checks is incomplete. For now I intend to describe roles by their fk_param (if applicable) and a hash of the current session’s user, which needs to have an array with the user’s roles.
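
For example, the session user hash could look like this (the shape is illustrative):

    // Illustrative shape of the current session's user hash
    var session_user = {
        id:    42,
        name:  'Alexander',
        roles: ['user', 'company_user'] // loaded from the DB or computed per resource
    };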

I will post updates as I make more progress implementing user permissions for this ORM.

Filed under Node.js, Work.

Earlier this week I spent some time on the problem of scaling Socket.io in a Node.js cluster. The essence of the problem is that when you run multiple Node app processes (workers) on a server, or multiple servers, socket.io client connections are routed by the cluster in a random round-robin manner, and handshaken / authorized io client requests get handed to workers where they are not handshaken / authorized, which is where the mess begins. This happens when the socket.io instances created by the workers use the memory store and do not share transports with each other, or in other words, are not scale-ready.
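
A minimal setup that runs into this looks roughly like the following (illustrative; each worker creates its own Socket.io instance with the default memory store):

    var cluster = require('cluster'),
        http    = require('http'),
        sio     = require('socket.io'),
        os      = require('os');

    if (cluster.isMaster) {
        // one worker per CPU; the cluster round-robins incoming connections between them
        os.cpus().forEach(function () { cluster.fork(); });
    } else {
        var server = http.createServer().listen(3000),
            io     = sio.listen(server); // memory store by default, not shared with other workers

        io.sockets.on('connection', function (socket) {
            // the client may have handshaken with a different worker than the one
            // now serving its requests, so this socket.io instance considers it unknown
            socket.emit('hello', { worker: cluster.worker.id });
        });
    }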

This is a known issue and StackOverflow has a few similar questions:

http://stackoverflow.com/questions/8563401/node-js-multi-threading-and-socket-io

http://stackoverflow.com/questions/10703597/scale-node-js-socket-io-with-3-servers

http://stackoverflow.com/questions/5944714/how-can-i-scale-socket-io

http://stackoverflow.com/questions/5739357/how-to-reuse-redis-connection-in-socket-io

http://stackoverflow.com/questions/7972077/node-scale-socket-io-nowjs-scale-across-different-instances

And more mentions elsewhere on the web:

http://www.quora.com/How-do-I-scale-socket-io-servers-2 – see top answer by Drew Harry.

https://delicious.com/alessioaw/socket.io – collection of links by alessioalex

http://www.ranu.com.ar/2011/11/redisstore-and-rooms-with-socketio.html

http://adamnengland.wordpress.com/2013/01/30/node-js-cluster-with-socket-io-and-express-3/

Native Socket.io solution

Socket.io’s developers, LearnBoost, suggest using the Redis store, which is built into Socket.io; their documented setup is roughly the following:
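
(This is roughly the setup shown in the Socket.io 0.9 wiki, so treat it as a sketch; it assumes the redis module is installed alongside socket.io.)

    var RedisStore = require('socket.io/lib/stores/redis'),
        redis      = require('redis'),
        pub        = redis.createClient(),
        sub        = redis.createClient(),
        client     = redis.createClient();

    io.set('store', new RedisStore({
        redisPub:    pub,
        redisSub:    sub,
        redisClient: client
    }));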

or simply

io.set('store', new RedisStore());

I have tested this approach and it does not work for me, but it seems that I’m not the only one, which is why many of the sources above suggest different approaches and architectures for scaling socket.io. In my case, client connections repeatedly try to re-handshake after a disconnect, and the socket.io server does not emit events to clients, because transports[id] is null after the initial connect. I spent a few hours looking into these issues but do not have a definitive answer.

Other approaches

Drew Harry (see the Quora link above) suggests splitting the Node app into three different pieces and having them talk to each other via a message queue or pub/sub:

  1. Application core. This does all the actual application logic, and holds the state of the system in its own memory, or relies on some datastore. These application cores can usually be easily scaled up by partitioning in some application-specific way.
  2. Socket.io layer. Clients connect directly to this, and it passes any messages from clients to the app core. Messages from the app core to clients are dispatched to the appropriate socket.io process which then sends the message on to the client.
  3. A load balancer. This could be nginx like in the examples elsewhere in this thread, or it could be a smarter app that can talk back and forth with the socket.io layers to measure their actual load and direct new connections appropriately.

I don’t quite see how this approach solves the problem of running Socket.io on multiple workers; possibly his point is that managing the load on the Socket.io layer, rather than scaling the Socket.io server itself, is the solution.
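
To make the Socket.io layer idea more concrete, here is a rough sketch of such a process, assuming Redis pub/sub as the message bus between the layers (the channel names and message shape are mine):

    var io    = require('socket.io').listen(8080),
        redis = require('redis'),
        pub   = redis.createClient(),
        sub   = redis.createClient();

    // everything coming from clients is forwarded to the app core
    io.sockets.on('connection', function (socket) {
        socket.on('message', function (data) {
            pub.publish('to-core', JSON.stringify({ socket: socket.id, data: data }));
        });
    });

    // messages from the app core are dispatched to the client owned by this process
    sub.subscribe('to-clients');
    sub.on('message', function (channel, payload) {
        var msg    = JSON.parse(payload),
            socket = io.sockets.sockets[msg.socket];

        if (socket) socket.emit('message', msg.data);
    });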

Another company facing this issue is Trello, who rely heavily on Socket.io. They describe exactly the same problem:

The Socket.io server currently has some problems with scaling up to more than 10K simultaneous client connections when using multiple processes and the Redis store, and the client has some issues that can cause it to open multiple connections to the same server, or not know that its connection has been severed. There are some issues with submitting our fixes (hacks!) back to the project – in many cases they only work with WebSockets (the only Socket.io transport we use). We are working to get those changes which are fit for general consumption ready to submit back to the project.

(Source: http://blog.fogcreek.com/the-trello-tech-stack/#asterisk)

Other developers turn away from Socket.io completely in favor of other libraries, such as SockJS. This comes from Ryan Smith, who posted this question on StackOverflow:

Sadly we turned away from Socket.io due to the issues we encountered with this project and switched to Sock.js (github.com/sockjs/sockjs-node) and have yet to look back. I haven’t seen the latest changes to Socket.io but I have heard that version 1.0 will include many fixes including the issue with the redis store. One thing to keep in mind if you consider Sockjs is that is a much lower level library than Socket.io, so if you need channels and groups you will have to build that out your self.

As for myself, I need to revisit this issue later. For now, the main takeaway for me is to run Socket.io servers as a separate layer and not even try to scale them, scaling only the core application itself.

Filed under Bra Dryer.

The other day I was reading the article “The 8 Best Industries for Startups” in Inc. Magazine’s April 2013 issue, and it had a picture of a prosthetic limb by a company called Bespoke Innovations, with a hexagonal surface texture similar to that of my Bra Dryer’s front cups:

Bespoke Innovations prosthetic limb covers

And here’s the image of the Bra Dryer’s cups:

Bra Dryer front cups

I did a little research online to see what other shapes of covers are made by Bespoke Innovations, and found some more images:

Bespoke Innovations prosthetic covers (image gallery)

I was already considering 3D printing for making a prototype of the dryer, but now I know for sure who will be able to make it: 3D Systems, the company that makes the 3D printers used by Bespoke Innovations, also offers custom low-volume parts and printed prototypes. So far I have ordered their samples, and when the CAD drawings of the Dryer are ready, I will send them over to get a quote.

Filed under Javascript, Node.js, Work.

This is about a WIP project of mine: making an ORM for Node.js with pluggable adapters, code-named “Prometheus”:

https://github.com/shubik/prometheus

The idea was to make a simple ORM with a fairly standard API (get, set, save, destroy, etc.) with adapters for different databases which pretty much offer CRUD and a couple extra convenience methods.

We need a way to describe models using model-specific schemas and optional model-specific class or static methods and mix-ins (which are also applied to model constructor’s prototype), as well as optional hooks used during the model’s lifecycle.

In order to have this, we have a generic model (or model factory) which takes model-specific options and returns a constructor function used to instantiate a model. The model factory creates a generic constructor and augments it with the model-specific options (e.g. schema, store, mixins, static methods, prototype methods and hooks). Pretty straightforward so far.
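
As a sketch, using such a factory could look like this (the entry point, option names and the mongodb adapter reference are illustrative rather than the exact Prometheus API):

    var model_factory = require('prometheus').factory; // assumed entry point

    var UserModel = model_factory({
        name:   'user',
        store:  { adapter: 'mongodb', collection: 'users' }, // pluggable adapter
        schema: {
            name:  { type: String, default: '' },
            email: { type: String, default: '' }
        },
        mixins:  [],  // applied to the constructor's prototype
        statics: {},  // class / static methods
        hooks:   {    // lifecycle hooks
            afterSave: function (model) { /* e.g. invalidate a cache */ }
        }
    });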

Now, the model constructor that we get uses the deferred/promise pattern wherever we expect asynchronicity. The debate I had was between hiding promises inside the model and having each method execute only once the model has been initialized (i.e. created new or loaded from the database), but I did not feel that this would be a consistent pattern.

I still decided to use an internal “ready” promise, but it’s not used by all methods, only by async methods such as save() or destroy(). Here’s roughly what the generic model constructor looks like:
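
(A simplified sketch: the _ready, _loading and _load names follow the description below, everything else is illustrative.)

    var deferred = require('deferred'); // Medikoo's deferred, same as in my stack

    function GenericModel(options, id) {
        var self = this,
            def  = deferred();

        this._ready = def.promise;

        if (id) {
            // existing model: fetch it from the store; this._loading resolves
            // with the model itself, or with null if nothing was found
            this._loading = this._load(id);
            this._loading.then(function (model) {
                // error propagation is simplified here; the real constructor
                // makes sure the caller's fail callback receives the error
                def.resolve(model);
            });
        } else {
            // new model: ready right away
            def.resolve(self);
        }

        // the constructor hands back the promise, not the instance
        return this._ready;
    }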

The generic model has an internal load() method which is called if we provide a model ID in the constructor arguments. If load() successfully fetches our model from the database, this._loading is resolved with this, i.e. the model itself; otherwise it is resolved with null.

What is important is that the model constructor itself returns a promise, namely that internal this._ready property, which, as we remember, is resolved with an actual model. Therefore, every time we instantiate a model, we add deferred-style callbacks for success and failure. The success callback receives the model as its argument, and the error callback receives an error.

So, instantiating a model and doing things with it looks roughly like this:
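
(A simplified sketch; UserModel stands for any constructor produced by the model factory.)

    var promise = new UserModel(); // no id given, so a new model is created

    promise.then(function (model) {
        model.set('name', 'Alexander');
        return model.save();
    }, function (err) {
        console.error('Could not initialize model:', err);
    });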

The above example is for creating a new model because, as you can see, we do not provide a model id to the constructor function. The example below does the same for an existing model:
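
(Same assumptions as above; existing_id stands for a real model id.)

    var promise = new UserModel(existing_id); // id given, so the model is loaded

    promise.then(function (model) {
        model.set('last_seen', Date.now());
        return model.save();
    }, function (err) {
        // e.g. the model was not found, or the database call failed
        console.error('Could not load model:', err);
    });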

Following the same reasoning, the model’s async instance methods do the same thing as the model constructor: they return a promise which is resolved with the model (or this) once the database action succeeds, or rejected with an error if it fails. It is not necessary to add callbacks to these methods; you can call model.save() or model.destroy() without callbacks. But in practice we will not send a response to the client unless we have a confirmation or an error from these methods.

I will continue to write posts about the development of this ORM when there is reasonable progress. In the meantime you can check out my GitHub repo at https://github.com/shubik/prometheus.

Filed under Javascript, Node.js, Work.

As mentioned in my “Hello world” post, I’d like to share my experiences, challenges, solutions and abstraction errors related to my software development work. I can’t add them retroactively, so I will start with my current project: a web app I am building for Teradek, where I’ve been for 3 months so far.

Leaving aside the details of why we use this specific stack for this project at Teradek, here’s what we currently use (in the development environment):

Server:

  • Amazon EC2 general purpose instances for app and database
  • Nginx for port forwarding
  • Node.js + Express
  • Redis for temporary data
  • MongoDB for persistent data
  • Medikoo’s Deferred library as a very simple promises implementation
  • Flatiron’s Neuron lib for simple job management
  • SocketIO for async communication with our encoders and decoders
  • Proprietary ORM developed by myself (more on this in later posts)

Client:

  • Twitter Bootstrap with custom theme as main UI framework
  • Sass + Compass
  • Require.js
  • Backbone.js + Handlebars
  • jQuery + some libraries
  • SocketIO for communicating real-time events between clients, server and devices
  • Builds done in Grunt.js

Because at this stage I’m mostly working on the server-side infrastructure, my next few posts will be about Node.js and, specifically, the ORM that I’ve been developing: concepts, errors, challenges and major decisions. So, sit tight!

Filed under General.

Hey, it’s Alexander of Irvine, California. Until yesterday, May 29, 2013, this website was a part of my personal history that I did not want to let go of. It was the site of my old t-shirt line, which I started back in 2005, only a year after I came to America. But last night… or actually this morning around 4 AM, when I could not sleep after an extra Rockstar at work, I decided it’s time to let it go and use my last name dot com for something more relevant to what I am and what I do today.

I will skip the part about who I am and jump into what I do. I am a senior software developer at Teradek LLC (www.teradek.com), where I have been working on a very exciting project since March 2013. At the same time I have a couple of cool projects of my own: one is software related, the other is something completely different. I can’t boast about either of the latter two, but I am going to do all that I can to make them a success.

The first project is Itemscope (www.itemscope.com). I started it back in 2010, and the reason for it was my frustration with product information on the web. I am not going to go into the details of the concept in this post, but Itemscope was initially envisioned as a trusted source of consumer product data implemented as a semantic web resource. The current concept is a bit different, although I still hope to build it using the principles of the semantic web.

The second project is a different animal. It’s The Bra Dryer (www.bradryer.com), the result of a series of industrial design iterations that is supposed to solve the problem of drying women’s bras (I know, some argue whether such a problem even exists, but we have supporting evidence). More on this in the posts to come.

I am not much of a blogger but I will try to share with the world the best that I know or come across in software development (Javascript, Node.js and related technologies), music, lingerie and life. Stay tuned!