Posted by & filed under Javascript, Work.

Yesterday I gave my first ever technical presentation, at the Code District meetup, on the fundamentals of the React.js library, which my team has been using in production for about 14 months now. Below is the slideshow for my presentation:

Thanks to Phillip North for inviting me to speak at this meetup.

Posted by & filed under Javascript.

In a previous post, Building React.js with Grunt, I showed how we used Grunt to build our React files. But our issues were not over. We have a total of ~250 JavaScript files in our project, not counting vendor libraries. Approximately two thirds of these files are React views, while the rest are our Backbone models, collections, Chaplin controllers, libraries and miscellaneous application files. All of them are RequireJS modules.

The biggest problem with this was that to load the main page of our application, the browser needed to download 112 JS files. Using grunt-contrib-requirejs with a list of custom modules to optimize the JS files helped a bit: instead of 112 files, the browser loaded only 71. That’s better, but far from ideal. At this point I had two problems: files optimized by r.js were not nested, and I had to maintain the list of modules manually. After a bit of trial and error I managed to decrease the number of loaded files to just 9. How did I do this?
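Nesting of this kind is typically expressed in grunt-contrib-requirejs by having child bundles exclude a shared parent bundle, so common dependencies are packaged only once. The Gruntfile excerpt below is a hypothetical sketch; the module and folder names are illustrative, not our actual layout:

```javascript
// Hypothetical grunt-contrib-requirejs config; names are illustrative.
grunt.initConfig({
  requirejs: {
    compile: {
      options: {
        baseUrl: 'src/js',
        mainConfigFile: 'src/js/config.js',
        dir: 'build/js',
        modules: [
          // Shared parent bundle: vendor libraries + common app code.
          { name: 'common' },
          // Child bundles exclude 'common', so shared dependencies
          // are not duplicated across the optimized files.
          { name: 'views/dashboard', exclude: ['common'] },
          { name: 'views/settings', exclude: ['common'] }
        ]
      }
    }
  }
});
```

With a layout like this, a page loads only its own bundle plus the shared parent, rather than one file per module.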

Posted by & filed under Abstraction errors, Itemscope.

In this and the following posts I will talk about some of the struggles I had (and continue to have) with getting my projects off the ground. I will talk about two of them: Itemscope and the Bra Dryer. I’ll talk about big ideas and big failures to execute.

First, Itemscope. I originally came up with the idea around 2009, out of frustration with how difficult it was to find trustworthy consumer product information, and with the poor media quality of product images posted on web sites. It was also a period of my fascination with the Semantic Web (Web 3.0), so initially I thought it a great idea to create a huge, semantically organized database of consumer products (hence the initial code name SODAq, where “q” stands for nothing, really). The semantic part was mainly in structuring products as classes that inherit from each other, as in an ontology. I envisioned that a web site like this could provide complete technical specs for any product on the market, great-quality photos, and the ability to easily show relationships between products, their parts and so on.

Itemscope original design

2008 was also a year when I was still doing PHP. So as my first step, I started personally designing the site and coding it in PHP and MySQL. I spent 3-4 months on that, eventually realized the weakness of my code base, and hit a dead end when it became clear that moving forward without re-coding a lot would be hard. At that point I scrapped the initial prototype completely.

Posted by & filed under Javascript, Work.

About two months ago our Core team at Teradek switched the front end to a single-page application on a new stack. The previous stack included server-side templates, Require.js, Underscore, Backbone and Handlebars for JavaScript and HTML, and Compass for CSS. The current stack is a little different: one static HTML file, Require.js, Lodash, Backbone + Chaplin + React for JS and HTML, and Stylus for CSS.

This short post is about React.js: not my opinion of it as a library, but how to build it, with some tips and how-tos.

File structure

Before we incorporated building React files (JSX), we had a folder for views inside the JS directory. A Require.js alias “views” pointed to that folder, and all references to JSX dependencies went through this alias rather than hard-coding the path.
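As a rough illustration (the paths here are made up, not our real tree), such an alias is a single entry in the Require.js config, so every module can reference a view without knowing its location:

```javascript
// Hypothetical require.config excerpt; folder names are illustrative.
require.config({
  baseUrl: '/js',
  paths: {
    // All JSX dependencies are referenced through this alias.
    views: 'app/views'
  }
});

// A consumer then depends on a view through the alias:
define(['views/login'], function (LoginView) {
  // render LoginView here
});
```

The advantage is that if the compiled JSX output moves to another folder, only this one `paths` entry has to change.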

Posted by & filed under Life, Work.

It’s been 4.5 months since my last post, some of the busiest time I’ve ever had. Now, after three trips to Eastern Europe, the Teradek R&D office in Odessa, Ukraine is a reality, counting one UI/UX designer and four software engineers.

It’s been quite a learning experience in itself, and I will share more about it in later posts.

In the meantime, here’s a photo album of Teradek Ukraine. Enjoy!

On a personal note, Odessa has become my new favorite city in Ukraine, and I highly recommend visiting it, especially during the warm and sunny season!

Posted by & filed under Work.

It so happened that the system I am currently working on has a lot of parts that communicate in an asynchronous manner. In part this is a result of technology choices, in part of the environment in which the system operates. All projects I had worked on before were synchronous and fairly straightforward: for each request we always expected a reply, even if the reply was an error code or message. Over the years of software development, that became part of my mindset.

If you can always rely on receiving a response to your request or query (client and server error situations aside), software development and the user experience are more or less streamlined. It is especially straightforward in simple client-server systems, such as servers serving web pages, or APIs. But although this pattern feels almost natural in programming, it is not as common in the natural world. One could argue that it is very common in human interactions, such as conversations where one party asks a question and the other answers it, yet it is still not a pattern that simply dominates. In daily life, we experience countless cases of the asynchronous interaction pattern.

For example, I may be working at my computer and ask my roommate to boil water and make tea. Do I have a guarantee that she will do it? Of course not. If I ask her to go to Mother’s Market and buy milk, she may go or may not, and even if she does end up going, she might do a dozen other things before it. Another example is writing a letter (well, if you still remember what a real letter is). You write it, put it in an envelope, address and stamp it, and depending on where it is mailed, you may wait days, weeks, or even months for a response, if one ever comes.

Thinking about the architecture of the system we are building, it helps to train your mind to see these processes as natural phenomena: to treat data passing between different-purpose nodes as events that may or may not occur, and, when these events or messages are sent, to abstract away the reason they were sent. In reality a node may emit an event for more than one reason, but for the receiving end it may not matter whether the event was generated automatically, was caused by a user, or was “requested” by another part of the system. The addressee of these events may be completely indifferent to the reason and interested only in the message payload.

The implications of the above are particularly important when working on the user-facing client side of the system. For example, animated spinners for XHR actions may not make much sense if a user’s action translates into a series of events that make a few hops before reaching the intended consumer, which then may or may not send an event back.

I will not get into the specifics of the project at this time, but for use cases such as those described above we are strongly considering “optimistic actions”: upon user submission, we presume the action will succeed and modify the UI accordingly, and we only report back to the user if any of these actions resulted in an error or did not complete, which we may learn from the system later. This approach should be used with great caution, depending on how critical it is to notify the user of the actual result of their actions and to guarantee they were applied exactly as the user intended.
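To make the idea concrete, here is a minimal sketch of an optimistic action. The store shape and the `send` callback are stand-ins, not our project’s actual API: the UI state is updated immediately, and rolled back only if the system later reports a failure.

```javascript
// Optimistically apply a change, then roll it back if the system
// eventually reports an error for this action.
function optimisticUpdate(store, key, value, send) {
  var previous = store[key];
  store[key] = value;               // presume success: update UI state now
  return send({ key: key, value: value }).catch(function (err) {
    store[key] = previous;          // roll back only on a reported failure
    store.lastError = err.message;  // surface the error to the user later
  });
}

// Usage with a transport that happens to fail:
var state = { name: 'old' };
optimisticUpdate(state, 'name', 'new', function () {
  return Promise.reject(new Error('device unreachable'));
}).then(function () {
  console.log(state.name);       // reverted to 'old'
  console.log(state.lastError);  // 'device unreachable'
});
```

The important property is that the happy path never blocks on a round trip; the cost is that the rollback and error reporting paths must be designed explicitly.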

I will continue writing on this topic as we have more developments in this project.

Posted by & filed under Javascript, Node.js, Work.

In our application architecture we have multiple servers that handle different types of connections, namely:

  • Requests from web clients (www and REST) over HTTPS
  • Socket.io connections from web clients over HTTPS
  • Socket.io connections from devices
  • TCP connections from proxy servers for devices over TLS

The servers above are spawned as processes by a manager app, and this setup can be replicated across multiple virtual machines. The challenge was to allow all of these processes to talk to each other. For example, events coming from devices via proxy servers need to bubble all the way up to the web client via Socket.io, which involves two servers: the Socket.io server facing the web client and the TCP server facing the proxy.

The solution was to adopt a dispatcher pattern using Redis as a pub-sub intermediary, with an API similar to the Node.js EventEmitter API plus two additional methods, off() and flush(): off() turns off callbacks for a specific event, and flush() removes all listeners for a given key. The Redis server resides on a separate instance accessible to all servers in the cluster, so it can be shared by all subscribers and emitters.

At its core, the Dispatcher maps EventEmitter-style calls onto Redis publish/subscribe commands.
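A minimal sketch of this pattern might look like the following. To keep the example self-contained, Redis pub-sub is replaced by a tiny in-memory stub with the same publish/subscribe shape; all names here are illustrative, not the actual Dispatcher source.

```javascript
// In-memory stand-in for Redis pub-sub (assumption: the real system uses
// a Redis client exposing an equivalent publish/subscribe interface).
function StubPubSub() {
  this.channels = {};
}
StubPubSub.prototype.subscribe = function (channel, handler) {
  (this.channels[channel] = this.channels[channel] || []).push(handler);
};
StubPubSub.prototype.publish = function (channel, message) {
  (this.channels[channel] || []).forEach(function (h) { h(message); });
};

// Dispatcher with an EventEmitter-like API plus off() and flush().
function Dispatcher(pubsub) {
  this.pubsub = pubsub;
  this.listeners = {};   // event name -> callbacks in this process
  this.subscribed = {};  // channels this dispatcher already subscribed to
}
Dispatcher.prototype.on = function (event, callback) {
  if (!this.subscribed[event]) {
    this.subscribed[event] = true;
    var self = this;
    // One underlying subscription per event; fan out to local callbacks.
    this.pubsub.subscribe(event, function (message) {
      (self.listeners[event] || []).forEach(function (fn) { fn(message); });
    });
  }
  (this.listeners[event] = this.listeners[event] || []).push(callback);
};
Dispatcher.prototype.off = function (event, callback) {
  // Turn off a specific callback for an event.
  this.listeners[event] = (this.listeners[event] || [])
    .filter(function (fn) { return fn !== callback; });
};
Dispatcher.prototype.flush = function (event) {
  // Remove all listeners for the given key.
  delete this.listeners[event];
};
Dispatcher.prototype.emit = function (event, payload) {
  // Every process subscribed to this event receives the payload.
  this.pubsub.publish(event, payload);
};
```

With a real Redis client, payloads would additionally be serialized (e.g. as JSON), and the subscriber connection kept separate from the publisher, since a Redis connection in subscribe mode cannot issue other commands.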

Posted by & filed under Abstraction errors, Javascript, Node.js.

Over yesterday and today I’ve spent a couple of hours debugging an issue with the Prometheus ODM. The problem: while testing by refreshing a page generated by an ODM model, I noticed that events fired in the current model were also firing on all models created before it. Sample log output looked like this:

about to emit "forbidden" to model_5
403 forbidden... model_1
403 forbidden... model_2
403 forbidden... model_3
403 forbidden... model_4
403 forbidden... model_5

This output made me realize that all event callbacks were being added to the same EventEmitter instance.

So, what was the error?

The error was that my model constructors in Prometheus got their EventEmitter methods through the constructor’s prototype. A constructor’s prototype is shared among all model instances of the same type, so the EventEmitter state was shared by all models as well.

The solution was easy: instead of extending the model constructor’s prototype with EventEmitter methods, I moved the extension inside the constructor and mixed the EventEmitter methods into each instance’s this. That completely solved the problem.

These fixes are reflected in the newly published NPM module prometheus@0.1.5.

Posted by & filed under Javascript, Node.js, Work.

I’ve spent about a day working on adding permissions to the model. Here are some intermediate conclusions:

  • We need to pass the user session to the model’s permission checker method is_allowed() in order to know the session user’s roles and other data (e.g. company ID).
  • We need to check permissions and cache the results at the same time we initialize the model (or load an existing one), so that sync methods such as get(), toForm() and toTable() can stay sync (checking permissions itself can be asynchronous).
  • Not all models are created during a request. In my application they may be created as a result of Socket.io events, where the user is the application itself. Therefore we can’t, and don’t have to, check permissions unless we have a user session object.

Thus, it looks like in the current setup we need to pass req to the model constructor as an option; if it is not passed, we assume the model was created by the application itself and skip the permission checks.

The second takeaway is that in order to check permissions, all permission checkers should be either plain synchronous functions or promises, which we can then run asynchronously using an async library’s map() method; they cannot be a mix of both.
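Here is a sketch of that idea using plain promises instead of the async library (the checker names and session shape are made-up examples): synchronous checkers are normalized with Promise.resolve() so the whole set can be resolved uniformly and cached at initialization, leaving is_allowed() synchronous afterwards.

```javascript
// Resolve all permission checkers for a model and cache the results.
// checkers: { action: function(session) -> boolean | Promise<boolean> }
function initPermissions(model, session, checkers) {
  var actions = Object.keys(checkers);
  var pending = actions.map(function (action) {
    // Promise.resolve() accepts both plain values and promises.
    return Promise.resolve(checkers[action](session));
  });
  return Promise.all(pending).then(function (results) {
    model.permissions = {};
    actions.forEach(function (action, i) {
      model.permissions[action] = results[i] === true;
    });
    return model;
  });
}

// After initialization the check is synchronous, so methods like
// get(), toForm() and toTable() can stay sync.
function is_allowed(model, action) {
  // No cached permissions means the model was created by the
  // application itself (no user session), so nothing is restricted.
  return model.permissions ? model.permissions[action] === true : true;
}
```

The cache is computed once per model instance, which also means permission results reflect the session as it was at initialization time.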

Tomorrow I will refactor the generic model constructor to accept options and to map the supplied permission checkers at model initialization.