18. August 2016


uppy (GitHub: transloadit/uppy, License: ISC, npm: uppy)

Uppy is a very promising file uploader that can take files from the local disk, Google Drive, Dropbox, Instagram, remote URLs, and hardware devices such as a webcam. Thanks to the plugin pattern it follows, you can upload basically anything you can think of. It also has no opinion about where you should upload your files to.

Uppy is being developed by the Transloadit team, because they felt the need for a better uploading experience for users and developers alike.

On the website the developers have a really wonderful example that also doubles as their playground.

import Uppy from '../../../../src/core/Core.js'
import Dummy from '../../../../src/plugins/Dummy'
import Tus10 from '../../../../src/plugins/Tus10.js'
import Dashboard from '../../../../src/plugins/Dashboard'
import GoogleDrive from '../../../../src/plugins/GoogleDrive'
import ProgressBar from '../../../../src/plugins/ProgressBar.js'
import Webcam from '../../../../src/plugins/Webcam.js'
import MetaData from '../../../../src/plugins/MetaData.js'
import { UPPY_SERVER } from '../env'
const uppy = new Uppy({debug: true, autoProceed: false})
  .use(Dashboard, {trigger: '#uppyModalOpener'})
  .use(GoogleDrive, {target: Dashboard, host: UPPY_SERVER})
  .use(Webcam, {target: Dashboard})
  .use(Dummy, {target: Dashboard})
  .use(Tus10, {endpoint: 'http://master.tus.io:8080/files/'})
  .use(ProgressBar, {target: 'body'})
  .use(MetaData, {
    fields: [
      { id: 'resizeTo', name: 'Resize to', value: 1200, placeholder: 'specify future image size' },
      { id: 'description', name: 'Description', value: 'something', placeholder: 'describe what the file is for' }
    ]
  })

Which provides you with this interface:

I think one of the greatest features is the resumable upload support. It lets you pause an upload and continue it at a later moment in time.

As the developers make very clear, this project is not ready to be used in production yet. But they are working really hard to make it battle-ready as soon as possible. I was especially impressed by the number of commits coming into this library, so I would not expect us to have to wait very long.

If you want to know more about this amazing library, head over to uppy.io.

17. August 2016


Neo (GitHub: mozilla/neo, License: MPL-2.0, npm: mozilla-neo)

During the daily challenge of finding something interesting for you guys to read, I came across this headline: “Create and build React web applications with zero initial configuration and minimal fuss”. This intrigued me, like most clickbait does. So I started looking into it, and the first thing that popped into my head was: “Oh no, not another project generator.” But then I saw it was posted on Mozilla’s GitHub account, which made it interesting.

Neo is, like a lot of other projects, a generator for getting a React app off the ground really quickly. With most generators I’m always missing certain things, but the scaffold Neo gave me was very nice.

It uses React, Redux, React Router, Webpack and ES2015 modules. Tests and coverage are done with Karma, Mocha and Chai, with Enzyme for component tests and Immutable for the data structures. This is basically the same way I would set up a new front-end. It works out of the box and you do not need to sift through a bunch of poorly written documentation pages to get started.

When you want to use Neo to generate a new project for you, you will need to create a directory for the project first.

mkdir <my-awesome-project>

Now you can install Neo inside that directory:

npm i mozilla-neo

To start the generator you will need to call the binary from the node_modules directory like so:

node_modules/.bin/neo init

Alternatively, you could install Neo in a global context by adding the -g flag to the initial install, after which you could use it like:

neo init

The init command will prompt you with a few questions about your project, like the name of the project, who’s maintaining/authoring it, etc. Once it has all the information, it will start the generation process, after which it will install the dependencies. That’s really all there is to it.

I recommend reading the post written by the library’s author, Eli Perelman, to get a broader impression of what Neo can do for you.

16. August 2016

Fast Memoize

fast-memoize (GitHub: caiogondim/fast-memoize.js, License: ISC, npm: fast-memoize)

First of all, let me start off by explaining why there hasn’t been a new post for several weeks now. To be completely honest with you guys, I just got very lazy. I’ve just bought my first house with my fiancée, which of course needed to be fixed up; we needed to repair damage to the previous house, and we actually needed to do the move. Besides all of that, I also started at a new client, which took some time to get used to. But hey, those are all excuses and you don’t really care about that. You just wanna see code. So now that I’m back and integrated into all the new environments around me, I’m ready to start updating you again. Enough chit-chat; let’s look at fast-memoize.js.

According to Wikipedia: “In computing, memoization is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again.”
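The idea can be sketched in a few lines. This is a minimal memoizer for single-argument functions, written for illustration only; it is not how fast-memoize itself is implemented.

```javascript
// Minimal memoizer for single-argument functions, for illustration only.
function memoizeSimple(fn) {
  const cache = new Map();
  return function (arg) {
    if (!cache.has(arg)) {
      cache.set(arg, fn(arg));
    }
    return cache.get(arg);
  };
}

let calls = 0;
const square = memoizeSimple((n) => {
  calls += 1;
  return n * n;
});

square(4); // computes the result: 16
square(4); // cache hit: 16, the underlying function is not called again
```

The second call returns the stored result without re-running the (potentially expensive) function.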

This library is an attempt to build the fastest possible memoization library in JavaScript that supports any arguments. It is not the first attempt at doing so; there have been multiple attempts, but as the developer points out, those are not fast enough or have limitations on the number of arguments that can be passed.


The reason Lodash comes out on top in these benchmarks is that it limits the arguments used for the cache key (by default only the first one), thus gaining performance.
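To make that trade-off concrete, here is an illustrative sketch of a memoizer that keys its cache on the first argument only, similar to Lodash’s default behavior; the function names here are mine, not Lodash’s.

```javascript
// Illustration: caching keyed on the first argument only.
// Calls that differ only in later arguments collide in the cache.
function memoizeFirstArg(fn) {
  const cache = new Map();
  return function (...args) {
    const key = args[0];
    if (!cache.has(key)) {
      cache.set(key, fn(...args));
    }
    return cache.get(key);
  };
}

const add = memoizeFirstArg((a, b) => a + b);
add(1, 2); // 3
add(1, 5); // still 3: stale, because only the first argument keys the cache
```

This is fast, but wrong for multi-argument functions, which is exactly the limitation fast-memoize sets out to avoid.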

When you want to memoize a function you can do so by creating it like this:

const memoize = require('fast-memoize')
const fn = function (one, two, three) {
  // Awesome magical code in here
};

const memoized = memoize(fn);

memoized('foo', 3, 'bar');
// Call it again
memoized('foo', 3, 'bar'); // Cache hit

The library takes a look at the environment it is running in and selects the quickest cache to work with. If you want to implement your own cache, make sure it has the following methods:

  • get
  • set
  • has
  • delete
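As a sketch, a Map-backed object exposing those four methods could look like this. How you hand it to fast-memoize is described in the library’s README; the object itself just needs this interface.

```javascript
// A Map-backed cache exposing the four methods fast-memoize expects.
// (How the cache is plugged into memoize() is documented in the README.)
function createMapCache() {
  const store = new Map();
  return {
    has(key) { return store.has(key); },
    get(key) { return store.get(key); },
    set(key, value) { store.set(key, value); },
    delete(key) { store.delete(key); }
  };
}

const cache = createMapCache();
cache.set('answer', 42);
```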

Support is also very good; for details about that you should consult the README.

If you have any interest in helping me make Daily-JavaScript.com better, don’t hesitate to contact me with your ideas.

09. May 2016


PeerJS (GitHub: peers/peerjs, License: MIT, npm: peerjs)

Last week I was laid up in bed with a fever, which is the reason I was not able to provide you with daily JavaScript updates. Exactly for moments like that, I would like to ask whether one of my readers would be interested in occasionally writing a post, so I can slow down sometimes.

If you are interested please contact me.

On to business: PeerJS wraps the browser’s WebRTC implementation to provide an easy-to-use peer-to-peer connection API. To establish a connection you need nothing more than a session ID. This is of course not the first attempt at a good WebRTC library, but this one comes with an open-source server which you can deploy any place you want; in contrast with other parties, that’s a real leap forward.

Because this is still a company trying to make money, they also offer hosted solutions for when you do not want to be bothered with setting up these environments.

Let’s look at an example using the hosted solution.

import Peer from 'peerjs';

const myPeer = new Peer('pick-an-id', {key: 'myapikey'});
const conn = myPeer.connect('another-peers-id');

conn.on('open', () => {
  conn.send('hi!');
});

myPeer.on('connection', (conn) => {
  conn.on('data', (data) => {
    // Will print 'hi!'
    console.log(data);
  });
});
The code for making a call would look something like this:

const getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

getUserMedia({video: true, audio: true}, (stream) => {
  const call = peer.call('another-peers-id', stream);
  call.on('stream', (remoteStream) => {
    // Show stream in some video/canvas element.
  });
}, (err) => {
  console.log('Failed to get local stream', err);
});

And to answer an incoming call you would need to add something like this:

const getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
peer.on('call', (call) => {
  getUserMedia({ video: true, audio: true }, (stream) => {
    call.answer(stream); // Answer the call with an A/V stream.
    call.on('stream', (remoteStream) => {
      // Show stream in some video/canvas element.
    });
  }, (err) => {
    console.log('Failed to get local stream', err);
  });
});

20. April 2016


redux-mock-store (GitHub: arnaudbenard/redux-mock-store, License: MIT, npm: redux-mock-store)

When you are working on an application with a Redux architecture, you may want to test whether your actions are triggered correctly. Doing this with the default Redux store is very hard. In the project I recently joined, we solve this problem by using redux-mock-store, which stores the actions that get dispatched. By supplying a getActions() method, it gives us access to the previously dispatched actions in the form of an array.

For example:

import configureMockStore from 'redux-mock-store';
import thunk from 'redux-thunk';

const middlewares = [thunk];
const mockStore = configureMockStore(middlewares);

it('should dispatch action', () => {
  const getState = {};
  const addTodo = { type: 'ADD_TODO' };

  const store = mockStore(getState);
  store.dispatch(addTodo);

  const actions = store.getActions();
  expect(actions).toEqual([addTodo]);
});

it('should execute promise', () => {
  function success() {
    return {
      type: 'FETCH_DATA_SUCCESS'
    };
  }

  function fetchData() {
    return dispatch => {
      return fetch('/users.json')
        .then(() => dispatch(success()));
    };
  }

  const store = mockStore({});

  return store.dispatch(fetchData())
    .then(() => {
      expect(store.getActions()).toEqual([success()]);
    });
});
In the Redux docs you can find a more extensive explanation and best practices on how to approach this problem.