Node Modules

This week I worked on and helped publish a node module for pushing source maps to a public API. It’s only the second time I’ve worked on a from-scratch node module, and it’s enlightening — like pulling back the curtain to reveal the truth behind the Wizard of Oz.

Node modules aren’t magical (though they’re no humbugs either). They fulfill a pretty specific purpose: a way to encapsulate reusable code in a way that’s easily managed.

Why node modules matter

Before node modules were really a thing, reusable code snippets were often pulled out into Immediately Invoked Function Expressions (IIFEs) or stored inside of variables (the Revealing Module pattern).

var sayAName = (function () {
     var privateName = 'name is not set yet';

     function publicSetName (name) {
          privateName = name;
     }

     function publicSayName () {
          console.log('Hi, ' + privateName);
     }

     return {
          setName: publicSetName,
          sayName: publicSayName
     };
})()

sayAName.setName('emily')
sayAName.sayName()
// 'emily'
Example inspired by Addy Osmani’s Design Patterns

But with this pattern, you don’t have much ability to manage dependencies or make explicit versioning changes. That’s where node modules are so handy. Not only is the code easily reusable and self-contained, but a version change is as easy as bumping a number in the package.json.

Node modules come in a couple different formats, but the new ES6 format is the one I’m most familiar with.

// lib/bye.js
export function sayGoodbye(){
  console.log('Bye');
}

// app.js
import { sayGoodbye } from './lib/bye'

ES6’s native module format uses export and import tokens to share consts, functions, and objects between files.

(I’ve also used module.exports and require, the CommonJS module format, in various projects depending on whether we’re transpiling to support ES5.)

var dep1 = require('./dep1');
var dep2 = require('./dep2');

module.exports = function(){
  // ...
}

Publishing node modules

Publishing a node module to npm is a pretty sweet feeling – like getting a book published, except much easier. Versioning, though, comes with its own quirks.

When working on the node module for pushing source maps, we made several small changes to the 4.0.1 release to fix some tiny bugs. We published those changes to npm as pre-releases, indicating that the changes were not yet stable. The pre-release version numbers looked like this: 4.0.1-2

When you installed @latest or an unspecified version number, though, you got the pre-release instead of the release (4.0.1). That’s not ideal behavior; you want people to be able to opt into prereleases.

Here’s the trick: when publishing to npm, you can explicitly tag a release (with a tag like next), which overrides the default tag of latest. Then you can make incremental changes to your heart’s content until it’s time for the next real version bump!
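
In practice the tagging looks something like this (a sketch with a hypothetical package name; the --tag flag and the @tag install syntax are standard npm):

```shell
# Publish a pre-release under the 'next' tag, so 'latest' keeps
# pointing at the last stable release.
npm publish --tag next

# Users who want the pre-release opt in explicitly:
npm install my-source-map-pusher@next

# A plain install still resolves the default 'latest' tag:
npm install my-source-map-pusher
```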

Gosh Yarn It

You’ll never believe it… A hot new technology has appeared on the Javascript scene.

What is yarn?

Yarn is a replacement for the npm package manager, the popular tool used to handle the ~300,000 packages in the npm registry.

While Yarn is designed to completely replace the npm workflow, it works in concert with the existing package.json: adding or removing packages with Yarn will update both the package.json and the yarn.lock.

The main commands you’ll need:

yarn  // same as npm install
yarn add  // same as npm i --save
yarn add --dev  // same as npm i --save-dev
yarn remove  // same as npm uninstall

Why bother to switch from tried-and-true npm?

Locking down the right versions of all your dependencies can be tricky.

For example: I accidentally upgraded react-addons-test-utils to a new version by adding it directly with yarn add react-addons-test-utils. This got the latest version of the package, which meant it no longer had the same version number as react and react-dom – we had specifically pinned those at an earlier minor version.

When I ran the tests, this error popped up: Module not found: Error: Cannot resolve module 'react-dom/lib/ReactTestUtils' in /Users/ebookstein/newrelic/browser/overview-page/node_modules/react-addons-test-utils (module was moved between versions of react-dom)

To pin react-addons-test-utils at a specific version, I added the version to the package name like this: yarn add react-addons-test-utils@15.3.0.

Of course, locking down dependencies at the right version numbers is something that npm did for us as well. Why replace it?

There are some minor benefits that came across right away from using yarn instead of npm. For one thing, I no longer have to remember --save – think of all the time you’ll save not pushing branches up to Github only to have your tests fail!

But there are some big-picture benefits too. We were using npm shrinkwrap to stabilize versions and dependencies. But shrinkwrap easily gets out of sync with package.json if you forget to run npm shrinkwrap, and when it fails, it fails silently.

If you want the full-length list of big-picture reasons that Facebook came up with, check out their blog post about yarn.

Switching to yarn: a few tricky bits

Installing yarn itself is easy.

brew install yarn

Then go to your project directory and run yarn to install all the packages and generate a yarn.lock (similar to Bundler’s Gemfile.lock for Ruby).

Migrating from npm to yarn is not totally seamless, however. For some reason, a few packages don’t transition as easily. In particular I had issues with node-sass and phantom-js. In one case, the issue was resolved by specifically adding the module with yarn (yarn add node-sass). In the case of phantom-js, though, we had to add a post-install script to our package.json that would run a phantomjs install script.

Also, if you use Jenkins or other tools that keep a saved copy of your node_modules, you might have to clear out the old node_modules and reinstall.

For Jenkins specifically, try:

  1. clearing the workspace,
  2. killing the instance of Jenkins, or
  3. adding a post-build step that deletes the workspace every time (which means a complete installation runs every time).

Fun bits

Fun fact: yarn preserves many helpful features of npm, like symlinking.

For example, if you wanted to use your personal copy of react instead of Facebook’s react (pretend!), then you could run yarn link inside of the react directory, then yarn link react in your project directory. This would then substitute a link to your personal react copy for every require('react').

Tying up loose ends

So far, it’s been fun to knit yarn into our development process! Though I’m still working through some of the knots…

Whoop Whoop! Event Loop!

Last week a friend asked me: Is Node multi-threaded? If it’s not, how does it do asynchronous work? Good question! I didn’t know the answer.

But as it turns out, Node (and Javascript in general) is single-threaded. This means there is only one thread of execution, one flow of control. Instead of having multiple threads to process simultaneous work, the Node runtime environment has an ecosystem of interconnected data structures that preserve a fast workflow. At the conceptual center of these structures is the event loop.

All modern Javascript engines rely on an event loop concurrency model, not just Node. So, browser-lovers, read on.

A stack, a queue, an event loop - oh my!

Image: Mozilla Developer Network (annotations mine)

This is a simplified diagram of a Javascript runtime. (I added the “Web APIs” box for browser-related completeness.)

Start with the stack. You’ve probably been exposed to the stack before, especially in the form of stack traces for Javascript errors. A stack trace, like the stack itself, is made of the series of functions and local variables that have been pushed onto the stack. Each of those functions and its local variables make up a frame in the call stack.
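
You can produce those frames yourself; the function names in a stack trace mirror the frames that were on the stack when the error was thrown:

```javascript
// Each function call pushes a frame onto the stack; a stack trace
// lists those frames, innermost first.
function inner() { throw new Error('boom') }
function outer() { inner() }

try {
  outer()
} catch (e) {
  console.log(e.stack) // frames for inner, then outer, then the top-level code
}
```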

Functions are added to the stack from the main Javascript program – or, from the message queue. The queue contains callback functions that are ready to be, well, “called back.” In the browser, callbacks may be associated with DOM events like “click” or “hover”; or, for both backend and frontend Javascript, they may be associated with resolved Promises. In fact, they may contain responses from any number of Web APIs, like timers or the infamous XMLHttpRequest. (I didn’t realize those were separate APIs until I started learning about the event loop!) When a message is processed, its callback function gets called and is thereby pushed onto the stack.

The event loop is the loop that processes messages in the queue such that the callback functions get pushed onto the stack.
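
A tiny sketch makes the ordering visible: even a 0ms setTimeout callback has to wait in the queue until the stack is empty.

```javascript
console.log('first')

// This callback goes through the queue, so it runs only after the
// main program has finished and the stack is empty, even with a 0ms delay.
setTimeout(() => console.log('third'), 0)

console.log('second')
// prints: first, second, third
```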

For an animated demo of how the event loop works in the browser, check out Philip Roberts' 2014 JSConf talk.

Image: Philip Roberts, 2014

Much non-blocking

A very interesting property of the event loop model is that JavaScript, unlike a lot of other languages, never blocks. Handling I/O is typically performed via events and callbacks, so when the application is waiting for an IndexedDB query to return or an XHR request to return, it can still process other things like user input. – MDN

Thanks to the event loop model, Javascript is non-blocking. That means you don’t have to wait for slow tasks to complete before processing other tasks. We just keep pushing or popping frames on and off the stack; some of those function calls add work to someone else’s plate (by calling a web API or making HTTP requests, for example), outside of our single process; we receive callbacks in our queue when that external work is done; and whenever the stack has capacity, we pull in those callbacks.

So, even though Javascript is single-threaded, the event loop model allows it to be fast and non-blocking. We just keep grabbing one thing at a time, one thing at a time, one thing at a time, keep on going forever. We are a machine.

Testing an Express Server

A couple weeks ago, I wrote a suite of tests for a new Node-Express service.

Up until that point, all the JS testing I’d done was for React components. Writing tests for an Express app is a little different, but was actually really fun and easy with the right tools. So here’s a little write-up on some testing tools and tips for backend Javascript!

Tools

You will need one or more of the following tools:

  • Mocha: a test runner. In package.json, simply add "test": "mocha" under “scripts” and all tests in the test directory will run.
  • Chai: an assertions library for Javascript, offering both an expect API and an assert API.
  • Supertest: an HTTP testing library that lets you easily send HTTP requests to a local server.
  • Node-tap: a test library implementing the Test Anything Protocol for Node.

Most Valuable Player: Supertest

My favorite new tool on this list is the Supertest library. It is SO easy to hit each Express endpoint with a Supertest request and examine the response.

The way it works: require Supertest and pass it an Express application. This returns a supertest object onto which other methods can be chained. For example, you can set attributes with get('/path') or set('Content-Type', 'application/json') that modify both the object and your eventual request to the server. Expectations for the response are also chained onto the supertest object.

With mocha/chai/supertest:

const request = require('supertest')
const app = require('../app')

it('gets a 200 response', function (done) {
  request(app) // this returns a SuperTest object
    .get('/data')
    .set('Content-Type', 'application/json')
    .expect(200)
    .end(done)
})

Asynchronicity

One interesting aspect of writing tests for an Express server is the fact that your tests must run asynchronously. After all, you’re sending a request to your server; if your tests proceeded onwards full steam ahead, you might run into an expectation that checks for a response before the response has even arrived. Mocha also needs to wait for the current test to finish before heading off to other tests.

Mocha has two ways of handling asynchronous tests:

  • pass in a done callback that is called when the test ends, or
  • use Mocha’s built-in Promise handling.

Now that Mocha actually handles Promises in-house, it’s considered better practice to use .then and .catch rather than .end(done).

With mocha/supertest:

const request = require('supertest')
const app = require('../app')

it('gets a 200 response', function () {
  return request(app) // return the Promise so Mocha waits for it
    .get('/data')
    .then((res) => {
      // make sure to actually throw an error so the test fails
      if (res.status !== 200) { throw new Error('expected a 200 response') }
    })
    .catch((err) => { throw err })
})

Node-tap can also be used this way, with .then and .catch callbacks on a Promise.

node-tap/supertest:

const request = require('supertest')
const test = require('tap').test
const app = require('../app')

test('gets a 200 response', (t) => {
  t.plan(1)
  request(app)
    .get('/data')
    .then((res) => {
      t.equal(res.status, 200)
    })
    .catch((err) => {
      t.fail(err)
    })
})

Now sit back, relax, and watch your tests run.

Express Is Adorbs

Express is adorable. It’s the tiniest, cutest little web framework.

Besides being adorable, Express is a lightweight web framework for Node.js. It addresses a few basic needs of Node applications.

  • Node: you need to implement a whole HTTP server. Express: convenience methods for an easy-to-start server.
  • Node: you need a router to map requests to request handlers. Express: routes requests to the designated handler based on HTTP verb + path.
  • Node: you need to actually handle requests. Express: callbacks handle requests!

Hello world

A “hello world” Express server is very simple. Besides the standard package.json + node_modules (libraries) that come with a new Node project, here’s all it took:

$ npm install express --save

// app.js

var express = require('express')
var app = express() // creates an Express application

// `app` has methods like `.get` for routing HTTP requests
app.get('/', (req, res) => {
  var host = req.get('host') // `req` is an object representing the HTTP request
  res.send("try visiting " + host + "/helloworld instead")
})

app.get('/helloworld', (req, res) => {
  // `res` is an object representing the HTTP response
  res.send("hello world")
})

// `.listen` is a convenience function that does some stuff to start an HTTP server
app.listen(3000, function () {
  console.log('Example app listening on port 3000!')
})

That’s all there is to it. Run node app.js on the command line, open up localhost:3000 in a browser, and you’re all set.

Bonus fun

res and req (the arguments passed into each callback) are pretty powerful objects. Built on Node’s own response and request objects, these objects come with helpful methods to read the request or alter the response body.

For example, you can have the response send back JSON instead of a regular String:

app.get('/json', (req, res) => {
  res.json({ msg: 'this is json' });
})

I really like the simple route layout that Express encourages. Each route is a simple combination of HTTP verb + path. (A REST best practice, incidentally: keep the URL the same but vary the verb.)

In the following example, when we load and submit an HTML form, we hit two different /upload endpoints.

app.get('/upload', (req, res) => {
  const form = `
    <form action='/upload' method='post'>
      Submit this form
      <input type='submit' />
    </form>
  `
  res.send(form)
})

app.post('/upload', (req, res) => {
  res.send('hello! upload complete')
})

Hello, World (of Express)!

React Testing, Part 2

Image: Quickmeme

My last post was titled “Testing React Router: Part 1.” Which seems to imply there’s a Part 2 about React Router. But I want to… er… re-route the conversation. This post is about testing React components and how integration tests can save the day.

The project was almost complete. My time as team captain was almost over. Victory was in sight…

My team had been working on Source Maps “Dragondrop,” a UI feature allowing users to drag and drop source maps to unminify Javascript error stack traces. As the project’s team captain, I had generated a list of test cases and scheduled a team “bug hunt” to go through them manually.

The test plan looked something like this:

  1. Happy path! Drag and drop the correct source map onto the stack trace. Verify unminified line #, column #, and source code.
  2. Drag and drop the wrong source map onto the stack trace. Verify error message banner saying wrong file.
  3. Drag and drop a source map with no source content. Verify warning banner saying no source content, but correctly unminified line # and column #.

Etc.

Our bug hunt revealed that most test cases passed! But it also revealed a subtle bug… and that final bug fix ballooned into multiple extra days of work.

How could we have caught this UI issue earlier? And what could we do to prevent regressions while we refactored the code to fix the problem?

Testing Front-End Applications Is About User Perspective

React child components re-render when there’s a state change in a parent component. In our app, these children were presentational components called StackTraceItems – the individual line items of a stack trace. The parent was StackTrace, the container component at the top level of the hierarchy, where we stored uploaded source maps as state.

Source maps are stored in StackTrace state.

Here was the problem: when a user dragged in the wrong source map, StackTrace stored the file, applied the source map to the minified stack trace, and then confirmed whether or not unminification had been successful. Even if it was not successful, the state change in StackTrace caused StackTraceItems to update as if a correct source map had been uploaded.

Adding insult to injury, all of our tests were passing.

Our tests were passing, all right, but they were all unit tests. All they did was confirm that components rendered properly given certain props, and that user interaction worked as expected. The problem we were facing was that, from a user perspective, the app looked broken.

How to Write Front-end Tests That Save You Time and Anxiety

0. Have the right tools

These are the libraries and tools that allowed us to write all the React tests we wanted:

1. Have a basic set of unit tests for every component

Unit tests are great for testing components in isolation. These tests tell you whether the component renders at all, and that it does X when you click it.

Unit tests should check that:

  • component shallowly renders, given props (Enzyme’s shallow)
  • user interaction works as expected
  • elements that you want to show/hide will appear or disappear depending on props

Keep unit tests basic. And don’t rely on unit tests alone.

2. Add integration tests for all major user flows

Integration tests are necessary for testing actual user experience. After all, your user is going to experience your app as a holistic piece of software, not in the form of isolated components.

If your app is structured to have just one source of truth – where high-level state changes trigger a cascade of updates to lower-level components – it’s easy to test.

Integration tests should:

  • deep-render your components (Enzyme’s mount)
  • call setState on your top-level stateful component to trigger the changes you want to test
  • check for props passed to presentational components, thereby validating what we want the user to see on the page

What clues do you find yourself looking for when you manually test something? What tells you that a code change worked or not? Check for the props behind those visual cues in your integration tests. Your tests should impersonate your user’s eyes.

“Unit testing is great: it’s the best way to see if an algorithm does the right thing every time, or to check our input validation logic, or data transformations, or any other isolated operation. Unit testing is perfect for fundamentals. But front-end code isn’t about manipulating data. It’s about user events and rendering the right views at the right time. Front-ends are about users.” – Toptal.com

3. Don’t leave integration tests until the end

We were about 2/3 of the way through the project when I wrote up our bug-hunt test plan. The team went through the test plan twice in QA bug hunts, where it was easy as pie to find the last remaining bugs and UX fixes.

But that test plan should have doubled as an outline for integration tests right away. In fact, writing the integration tests should have happened at about the same point in the project as coming up with the test plan. That way, all that high-level testing would have been automated for future use, and at a point in the project when we had a good sense of our app’s major user flows and potential pitfalls!

In Summary

Unit tests are a great baseline. But we needed integration tests to allow us to refactor boldly, as well as save us time in manual testing along the way.

Writing tests during a fast-paced project always feels like a roadblock on your journey down the yellow-brick road. But taking that time might be the only way you get back to Kansas all in one piece.

Testing React Router: Part 1

I’m a believer.

I have joined the ranks of those who see the URL as the One Almighty Source of Truth. In this vein, we use React Router to determine which components and data to show.

But even though I’m a believer, I’m also a cynic. Let’s put our faith in the URL to the test.

Why should we test our routing?

With the URL as the source of truth, we can expect the view to significantly change depending on the URL path or query params. Shouldn’t we have tests that ensure the correct components show up? Especially if you have complicated routing: nested routes, query params, optional routes, etc.

Initial exploration

The folks who wrote React Router wrote a set of tests that verify whether a matching route can be found for a given path.

For example, here’s a test that verifies that a path of /users will yield the correct set of matching routes.

routes = [
 RootRoute = {
    childRoutes: [
      UsersRoute = {
        path: 'users',
        indexRoute: (UsersIndexRoute = {}),
        childRoutes: [
          UserRoute = {
            path: ':userID',
...
]

describe('when the location matches an index route', function () {
    it('matches the correct routes', function (done) {
        matchRoutes(routes, createLocation('/users'), function (error, match) {
          expect(match).toExist()
          expect(match.routes).toEqual([ RootRoute, UsersRoute, UsersIndexRoute ])
          done()
        })
    })
    ...
See the full React Router test here.

I couldn’t find much out there on the Interwebz about testing React routes. So, my first step was to see if I could just get React Router’s “matching routes” test suite working for an existing app that has its own simple front-end routing.

It was a bit of a struggle.

The most important part was to convert the routes in routes.js to JSON instead of JSX. This is because React Router’s tests use a matchRoutes testing tool that relies on routes having a certain structure. Their test suite recreates a complicated nest of test routes inside the test itself. If I were writing my own routes test, it would be pretty annoying to have to update a handmade list of routes in the test every time the app’s routes changed. Writing routes as JSON allows me to simply import my routes from the routes file into the test.

// in routes.js

export const Routes = [
    {
        component: App,
        onEnter: trackUserId,
        childRoutes: [
            {
                path: "/",
                component: LandingPageComponent
            },
            {
                path: "/buy",
                component: BuyComponent
            },
        ]
    }
]
// in __test__/routes_spec.js

import { Routes } from '../routes'

...

matchRoutes(Routes, createLocation('/buy'), function(error, match) { ... })

What to test?

I don’t want my routes test to test React Router’s logic – it’s not my place to make sure that React Router knows how to find a matching route from a given path. I want to test the logic that I’m creating for the app. So, what I do want to test is that all the right information is displayed on the page if I hit the “/” path versus the “/buy” path.

For example, I could check that loading “/buy” adds the SearchWidget and ShoppingCartWidget to the page, and that hitting the root “/” shows the FullPageSplashComponent.

But HOW to do that? Stay tuned, more to come.

3 Handy Spells for the Aspiring ES6 Wizard

Image: Harry Potter FanArt by fridouw

Hermione might have read a whole book on Arithmancy, but the spells that save the day always seem to boil down to Alohamora, Expelliarmus and Expecto Patronum.

Just like every wizard needs her shortlist of handy spells, there are a couple of new Javascript features that might just disarm Voldemort, save Hogwarts, and/or beautify your code every damn day.

Here are 3 particularly magical aspects of ES6 that will brighten your day faster than you can say Lumos!

Arrow functions

In addition to defining functions with the function keyword, you can now define functions using arrow notation.

Old way:

function doAThing() {
     console.log("Doing a thing now")
     console.log("Done")
}

Arrow way:

const doAThing = () => {
     console.log("Doing a thing now")
     console.log("Done")
}

(If you’ve used lambdas in Java 8, the setup will feel familiar.)

Arrow functions have 2 simple benefits and 1 complicated benefit.

Simple benefit 1: less writing

We get rid of the function keyword! So fly, so hip!

Simple benefit 2: implicit returns

You can write one-liners like this, without curly braces: const returnAString = () => "Here's your string"

Complicated benefit: no re-binding of this

Arrow functions should not be used as a one-to-one replacement for regular functions because they affect the scope of this.

In old-school Javascript functions, this is bound to the context the function is called in; for an event listener, that means the element that received the event.

<script>
const div = document.querySelector('.myDiv')
div.addEventListener('click', function() {
     console.log(this) // —> <div class='myDiv'> Hi </div>
})
</script>

An arrow function doesn’t create its own context, so the value of this is simply inherited from the parent scope.

<script>
const div = document.querySelector('.myDiv')
div.addEventListener('click', () => {
     console.log(this) // —> window
})
</script>

So, be careful about using arrow functions, particularly when using event listeners.

Object destructuring

Object destructuring is super handy. It saves you a lot of writing, and it saves a lot of headaches when it comes to passing arguments into functions and returning values from functions.

Less writing

Old way:

var thing1 = obj.thing1
var thing2 = obj.thing2

New way:

const { thing1, thing2 } = obj

ES6 simply looks inside obj for properties with names matching thing1 and thing2 and assigns the correct value.

Multiple return values

Object destructuring helps us unpack multiple values returned from a function.

function returnObj() {
     return { thing1: 'red fish', thing2: 'blue fish' }
}

const { thing1, thing2 } = returnObj();

console.log(thing1) // 'red fish'
console.log(thing2)  // 'blue fish'

Passing in arguments

function tipCalculator( { total, tip = 0.20, tax = 0.11 } ) {
     return total * tip + total * tax + total
}

const bill = tipCalculator( {tax: 0.14, total: 200} )

Here, we pass in an object to the function tipCalculator and the argument gets destructured inside the function. This allows arguments to be passed in any order — and you can even rely on defaults so you don’t have to pass in every value.

Var/let/const: variable scoping

ES6 adds the variable declaration keywords let and const alongside the familiar var, giving you a variety of scoping and overwriting rules.

  • var is function-scoped

    • if not declared inside a function, the variable is global
  • let has a block scope

    • the variable is only defined inside whatever block it’s in (including if blocks)
    • you cannot define same variable multiple times using let in the same scope
  • const (“constant”) variables cannot be re-assigned a value

    • properties of a const can be changed though!
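
The rules above in a runnable sketch:

```javascript
function scopeDemo() {
  if (true) {
    var a = 'function-scoped' // visible anywhere in this function
    let b = 'block-scoped'    // only visible inside this if block
  }
  console.log(a) // 'function-scoped'
  // console.log(b) // ReferenceError: b is not defined
}
scopeDemo()

const fish = ['red fish']
fish.push('blue fish')  // fine: a const's contents can still change
// fish = ['new array'] // TypeError: can't re-assign a const
console.log(fish) // [ 'red fish', 'blue fish' ]
```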

The rule of thumb I’ve been using: by default, assign your variables using const unless you know the value is going to change. (I constantly use const!)

This way, I never accidentally overwrite a variable whose value I never wanted to change, and I am forced to make a conscious decision at the outset about how to use variables.

If I want to declare a variable and set its value later (based on if/else logic, for example), I use let. (I almost never use var.)

Summary

ES6 has a ton of awesome, time-saving shortcuts built into it – shortcuts that any worthy wizard would make sure to add to her spellbook. These magical tools aren’t just esoteric/academic fluff — I use them all the time, and I’m a first year myself.

P.S. If you want a tutorial overview of ES6, I’m really enjoying es6.io by Wes Bos (who inspired some of my examples above). The tutorial is easy to follow, and it not only teaches you about ES6’s new features but is basically a Javascript crash course in itself. It’ll totally transfigure your ES6 familiarity!

Navigation Timing API

Learning about the Navigation Timing API is surprisingly similar to learning “how the Internet works.” In fact, if I were asked to sketch “how the Internet works” as a job interview question, I’d probably draw a timeline of events pretty similar to this:

Image: W3C Navigation Timing

Which, of course, is how the Navigation Timing API views the world, too.

The Navigation Timing API is an API that tracks navigation and page load timing. It’s especially useful for gathering information about the wait time perceived by the end user. Timestamps from along the page-load timeline (see diagram above) are stored in an object in the browser window, window.performance.timing. Doing some simple math using those timestamps can reveal the wait times hidden in parts of the process between clicking a link and seeing the fully-loaded page.

The navigationStart timestamp is collected the moment that the previous document is unloaded and the new page fetch begins. The browser looks in the cache, does a DNS lookup if it can’t find an entry, connects to the identified server, sends a request, receives the response (in bytes), and then processes the response into the fully-loaded page. All of these steps, and the transitions between them, are identified by events fired off by the browser. The Navigation Timing API notes these events and stores them as start and end times inside window.performance.timing.
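
Here’s a sketch of that “simple math” (the field names come from the Navigation Timing spec; in the browser you’d pass in window.performance.timing, and the helper name is mine):

```javascript
// Derive some perceived wait times from a Navigation Timing object.
// In the browser: pageLoadBreakdown(window.performance.timing)
function pageLoadBreakdown(t) {
  return {
    dnsLookup: t.domainLookupEnd - t.domainLookupStart,
    tcpConnect: t.connectEnd - t.connectStart,
    requestResponse: t.responseEnd - t.requestStart,
    fullPageLoad: t.loadEventEnd - t.navigationStart, // click to fully-loaded page
  }
}
```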

The Navigation Timing API ends with the window’s load event, which fires when the DOM is complete and all images, scripts and links have loaded. However, many webpages continue to fire off AJAX calls after the page has loaded. AJAX calls might even replace additional page loads, as in the case of single-page apps where the full page loads only once.

These asynchronous, post-DOM-load requests might make it harder to understand total page load performance, but it’s not too difficult. How long do you wait for an AJAX response? How long does each Javascript callback (a function waiting for an asynchronous request to complete) take to finish running? These wait times can be tracked, too – although not with the Navigation Timing API. You might want to check out New Relic Browser Monitoring.

Image: based on screenshot from New Relic Browser - Session Traces

Child or Prop?

Child or prop? Puppy or bagel?

Image: Imgur

Once we get past the amazing likeness of puppies and bagels: let’s talk about the similarities and differences between React children and props.

I initially didn’t understand the difference. Why have both, if they’re just two different ways to pass around information? When should I use props, and when should I use children?

Parents vs. Owners

The key to understanding the distinction is understanding the difference between the owner-ownee relationship and the parent-child relationship.

The parent-child relationship is a familiar one. For example, a classic DOM layout:

<ul id="parent">
    <li id="child1">Child 1</li>
    <li id="child2">Child 2</li>
</ul>

The list elements are children of the unordered list. They are literally nested inside the <ul>. In React, too, a child component is literally nested inside the parent: <Parent> <Child /> </Parent>

The owner-ownee relationship is a little different. In Facebook’s React docs the definition is this:

An owner is the component that sets the props of other components. More formally, if a component X is created in component Y’s render() method, it is said that X is owned by Y.

For example:

class Engineer extends React.Component {
    render() {
        return(
            <pre>
                <CodeSnippet language={this.props.language} codeSmell={this.props.smell} />
            </pre>
        );
    }
}

Here, the CodeSnippet is owned by the Engineer, who renders it and is responsible for its props. (Let that be a lesson to all of us.)

A parent is not the same as an owner. In the above example, pre is the parent of CodeSnippet, because a CodeSnippet is nested inside of a pre tag. But pre is not the owner.

How to pass in children

This was perhaps the source of my original confusion about children vs. props: children are actually props. But they are a special kind of prop, with their own set of utilities. The special prop is this.props.children.

class HasManyChildren extends React.Component {
  render() {
      return (
        <div id="parent">
            {this.props.children}
        </div>
      )
  }
}

ReactDOM.render(
  <HasManyChildren>
        <Child value="1" />
        <Child value="2" />
  </HasManyChildren>,
    document.getElementById('container')
);

Two children are being passed into the parent HasManyChildren – and can be accessed with this.props.children.

Which one??

So when to use children and when to use regular props as a way to pass down information? The best answer I’ve heard so far:

children are for dynamic content

Just like with a Ruby yield, you can pass in any children to a component and it can access them with {this.props.children}. For example, maybe you want to render a Ninja component when the user hits the /ninja route, but render a Pirate component if the user lands on /pirate. You can use this.props.children to render either set of components in the React Router.

// in routes
<Route path="ninja" component={NinjaComponent} />
<Route path="pirate" component={PirateComponent} />

// in app.js
render() {
    return (
      <div>
        <h1> Pirate or Ninja? </h1>
        {this.props.children} {/* renders either PirateComponent or NinjaComponent */}
      </div>
    );
}

props are like arguments that must be passed in

Props, however, are the pieces required for components to render. Just like a method’s arguments are required for a method call, a component must receive props before it can mount. If a Pirate component must have a pirate flag to be rendered, you’d better pass one in, matey:

<Pirate pirateFlag={this.props.pirateFlag} shipName={this.props.shipName} />
