
Node Modules

This week I worked on and helped publish a node module for pushing source maps to a public API. It’s only the second time I’ve worked on a from-scratch node module, and it’s enlightening — like pulling back the curtain to reveal the truth behind the Wizard of Oz.

Node modules aren’t magical (though they’re no humbugs either). They fulfill a pretty specific purpose: encapsulating reusable code in a way that’s easy to manage.

Why node modules matter

Before node modules were really a thing, reusable code snippets were often pulled out into Immediately Invoked Function Expressions (IIFEs) or stored inside of variables (the Revealing Module pattern).

var sayAName = (function () {
     var privateName = 'name is not set yet';

     function publicSetName (name) {
          privateName = name;
     }

     function publicSayName () {
          console.log('Hi, ' + privateName);
     }

     return {
          setName: publicSetName,
          sayName: publicSayName
     };
})()

sayAName.setName('emily')
sayAName.sayName()
// 'Hi, emily'
Example inspired by Addy Osmani’s Design Patterns

But with this pattern, you don’t have much ability to manage dependencies or make explicit versioning changes. That’s where node modules are so handy. Not only is the code easily reusable and self-contained, but a version change is as easy as bumping a number in the package.json.

Node modules come in a couple different formats, but the new ES6 format is the one I’m most familiar with.

// lib/bye.js
export function sayGoodbye(){
  console.log('Bye');
}

// app.js
import { sayGoodbye } from './lib/bye'

ES6’s native module format uses export and import tokens to share consts, functions, and objects between files.

(I’ve also used module.exports and require, the CommonJS module format, in various projects depending on whether we’re transpiling to support ES5.)

var dep1 = require('./dep1');
var dep2 = require('./dep2');

module.exports = function(){
  // ...
}

Publishing node modules

Publishing a node module to npm is a pretty sweet feeling – like getting a book published, except much easier. Versioning comes with its own quirks, though.

When working on the node module for pushing source maps, we made several small changes to the 4.0.1 release to fix some tiny bugs. We published those changes to npm as pre-releases, indicating that the changes were not yet stable. The pre-release version numbers looked like this: 4.0.1-2

When you installed @latest or left the version unspecified, though, you got the pre-release instead of the release (4.0.1). That’s not ideal behavior; you want people to have to opt into pre-releases rather than get them by default.

Here’s the trick: when publishing to npm, you can tag a release explicitly with a tag like next (npm publish --tag next), which overrides the default tag of latest. Then you can make incremental changes to your heart’s content until it’s time for the next real version bump!


Gosh Yarn It

You’ll never believe it… A hot new technology has appeared on the Javascript scene.

What is yarn?

Yarn is a replacement for the npm package manager, the popular tool used to handle the ~300,000 packages in the npm registry.

While Yarn is designed to completely replace the npm workflow, it works in concert with the existing package.json: adding or removing packages with Yarn will update both the package.json and yarn.lock.

The main commands you’ll need:

yarn  // same as npm install
yarn add  // same as npm i --save
yarn add --dev  // same as npm i --save-dev
yarn remove  // same as npm uninstall

Why bother to switch from tried-and-true npm?

Locking down the right versions of all your dependencies can be tricky.

For example: I accidentally upgraded react-addons-test-utils to a new version by adding it directly with yarn add react-addons-test-utils. This got the latest version of the package, which meant it no longer had the same version number as react and react-dom – we had specifically pinned those at an earlier minor version.

When I ran the tests, this error popped up: Module not found: Error: Cannot resolve module 'react-dom/lib/ReactTestUtils' in /Users/ebookstein/newrelic/browser/overview-page/node_modules/react-addons-test-utils (module was moved between versions of react-dom)

To pin react-addons-test-utils at a specific version, I added the version to the package name like this: yarn add react-addons-test-utils@15.3.0.

Of course, locking down dependencies at the right version numbers is something that npm did for us as well. Why replace it?

Some minor benefits showed up right away from using yarn instead of npm. For one thing, I no longer have to remember --save – think of all the time you’ll save not pushing branches up to Github only to have your tests fail!

But there are some big-picture benefits too. We were using npm shrinkwrap to stabilize versions and dependencies. But shrinkwrap easily gets out of sync with package.json if you forget to run npm shrinkwrap, and when it fails, it fails silently.

If you want the full-length list of big-picture reasons that Facebook came up with, check out their blog post about yarn.

Switching to yarn: a few tricky bits

Installing yarn itself is easy.

brew install yarn

Then go to your project directory and run yarn to install all the packages and generate a yarn.lock (similar to Bundler’s Gemfile.lock for Ruby).

Migrating from npm to yarn is not totally seamless, however. For some reason, a few packages don’t transition as easily. In particular I had issues with node-sass and phantomjs. In one case, the issue was resolved by specifically adding the module with yarn (yarn add node-sass). In the case of phantomjs, though, we had to add a post-install script to our package.json that would run a phantomjs install script.

Also, if you use Jenkins or other tools that keep a saved copy of your node_modules, you might have to clear out the old node_modules and reinstall.

For Jenkins specifically, try:

  1. clearing the workspace,
  2. killing the instance of Jenkins, or
  3. adding a post-build step to delete the workspace every time (which therefore runs a complete installation every time)

Fun bits

Fun fact: yarn preserves many helpful features of npm, like symlinking.

For example, if you wanted to use your personal copy of react instead of Facebook’s react (pretend!), then you could run yarn link inside of the react directory, then yarn link react in your project directory. This would then substitute a link to your personal react copy for every require('react').

Tying up loose ends

So far, it’s been fun to knit yarn into our development process! Though I’m still working through some of the knots…


Whoop Whoop! Event Loop!

Last week a friend asked me, Is Node multi-threaded? How would Node do asynchronous work if it’s not multi-threaded? Good question! I didn’t know the answer.

But as it turns out, Node (and Javascript in general) is single-threaded. This means there is only one process, one flow of control. Instead of having multiple threads to process simultaneous work, the Node runtime environment has an ecosystem of interconnected data structures that preserve a fast workflow. At the conceptual center of these structures is the event loop.

All modern Javascript engines rely on an event loop concurrency model, not just Node. So, browser-lovers, read on.

A stack, a queue, an event loop - oh my!

Image: Mozilla Developer Network (annotations mine)

This is a simplified diagram of a Javascript runtime. (I added the “Web APIs” box for browser-related completeness.)

Start with the stack. You’ve probably been exposed to the stack before, especially in the form of stack traces for Javascript errors. A stack trace, like the stack itself, is made of the series of functions and local variables that have been pushed onto the stack. Each of those functions, together with its local variables, makes up a frame in the call stack.

Functions are added to the stack from the main Javascript program – or, from the message queue. The queue contains callback functions that are ready to be, well, “called back.” In the browser, callbacks may be associated with DOM events like “click” or “hover”; or, for both backend and frontend Javascript, they may be associated with resolved Promises. In fact, they may contain responses from any number of Web APIs, like the timer APIs (setTimeout and friends) or the infamous XMLHttpRequest. (I didn’t realize those were separate APIs until I started learning about the event loop!) When a message is processed, its callback function gets called and thereby is added to the stack.

The event loop is the loop that processes messages in the queue such that the callback functions get pushed onto the stack.
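
Here’s a tiny sketch of that ordering in action (you can paste it into a browser console or a Node REPL): even with a delay of 0, the setTimeout callback has to sit in the queue until the stack is empty.

console.log('first')              // runs right away on the stack

setTimeout(function () {
  console.log('third')            // the timer API queues this callback; the event loop
}, 0)                             // only pushes it onto the stack once the stack is clear

console.log('second')             // still synchronous, so it runs before the callback

// logs: first, second, third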

For an animated demo of how the event loop works in the browser, check out Philip Roberts' 2014 JSConf talk.

Image: Philip Roberts, 2014

Much non-blocking

A very interesting property of the event loop model is that JavaScript, unlike a lot of other languages, never blocks. Handling I/O is typically performed via events and callbacks, so when the application is waiting for an IndexedDB query to return or an XHR request to return, it can still process other things like user input. – MDN

Thanks to the event loop model, Javascript is non-blocking. That means you don’t have to wait for slow tasks to complete before processing other tasks. We just keep pushing or popping frames on and off the stack; some of those function calls add work to someone else’s plate (by calling a web API or making HTTP requests, for example), outside of our single process; we receive callbacks in our queue when that external work is done; and whenever the stack has capacity, we pull in those callbacks.

So, even though Javascript is single-threaded, the event loop model allows it to be fast and non-blocking. We just keep grabbing one thing at a time, one thing at a time, one thing at a time, keep on going forever. We are a machine.


Testing an Express Server

A couple weeks ago, I wrote a suite of tests for a new Node-Express service.

Up until that point, all the JS testing I’d done was for React components. Writing tests for an Express app is a little different, but was actually really fun and easy with the right tools. So here’s a little write-up on some testing tools and tips for backend Javascript!

Tools

You will need one or more of the following tools:

  • Mocha: a test runner. In package.json, simply add "test": "mocha" under “scripts” and all tests in the test directory will run.
  • Chai: an assertions library for Javascript, offering both an expect API and an assert API.
  • Supertest: an HTTP testing library that lets you easily send HTTP requests to a local server.
  • Node-tap: a test library implementing the Test Anything Protocol (TAP) for Node.

Most Valuable Player: Supertest

My favorite new tool on this list is the Supertest library. It is SO easy to hit each Express endpoint with a Supertest request and examine the response.

The way it works: require Supertest and pass it an Express application. This returns a supertest object onto which other methods can be chained. For example, you can chain methods like .get('/path') or .set('Content-Type', 'application/json') that modify both the object and your eventual request to the server. Expectations for the response are also chained onto the supertest object.

With mocha/chai/supertest:

const request = require('supertest')
const app = require('../app')

it('gets a 200 response', function (done) {
  request(app) // this returns a SuperTest object
    .get('/data')
    .set('Content-Type', 'application/json')
    .expect(200)
    .end(done)
})

Asynchronicity

One interesting aspect of writing tests for an Express server is the fact that your tests must run asynchronously. After all, you’re sending a request to your server; if your tests proceeded onwards full steam ahead, you might run into an expectation that checks for a response before the response has even arrived. Mocha also needs to wait for the current test to finish before heading off to other tests.

Mocha has two ways of handling asynchronous tests:

  • pass in a done callback that is called when the test ends, or
  • use Mocha’s built-in Promise handling.

Now that Mocha actually handles Promises in-house, it’s considered better practice to return the Promise and use .then and .catch rather than .end(done).

With mocha/supertest:

const request = require('supertest')
const app = require('../app')

it('gets a 200 response', function () {
  // return the Promise so Mocha knows to wait for it
  return request(app)
    .get('/data')
    .then((res) => {
      // make sure to actually throw an error so the test fails
      if (res.status !== 200) { throw new Error('expected a 200, got ' + res.status) }
    })
})

Node-tap can also be used this way, with .then and .catch callbacks on a Promise.

node-tap/supertest:

const request = require('supertest')
const test = require('tap').test
const app = require('../app')

test('gets a 200 response', (t) => {
  t.plan(1)
  request(app)
    .get('/data')
    .then((res) => {
      t.equal(res.status, 200)
    })
    .catch((err) => {
      t.fail(err)
    })
})

Now sit back, relax, and watch your tests run.


Express Is Adorbs

Express is adorable. It’s the tiniest, cutest little web framework.

Besides adorable, Express is a lightweight web framework for Node.js. It addresses a few basic needs of Node applications.

Node                                              | Express
need to implement whole HTTP server               | convenience methods for easy-to-start server
need a router to map requests to request handlers | routes requests to designated handler based on HTTP verb + path
need to actually handle requests                  | callbacks handle requests!

Hello world

A “hello world” Express server is very simple. Besides the standard package.json + node_modules (libraries) that come with a new Node project, here’s all it took:

$ npm install express --save

// app.js

var express = require('express')
var app = express() // creates an Express application

// `app` has methods like `.get` for routing HTTP requests
app.get('/', (req, res) => {
  var host = req.get('host') // `req` is an object representing the HTTP request
  res.send("try visiting " + host + "/helloworld instead")
})

app.get('/helloworld', (req, res) => {
  // `res` is an object representing the HTTP response
  res.send("hello world")
})

// `.listen` is a convenience function that does some stuff to start an HTTP server
app.listen(3000, function () {
  console.log('Example app listening on port 3000!')
})

That’s all there is to it. Run node app.js on the command line, open up localhost:3000 in a browser, and you’re all set.

Bonus fun

res and req (the arguments passed into each callback) are pretty powerful objects. Built on Node’s own response and request objects, these objects come with helpful methods to read the request or alter the response body.

For example, you can have the response send back JSON instead of a regular String:

app.get('/json', (req, res) => {
  res.json({ msg: 'this is json' });
})

I really like the simple route layout that Express encourages. Each route is a simple combination of HTTP verb + path. (A REST best practice, incidentally: keep the URL the same but vary the verb.)

In the following example, when we load and submit an HTML form, we hit two different /upload endpoints.

app.get('/upload', (req, res) => {
  const form = `
    <form action='/upload' method='post'>
      Submit this form
      <input type='submit' />
    </form>
  `
  res.send(form)
})

app.post('/upload', (req, res) => {
  res.send('hello! upload complete')
})

Hello, World (of Express)!


React Testing, Part 2

Image: Quickmeme
My last post was titled “Testing React Router: Part 1.” Which seems to imply there’s a Part 2 about React Router. But I want to… er… re-route the conversation. This post is about testing React components and how integration tests can save the day.

The project was almost complete. My time as team captain was almost over. Victory was in sight…

My team had been working on Source Maps “Dragondrop,” a UI feature allowing users to drag and drop source maps to unminify Javascript error stack traces. As the project’s team captain, I had generated a list of test cases and scheduled a team “bug hunt” to go through them manually.

The test plan looked something like this:

  1. Happy path! Drag and drop the correct source map onto the stack trace. Verify unminified line #, column #, and source code.
  2. Drag and drop the wrong source map onto the stack trace. Verify error message banner saying wrong file.
  3. Drag and drop a source map with no source content. Verify warning banner saying no source content, but correctly unminified line # and column #.

Etc.

Our bug hunt revealed that most test cases passed! But it also revealed a subtle bug… and that final bug fix ballooned into multiple extra days of work.

How could we have caught this UI issue earlier? And what could we do to prevent regressions while we refactored the code to fix the problem?

Testing Front-End Applications Is About User Perspective

React child components re-render when there’s a state change in a parent component. In our app, these children were presentational components called StackTraceItems – the individual line items of a stack trace. The parent was StackTrace, the container component at the top level of the hierarchy, where we stored uploaded source maps as state.

Source maps are stored in StackTrace state.

Here was the problem: when a user dragged in the wrong source map, StackTrace stored the file, applied the source map to the minified stack trace, and then confirmed whether or not unminification had been successful. Even if it was not successful, the state change in StackTrace caused StackTraceItems to update as if a correct source map had been uploaded.

Adding insult to injury, all of our tests were passing.

Our tests were passing, all right, but they were all unit tests. All they did was confirm that components rendered properly given certain props, and that user interaction worked as expected. The problem we were facing was that, from a user perspective, the app looked broken.

How to Write Front-end Tests That Save You Time and Anxiety

0. Have the right tools

These are the libraries and tools that allowed us to write all the React tests we wanted:

1. Have a basic set of unit tests for every component

Unit tests are great for testing components in isolation. These tests tell you whether the component renders at all, and that it does X when you click it.

Unit tests should check that:

  • component shallowly renders, given props (Enzyme’s shallow)
  • user interaction works as expected
  • elements that you want to show/hide will appear or disappear depending on props

Keep unit tests basic. And don’t rely on unit tests alone.
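
As a rough sketch of what I mean (assuming Enzyme and Chai, and a made-up StackTraceItem component that renders the line number it’s given as a prop):

import React from 'react'
import { shallow } from 'enzyme'
import { expect } from 'chai'
import StackTraceItem from '../StackTraceItem'  // hypothetical component

it('renders the line number it was given', () => {
  const wrapper = shallow(<StackTraceItem lineNumber={42} />)
  expect(wrapper.text()).to.contain('42')
})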

2. Add integration tests for all major user flows

Integration tests are necessary for testing actual user experience. After all, your user is going to experience your app as a holistic piece of software, not in the form of isolated components.

If your app is structured to have just one source of truth – where high-level state changes trigger a cascade of updates to lower-level components – it’s easy to test.

Integration tests should:

  • deep-render your components (Enzyme’s mount)
  • call setState on your top-level stateful component to trigger the changes you want to test
  • check for props passed to presentational components, thereby validating what we want the user to see on the page

What clues do you find yourself looking for when you manually test something? What tells you that a code change worked or not? Check for the props behind those visual cues in your integration tests. Your tests should impersonate your user’s eyes.
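
Here’s a rough sketch of that kind of integration test, again assuming Enzyme and Chai; the component names echo the StackTrace/StackTraceItems example above, but the props and data are made up:

import React from 'react'
import { mount } from 'enzyme'
import { expect } from 'chai'
import StackTrace from '../StackTrace'  // hypothetical top-level stateful component

const minifiedFrames = [{ line: 1, column: 12345 }]        // placeholder data
const wrongSourceMap = { file: 'some-other-app.js.map' }   // placeholder data

it('does not show frames as unminified when the wrong source map is uploaded', () => {
  const wrapper = mount(<StackTrace frames={minifiedFrames} />)

  // trigger the top-level state change that a bad upload would cause
  wrapper.setState({ sourceMap: wrongSourceMap, unminificationFailed: true })

  // assert on the props the presentational children receive -- those are the user's "eyes"
  const firstItem = wrapper.find('StackTraceItem').first()
  expect(firstItem.prop('unminified')).to.equal(false)
})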

“Unit testing is great: it’s the best way to see if an algorithm does the right thing every time, or to check our input validation logic, or data transformations, or any other isolated operation. Unit testing is perfect for fundamentals. But front-end code isn’t about manipulating data. It’s about user events and rendering the right views at the right time. Front-ends are about users.” – Toptal.com

3. Don’t leave integration tests until the end

We were about 2/3 of the way through the project when I wrote up our bug-hunt test plan. The team went through the test plan twice in QA bug hunts, where it was easy as pie to find the last remaining bugs and UX fixes.

But that test plan should have doubled as an outline for integration tests right away. In fact, writing the integration tests should have happened at about the same point in the project as coming up with the test plan. That way, all that high-level testing would have been automated for future use, and at a point in the project when we had a good sense of our app’s major user flows and potential pitfalls!

In Summary

Unit tests are a great baseline. But we needed integration tests to allow us to refactor boldly, as well as save us time in manual testing along the way.

Writing tests during a fast-paced project always feels like a roadblock on your journey down the yellow-brick road. But taking that time might be the only way you get back to Kansas all in one piece.


Testing React Router: Part 1

I’m a believer.

I have joined the ranks of those who see the URL as the One Almighty Source of Truth. In this vein, we use React Router to determine which components and data to show.

But even though I’m a believer, I’m also a cynic. Let’s put our faith in the URL to the test.

Why should we test our routing?

With the URL as the source of truth, we can expect the view to significantly change depending on the URL path or query params. Shouldn’t we have tests that ensure the correct components show up? Especially if you have complicated routing: nested routes, query params, optional routes, etc.

Initial exploration

The folks who wrote React Router wrote a set of tests that verify whether a matching route can be found for a given path.

For example, here’s a test that verifies that a path of /users will yield the correct set of matching routes.

routes = [
 RootRoute = {
    childRoutes: [
      UsersRoute = {
        path: 'users',
        indexRoute: (UsersIndexRoute = {}),
        childRoutes: [
          UserRoute = {
            path: ':userID',
...
]

describe('when the location matches an index route', function () {
    it('matches the correct routes', function (done) {
        matchRoutes(routes, createLocation('/users'), function (error, match) {
          expect(match).toExist()
          expect(match.routes).toEqual([ RootRoute, UsersRoute, UsersIndexRoute ])
          done()
        })
    })
    ...
See the full React Router test here.

I couldn’t find much out there on the Interwebz about testing React routes. So, my first step was to see if I could just get React Router’s “matching routes” test suite working for an existing app that has its own simple front-end routing.

It was a bit of a struggle.

The most important part was to convert the routes in routes.js to JSON instead of JSX. This is because React Router’s tests use a matchRoutes testing tool that relies on routes having a certain structure. Their test suite recreates a complicated nest of test routes inside the test itself. If I were writing my own routes test, it would be pretty annoying to have to update a handmade list of routes in the test every time the app’s routes changed. Writing routes as JSON will allow me to simply import my routes from the routes file into the test.

// in routes.js

export const Routes = [
    {
        component: App,
        onEnter: trackUserId,
        childRoutes: [
            {
                path: "/",
                component: "LandingPageComponent"
            },
            {
                path: "/buy",
                component: "BuyComponent"
            },
        ]
    }
]
// in __test__/routes_spec.js

import { Routes } from '../routes'

...

matchRoutes(Routes, createLocation('/buy'), function(error, match) { ... })

What to test?

I don’t want my routes test to test React Router’s logic – it’s not my place to make sure that React Router knows how to find a matching route from a given path. I want to test the logic that I’m creating for the app. So, what I do want to test is that all the right information is displayed on the page if I hit the “/” path versus the “/buy” path.

For example, I could check that loading “/buy” adds the SearchWidget and ShoppingCartWidget to the page, and that hitting the root “/” shows the FullPageSplashComponent.
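
A sketch of what that test might look like, reusing the matchRoutes and createLocation helpers from React Router’s own test suite (quoted above) and the string component names from routes.js:

// in __test__/routes_spec.js (sketch only)
import { Routes } from '../routes'

describe('the /buy route', function () {
  it('shows the buy page component', function (done) {
    matchRoutes(Routes, createLocation('/buy'), function (error, match) {
      expect(match).toExist()
      const lastRoute = match.routes[match.routes.length - 1]
      expect(lastRoute.component).toEqual('BuyComponent')
      done()
    })
  })
})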

But HOW to do that? Stay tuned, more to come.


3 Handy Spells for the Aspiring ES6 Wizard

Image: Harry Potter FanArt by fridouw

Hermione might have read a whole book on Arithmancy, but the spells that save the day always seem to boil down to Alohamora, Expelliarmus and Expecto Patronum.

Just like every wizard needs her shortlist of handy spells, there are a couple of new Javascript features that might just disarm Voldemort, save Hogwarts, and/or beautify your code every damn day.

Here are 3 particularly magical aspects of ES6 that will brighten your day faster than you can say Lumos!

Arrow functions

In addition to defining functions with the function keyword, you can now define functions using arrow notation.

Old way:

function doAThing() {
     console.log("Doing a thing now")
     console.log("Done")
}

Arrow way:

const doAThing = () => {
     console.log("Doing a thing now")
     console.log("Done")
}

(If you’ve used lambdas in Java 8, the setup will feel familiar.)

Arrow functions have 2 simple benefits and 1 complicated benefit.

Simple benefit 1: less writing

We get rid of the function keyword! So fly, so hip!

Simple benefit 2: implicit returns

You can write one-liners like this, without curly braces: const returnAString = () => "Here's your string"

Complicated benefit: no re-binding of this

Arrow functions should not be used as a one-to-one replacement for regular functions because they affect the scope of this.

In old-school Javascript functions, this is bound at call time; inside an event listener, it refers to the element the handler is attached to.

<script>
const div = document.querySelector('.myDiv')
div.addEventListener('click', function() {
     console.log(this) // -> <div class='myDiv'> Hi </div>
})
</script>

An arrow function doesn’t create its own context, so the value of this is simply inherited from the parent scope.

<script>
const div = document.querySelector('.myDiv')
div.addEventListener('click', () => {
     console.log(this) // -> window
})
</script>

So, be careful about using arrow functions, particularly when using event listeners.

Object destructuring

Object destructuring is super handy. It saves you a lot of writing, and it saves a lot of headaches when it comes to passing arguments into functions and returning values from functions.

Less writing

Old way:

var thing1 = obj.thing1
var thing2 = obj.thing2

New way:

const { thing1, thing2 } = obj

ES6 simply looks inside obj for properties with names matching thing1 and thing2 and assigns the correct value.

Multiple return values

Object destructuring helps us unpack multiple values returned from a function.

function returnObj( ) {
     return { thing1: 'red fish', thing2: 'blue fish' }
}

const { thing1, thing2 } = returnObj();

console.log(thing1) // 'red fish'
console.log(thing2)  // 'blue fish'

Passing in arguments

function tipCalculator( { total, tip = 0.20, tax = 0.11 } ) {
     return total * tip + total * tax + total
}

const bill = tipCalculator( {tax: 0.14, total: 200} )

Here, we pass in an object to the function tipCalculator and the argument gets destructured inside the function. This allows arguments to be passed in any order — and you can even rely on defaults so you don’t have to pass in every value.
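
For instance, calling it with only a total falls back to the default tip and tax:

const defaultBill = tipCalculator({ total: 100 })
// 100 * 0.20 + 100 * 0.11 + 100 = 131
console.log(defaultBill) // 131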

Var/let/const: variable scoping

Javascript’s variable declaration keywords var, let and const (the latter two new in ES6) allow you to declare variables with a variety of scoping and overwriting rules.

  • var is function-scoped

    • if not declared inside a function, the variable is global
  • let has a block scope

    • the variable is only defined inside whatever block it’s in (including if blocks)
    • you cannot define same variable multiple times using let in the same scope
  • const (“constant”) variables cannot be re-assigned a value

    • properties of a const can be changed though!

The rule of thumb I’ve been using: by default, assign your variables using const unless you know the value is going to change. (I constantly use const!)

This way, I never accidentally overwrite a variable whose value I never wanted to change, and I am forced to make a conscious decision at the outset about how to use variables.

If I want to declare a variable and set its value later (based on if/else logic, for example), I use let. (I almost never use var.)
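
A quick demo of those rules:

const crew = { captain: 'emily' }
crew.captain = 'someone else'   // fine: properties of a const can still change
// crew = {}                    // TypeError: a const can't be re-assigned

if (true) {
  let blockScoped = 'only lives inside this block'
  var functionScoped = 'leaks out of the block'
}
// console.log(blockScoped)     // ReferenceError: let is scoped to the block
console.log(functionScoped)     // 'leaks out of the block'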

Summary

ES6 has a ton of awesome, time-saving shortcuts built into it – shortcuts that any worthy wizard would make sure to add to her spellbook. These magical tools aren’t just esoteric/academic fluff — I use them all the time, and I’m a first year myself.

P.S. If you want a tutorial overview of ES6, I’m really enjoying es6.io by Wes Bos (who inspired some of my examples above). The tutorial is easy to follow, and it not only teaches you about ES6’s new features but is basically a Javascript crash course in itself. It’ll totally transfigure your ES6 familiarity!


Navigation Timing API

Learning about the Navigation Timing API is surprisingly similar to learning “how the Internet works.” In fact, if I was asked to sketch “how the Internet works” as a job interview question, I’d probably draw a timeline of events pretty similar to this:

Image: W3C Navigation Timing

Which, of course, is how the Navigation Timing API views the world, too.

The Navigation Timing API is an API that tracks navigation and page load timing. It’s especially useful for gathering information about the wait time perceived by the end user. Timestamps from along the page-load timeline (see diagram above) are stored in an object in the browser window, window.performance.timing. Doing some simple math using those timestamps can reveal the wait times hidden in parts of the process between clicking a link and seeing the fully-loaded page.

The navigationStart timestamp is collected the moment that the previous document is unloaded and the new page fetch begins. The browser looks in the cache, does a DNS lookup if it can’t find an entry, connects to the identified server, sends a request, receives the response (in bytes), and then processes the response into the fully-loaded page. All of these steps, and the transitions between them, are identified by events fired off by the browser. The Navigation Timing API notes these events and stores them as start and end times inside window.performance.timing.
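
For example, here’s the kind of simple math you can do with those timestamps from the browser console once a page has loaded (the attribute names below all live on window.performance.timing):

const t = window.performance.timing

const dnsLookup       = t.domainLookupEnd - t.domainLookupStart
const tcpConnect      = t.connectEnd - t.connectStart
const timeToFirstByte = t.responseStart - t.navigationStart
const fullPageLoad    = t.loadEventEnd - t.navigationStart

console.log({ dnsLookup, tcpConnect, timeToFirstByte, fullPageLoad }) // all in milliseconds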

The Navigation Timing API ends with the window’s load event, which fires when the DOM is complete and all images, scripts and links have loaded. However, many webpages continue to fire off AJAX calls after the page has loaded. AJAX calls might even replace additional page loads, as in the case of single-page apps where the full page loads only once.

These asynchronous, post-DOM-load requests might make it harder to understand total page load performance, but it’s not too difficult. How long do you wait for an AJAX response? How long does each Javascript callback (a function waiting for an asynchronous request to complete) take to finish running? These wait times can be tracked, too – although not with the Navigation Timing API. You might want to check out New Relic Browser Monitoring.

Image: based on screenshot from New Relic Browser - Session Traces


Child or Prop?

Child or prop? Puppy or bagel?

Image: Imgur

Once we get past the amazing likeness of puppies and bagels: let’s talk about the similarities and differences between React children and props.

I initially didn’t understand the difference. Why have both, if they’re just two different ways to pass around information? When should I use props, and when should I use children?

Parents vs. Owners

The key to understanding the distinction is understanding the difference between the owner-ownee relationship and the parent-child relationship.

The parent-child relationship is a familiar one. For example, a classic DOM layout:

<ul id="parent">
    <li id="child1">Child 1</li>
    <li id="child2">Child 2</li>
</ul>

The list elements are children of the unordered list. They are literally nested inside the <ul>. In React, too, a child component is literally nested inside the parent: <Parent> <Child /> </Parent>

The owner-ownee relationship is a little different. In Facebook’s React docs the definition is this:

An owner is the component that sets the props of other components. More formally, if a component X is created in component Y’s render( ) method, it is said that X is owned by Y.

For example:

class Engineer extends React.Component {
    render() {
        return(
            <pre>
                <CodeSnippet language={this.props.language} codeSmell={this.props.smell} />
            </pre>
        );
    }
}

Here, the CodeSnippet is owned by the Engineer, who renders it and is responsible for its props. (Let that be a lesson to all of us.)

A parent is not the same as an owner. In the above example, pre is the parent of CodeSnippet, because a CodeSnippet is nested inside of a pre tag. But pre is not the owner.

How to pass in children

This was perhaps the source of my original confusion about children vs. props: children are actually props. But they are a special kind of prop, with their own set of utilities. The special prop is this.props.children.

class HasManyChildren extends React.Component {
  render() {
      return (
        <div id="parent">
            {this.props.children}
        </div>
      )
  }
}

ReactDOM.render(
  <HasManyChildren>
        <Child value="1" />
        <Child value="2" />
  </HasManyChildren>,
    document.getElementById('container')
);

Two children are being passed into the parent HasManyChildren – and can be accessed with this.props.children.

Which one??

So when to use children and when to use regular props as a way to pass down information? The best answer I’ve heard so far:

children are for dynamic content

Just like with a Ruby yield, you can pass in any children to a component and it can access them with {this.props.children}. For example, maybe you want to render a Ninja component when the user hits the /ninja route, but render a Pirate component if the user lands on /pirate. You can use this.props.children to render either set of components in the React Router.

// in routes
<Route path="ninja" component={NinjaComponent} />
<Route path="pirate" component={PirateComponent} />

// in app.js
render() {
    return (
      <div>
        <h1> Pirate or Ninja? </h1>
        {this.props.children} {/* renders either PirateComponent or NinjaComponent */}
      </div>
    );
}
props are like arguments that must be passed in

Props, however, are the pieces required for components to render. Just like a method’s arguments are required for a method call, a component must receive props before it can mount. If a Pirate component must have a pirate flag to be rendered, you’d better pass one in, matey:

<Pirate pirateFlag={this.props.pirateFlag} shipName={this.props.shipName} />


AlterConf

On tech diversity, structural change, and money

Related to the 3 next steps I outline in my talk:

  1. Social Justice Fund NW grantees, for inspiration

  2. Resource Generation && How To Give Boldly From Earned Income, for resources and guidelines on giving plans

  3. Portland Resource Generation chapter to find out more about the upcoming discussion group


Putting this talk together for AlterConf Portland has been an interesting journey. I’ll admit that for the two weeks before the conference I felt a lot of anxiety about giving this talk. Not only does the presentation touch on a few subjects that are taboo, at least in white middle-class society (race, wealth, class privilege), but it also required some public vulnerability from me around my class and race privilege. I also really wanted to do justice to these huge topics – which felt challenging since I am still learning so much myself.

Since I am a white person speaking about race, and am someone with an upper-middle-class childhood speaking about giving away money, I wouldn’t be surprised if there are moments when some people in the audience feel uncomfortable. It is my hope that the discomfort is not due to any personal harm caused by my words, but is instead the GOOD kind of discomfort, like the discomfort of entertaining new and different thoughts.

I am very open to positive or constructive feedback and would love to hear from you. The best way to reach me is by email. And if you have any questions about giving plans, Resource Generation or Social Justice Fund Northwest, hit me up!


Setting Goals

Image: Cognology

SMART goals

Today my new manager told me he believes measurable goals are overrated.

Hm.

Ever since I started working as a software engineer, I have been fearful of falling behind. Of not knowing enough. Of not being intelligent enough. I set tons of goals for myself as soon as I started my job, hoping to power through the uncertainty and the fear. At first these goals were hugely unhelpful. Not only were my lists of goals paralyzingly long – multiple pages in a Google doc – but they demanded time-consuming data-tracking just to know whether or not I was succeeding.

When I worked for Causes.com organizing student organizers, I learned about SMART goals – goals that are Specific, Measurable, Achievable, Realistic, and Timely. I thought SMART goals sounded so, well, smart. Having recently come from college organizing myself, I thought that a good campus campaign should be concrete, have an end date in sight, and convey a clear story about results – all things that SMART goals spoke to.

Later, as a software engineer, I assumed that my personal goals should also be SMART. I spent a lot of time trying to think of ways to measure my knowledge and learning. By calculating sheer time spent? By counting the number of bonus study sessions per week? By recording a thing I learned every day? By counting numbers of pull requests, pairing sessions, projects completed on my own? I asked my most goal-oriented coworkers to help me craft my goals and strategies, I tried to institute new practices, I kept a daily log of things I’d learned or accomplished.

But in the end, my goals didn’t act as the solid guideposts I was hoping for. They were worthwhile, for sure, because they forced me to clarify my challenges and intentions. But the act of setting those goals didn’t get me much closer to the knowledge and skills – or the feelings of safety and progress – that I was after.

Measurable goals are overrated

At first, when my manager told me that measurable goals are overrated, I felt resistant. Goals should be measurable, I thought to myself, dismayed. How else can you know that you’re meeting them? Especially considering the goals he was suggesting for me:

  • acquiring mastery of UI development (by pairing closely with my new coworkers)
  • developing leadership skills (via project ownership)

Mastery was an exciting, thrilling word to read in the context of goals. But it made me feel nervous too. How would I know if I was on track for something as giant-sounding as “mastery”?

When I asked that question, my manager said that forcing nuanced goals to meet requirements for objective measurability squashes out the richness of the overall goal. “Mastery of UI development” is a very rich goal – it has many aspects to it, none of which are easily measurable. In fact, it would require a good amount of mental contortion to find opportunities for objectivity.

Hearing his arguments against measurability, I started to feel a sense of relief. This way, I wouldn’t set myself up for another failure on improperly-calibrated, falsely-measurable goals. I wouldn’t have to add a bunch of tasks to my daily to-do list just to try and track progress. I wouldn’t be handing myself yet another yardstick with which to measure myself and find myself lacking.

The part that finally sold me: my manager asserted that the point of having these goals is to feel excited about coming to work each day. Thinking about UI mastery (as unmeasurable as it is) is already doing the trick.

Relational goals

An interesting aspect of my manager’s goals philosophy is that I can meet my goals simply by going about my regular business: by collaborating on code each day with good conscience and good intentions, and by talking with my manager each week in our regular check-in to resolve challenges as they come up.

This aspect reminded me of a term I’ve started hearing recently in activist circles, something called relational organizing. In relational organizing, as I understand it, human-to-human relationships are the basis upon which community activism and power is built.

Transferring this idea to my goals at work, the relationship is the building block. Not numbers of facts learned in a day, not PR counts. My goals will be met via relationships between me and my coworkers and between myself and my manager. Goals will be achieved not by the individual studying solo and on the side, but by the individual being incorporated into the flow of others' work.

I think I’m going to really enjoy coming to work each day with this in mind.


P.S. Apologies to SONG if I’m misusing the concept of relational organizing!


The Legend of Cassandra

Image: The Guardian

According to Greek legend, Cassandra was a prophet. She foresaw terrible, tragic futures — but as if that wasn’t enough, she was also cursed to never be believed.

I had always wondered why on earth Apache Cassandra was named after a prophet who was never believed. Doesn’t give you the greatest confidence in their software, no?

But yesterday, I learned a ton about Cassandra from a knowledgeable coworker — and now, the reference to Greek legend makes sense.

What is Cassandra?

Cassandra is a distributed data management system built for handling large volumes of I/O with no single point of failure. With Cassandra, your database is just a file system, with files spread out over a number of nodes that are arranged in a cluster.

Image: PlanetCassandra.org

How is it different from MySQL?

Relational databases like MySQL and Postgres use a binary tree structure to store information. Binary trees are great for reading information quickly — they’re a data structure that accommodates binary search, dividing predictable/sortable information in half and searching the remaining half for the desired row. However, adding new rows to a binary tree is kind of ugly. You add the new information as randomly-inserted leaf nodes in the tree and now your binary tree is out of order.

Cassandra solves this problem by getting rid of the binary tree and replacing it with a token ring. In a token ring, you start with a key (like “start time” or “id”) and generate a hashed token, a signed 64-bit Long with a value somewhere between -2^63 and 2^63 - 1. The ring is divided into chunks, each of which has a node assigned to it. For example, you might have a chunk of hashed keys assigned to Node 1, another chunk to Node 3, another chunk to Node 2, another chunk to Node 1, another chunk to Node 3, etc. The chunks are distributed in a somewhat random way to try to fairly balance the load — you wouldn’t want a bunch of traffic going to Node 1 and overwhelming it with requests.

But, just like the prophet from Greek legend, never believe Cassandra.

Can’t trust Cassandra

Any one node by itself can’t be trusted. In fact, the token ring doesn’t just assign one node to each chunk of hash keys, but a list of nodes in the order in which to try to write the data. For example: chunk 256 => [2,3,1]. This will try to write to Node 2 first, then 3, then 1. The data will be stored on multiple nodes eventually.
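
Here’s a toy sketch of that idea (nothing like Cassandra’s real implementation, just an illustration): each chunk of the hash space maps to an ordered list of nodes to try.

// toy illustration only: a "ring" of chunks, each owning a slice of the hash space
const ring = [
  { maxToken: -3000, nodes: [2, 3, 1] },
  { maxToken:     0, nodes: [3, 1, 2] },
  { maxToken:  3000, nodes: [1, 2, 3] },
  { maxToken: Infinity, nodes: [2, 1, 3] },
]

function writeOrderFor(hashedKey) {
  return ring.find(chunk => hashedKey <= chunk.maxToken).nodes
}

console.log(writeOrderFor(256)) // [1, 2, 3] -> try Node 1 first, then 2, then 3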

Every cluster has a replication factor, a value that determines how many nodes to repeat the data on. For example, if our 3-node cluster has a replication factor (RF) of 3, then data will be stored on all 3 of the nodes. My coworker strongly believes that 3 is the minimum sensible RF — because if any of your nodes ever goes down, then you’d rely on a backup node, and you’d need 1 more node to double-check to verify the data is good. (3 nodes - 1 node = 2 nodes. 1 node for requesting the data, 1 to double-check that the first one is right.)

Image: HakkaLabs

Scalable system

One thing that’s amazing about Cassandra is how easy it is to add a new node and therefore add capacity to a cluster. You add the provisioned server to the cluster by simply starting Cassandra on the server (it’s a Java app), then adding its IP address to the list of known nodes. The Cassandra node gets the schema information from the “seed” nodes (the seed nodes are just the nodes, listed by IP address, found in the Cassandra config under “seeds”), the token ring breaks and re-forms to include the new node, and the new node begins streaming data from older nodes. Once it’s joined the cluster and owns all the information it’s expected to own, then it’s ready to receive traffic, and you’ve just added a terabyte of available storage space to your database.

There are no masters, there are no replicas. Just nodes.

It’s a cool system. The co-op nerd side of me loves the hierarchy-free model based on negotiation, shared ownership, and consent among nodes.

However, there are some serious drawbacks to using Cassandra.

What’s wrong with Cassandra?

The biggest downside of Cassandra that has come up so far in conversation: it was designed to solve the problem of slow writes. When MySQL or Postgres write something new to a binary tree, that information gets spread out over the binary tree – it’s not all in one place. Cassandra, on the other hand, writes new information all together. This is convenient when your model of storage is a spinning hard disk drive, where you’d want all the writes to be together.

But with the advent of solid-state drives, this problem might feel a little irrelevant. And given that Cassandra requires a lot of specialized knowledge outside the comfort zone of people who understand SQL really well, there might not be enough of an incentive to make the switch – or to even stop using Cassandra after having started.



Going Deeper With DNS

Sometimes you just need to say things out loud to someone else to know that you understand something. That’s what I did with my coworker the other day – just described out loud to him how I thought our internal service worked. It really helped. I got to put my vague thoughts into words, and he offered corrections as needed.

Turns out that, like most things, our service works because of the magic that is the Internet. HTTP requests, DNS lookup, IP addresses, CORS, etc, are all at the core of how it functions. Trying to explain how the service worked reminded me of the code interview prep question I practiced when I was trying to get my first software job: “Explain how the Internet works at a high level.”

The /etc/hosts hack

DNS (Domain Name System) lookup is hierarchical. When you make a request for a domain like google.com, the request will travel up through a series of DNS servers until it finds an entry for google.com that points to a specific IP address.

From your laptop, the very first place your computer looks in that hierarchy chain is a file called /etc/hosts. It contains a list of domain names and IP addresses, just like any other DNS server. And so, if you put an entry like this:

127.0.0.1   www.google.com

then from now on, when you try to load www.google.com in your browser (unless your browser has a cached version), you will actually be directed to 127.0.0.1 – your own machine.

This is useful if you want to simulate, say, hitting an internal service that proxies your request to another app. Put the IP address of the internal service with the name of your app in your /etc/hosts file, and your computer will map the domain of the app to the real live internal service, located at that IP address, that receives the request and proxies it elsewhere.

# the internal service IP        the app
55.5.555.55                      mycoolapp.mydomain.com

Who controls DNS?

So besides just putting it in your /etc/hosts file, how do IP addresses end up with registered domain names?

When you buy a domain (from a vendor like Namecheap or GoDaddy), those vendors work with IANA to add your DNS entry – that’s the department of ICANN that controls IP/DNS stuff. Yes, deep down in there, there’s a bureaucracy (no offense, IANA) sitting inside the Internet, pulling the strings.

Companies may have their own DNS servers, too. Internal apps and services that don’t need to be accessed by the public Internet can have IP addresses that don’t need to be registered through IANA.

Domain Name VS. Host VS. IP

55.5.555.55     mydata.com        # domain name
55.5.555.55     www.mydata.com    # sub-domain

It used to be the case that subdomains usually had their own unique IP address. Then we learned about Reduce, Reuse, Recycle. Now, it’s common for any one IP address to have many subdomains. So how do we know where exactly to send the request?

Every HTTP request comes with a Host header. The Host header identifies the location – the actual host machine – to where we’re sending packets. The host is the domain name of the server (as well as the port if a nonstandard port is being used). Getting close to the metal here!

Knowing the hostname and port, we can now send the request to the correct host at the IP address we looked up.
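
As a minimal sketch of the server side of this, plain Node makes the Host header available on every incoming request, which is what a proxying service (like the Wall example below) would inspect:

const http = require('http')

http.createServer((req, res) => {
  const host = req.headers.host   // e.g. 'mycoolapp.mydomain.com:8080'
  res.end('this request was addressed to ' + host)
}).listen(3000)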

In Summary: A Midsummer Night’s Dream

Image: John William Waterhouse - Art Renewal Center – description, Public Domain, https://commons.wikimedia.org/w/index.php?curid=39913701

Let’s say you’re making an internal service that receives requests from one lover, inserts some headers into the request, and proxies the request to its ultimate destination – then hands back the response. A go-between romantic messenger service.

Let’s pull in some Shakespeare: your service is Wall, and your lovers are Pyramus and Thisbe.

Here is the information you need:

Wall: wall.service.com
Pyramus: pyramus.mylove.org:8080
Thisbe: thisbe.mydove.org:5000

1.1.1.1 wall.service.com
99.9.9.999 pyramus.mylove.org
88.8.8.888 thisbe.mydove.org

Let’s put Pyramus behind the Wall. We’ll need a DNS entry that looks like this:

1.1.1.1 pyramus.mylove.org

Now, Thisbe sends an HTTP request to Pyramus with the domain name wall.service.com and the host header pyramus.mylove.org:8080.

curl 'wall.service.com' -H 'Host: pyramus.mylove.org:8080'

The Wall receives the request (because it’s at 1.1.1.1, the IP address that matches the domain), sees the host header, and passes the request on to Pyramus. Success! (Of course, there will be a response back to Thisbe too, but it’s too saccharine to print here.)

Exeunt.


Hello World

Almost a year and a half ago, on March 2, 2015, I started work as a software engineer. I was new to Portland, new to programming, and frankly, new to having any kind of foreseeable career path.

The first year was a rush: trying to learn as quickly as possible, trying to do well, trying not to fall behind. I set lots of goals for myself. Since I was working on a public REST API, my manager suggested that I learn as much about APIs as possible. Not only did I take that advice, but I started to veer my long-term career goals in the direction of becoming an API master. APIs are neat. They are symbolic of the Internet itself, showing how interconnection enables complexity and creativity. Pursuing that domain-specific knowledge, I focused on having a specialty – on being special.

From there it was easy to start playing the career ladder game. Can I get a promotion six months from now? What do I have to do to get there? I focused on promotion as a sign I was doing well, a sign that I was on the right track. I felt hugely validated (and, of course, sure that there was some mistake) when I was promoted to Software Engineer II a year in. I couldn’t wait to jump right in and start wading towards III, then IV, then senior. I joined a small group of women specifically working on “leveling up” and kept adding to my list of goals.

Over the last several months, though, I got a wake-up call. Part of the wake-up call was that I changed jobs. The API team (me) was merged into another team, one that owned a bunch of internal services. I stopped driving towards API knowledge and Ruby/Grape expertise and scrambled to pick up Java. Now that I was no longer working full-time on a REST API, my goals around API mastery loosened their grip. When I was asked where I was trying to go long-term, I found that my desire to climb to the level of senior engineer as quickly as possible had also disappeared. Instead, I started wondering: why am I here? What motivates me? And I remembered promises I’d made to myself back when I first embarked on the software engineering path. I remembered why I chose a tech career and how it fits into my life goals, not just my career plans.

It’s been an interesting process to re-envision my nearer-term goals with respect to these larger life values. I’m surprised by what has taken root so strongly: to really pursue work as a full-stack engineer. To be somebody who makes a point of being a generalist. I’ve been a generalist all my life. Why switch to being a specialist now? The heck with “Jack of all trades, master of none” and the associated stigma. I’ll be Jack. I like variety. Variety is useful.

I’m starting this blog (in earnest this time!) to capture what I learn in the process of becoming a Jack of all trades. If I’m going to gobble up knowledge involving React, Angular, Java, Kafka, Rails, Docker, Jenkins, and so much more, then I’ll need a place to recompile and condense that information – and that’s here.
