Getting started


I created Unnode.js because I noticed I kept reusing the same setup in most of my projects: the Node.js Cluster API, Express, and Winston+Rollbar for logging. The purpose of the project is simple: make it easier to fire up new Node.js back ends by providing a shared, reusable component that takes care of all the mundane, low-level tasks.

Note that the project is currently in a pre-release stage. New features will be added as we work towards a 1.0 release. The plan is to improve the web back-end abstraction and then add more supported back ends, the first of which will probably be Fastify.

Even though it's pre-release, I'm personally using Unnode.js in a commercial, mission-critical REST API, as well as to power the back ends of several of our websites, including the one you're visiting right now. So don't let the pre-release tag scare you away from using Unnode.js!

Happy coding! :)

- Riku Nurminen
  Nurminen Development Oy Ltd

Hello, World!

Let's take a look at a minimal Node.js web server written in vanilla Express:


const express = require('express')
const app = express()
const port = 3000

app.get('/', (req, res) => {
    res.send('Hello World!')
})

app.listen(port, () => {
    console.log(`Example app listening at http://localhost:${port}`)
})

Now let's re-create that in Unnode.js:


require('dotenv').config({path: `${__dirname}/.env`})

const unnode = require('unnode')

if(unnode.isMaster) {
    const unnodeMaster = require('unnode').master
    const masterLog    = require('unnode').masterLogger

    unnodeMaster.init(__dirname).catch(error => {
        masterLog.safeError('emerg',
            'UnnodeMaster.init() failed', error)
    })
} else if(unnode.isWorker) {
    const unnodeWorker = require('unnode').worker
    const workerLog    = require('unnode').workerLogger

    try {
        unnodeWorker.setupServer(__dirname)
            .catch((error) => {
                workerLog.safeError('emerg',
                    'UnnodeWorker.setupServer() fail',
                    error)
                process.exit(1)
            })
    } catch (error) {
        workerLog.safeError('emerg',
            'Worker failed to start', error)
        process.exit(1)
    }
}

Env vars or .env

Configuration is read from environment variables, or from the .env file loaded by dotenv above:

# Use two CPU cores
# Enable file logging to log/app.log
# Log timezone (autodetected if omitted)
# Server listen host and port


const path = require('path')

module.exports = [
    {
        'vhost': [ '*' ],
        'routes': [
            {
                method: 'GET',
                path: '/',
                controller: 'index_controller#index',
                customParameter: 'someParameter'
            }
        ]
    }
]


const logger  = require('unnode').workerLogger
const unUtils = require('unnode').utils

class IndexController {
    constructor() { }

    index(customParameter, req, res) {
        // customParameter == "someParameter"
        const ip     = unUtils.getClientIp(req)
        const method = req.method
        const url    = unUtils.getRequestFullUrl(req)
        const agent  = req.get('user-agent')

        logger.log('info',
            `Request ${method} ${url} (from: ${ip}, `
            + `User-Agent: ${agent})`)

        res.send('Hello World!')
    }
}

module.exports = new IndexController()
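The 'index_controller#index' string in the config maps a controller file name to a method on its exported instance. A rough sketch of how such a file#method spec can be resolved (illustrative only, not Unnode's actual implementation):

```javascript
// Resolve a 'file#method' controller spec to a bound handler function.
// In a real setup the controllers object would come from require()'ing
// files in a controllers directory; here it is an in-memory stand-in.
function resolveController(spec, controllers) {
    const [file, method] = spec.split('#')
    const controller = controllers[file]
    if (!controller || typeof controller[method] !== 'function') {
        throw new Error(`Unknown controller: ${spec}`)
    }
    return controller[method].bind(controller)
}

// In-memory stand-in for the index_controller module above
const controllers = {
    index_controller: {
        index(customParameter, req, res) {
            return `index called with ${customParameter}`
        }
    }
}

const handler = resolveController('index_controller#index', controllers)
console.log(handler('someParameter', null, null))
```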

Run it:

node server.js

Your output should look like:


From this we can already see that:

  • We are utilizing process clustering to take advantage of multi-core systems; a single instance of Node.js normally runs in a single thread. Behind the scenes this uses the Node.js Cluster API.
  • We are getting proper console and file logging with ISO 8601 timestamps, syslog-style log levels, and the option of logging to Rollbar by setting the ROLLBAR_ACCESS_TOKEN and ROLLBAR_ENVIRONMENT environment variables.
  • Since we set up our routes and controllers via the Unnode server config file, we automatically get the Helmet middleware on all of our routes. We can also easily configure things such as caching, Express views, favicon, and robots.txt.
  • Our code is nicely modularized and separated into different files. Easy to maintain!

Next: Configuration