Ionică Bizău

Web Developer, Linux geek and Musician

What to Do When Your Website is Broken

Our job as developers is to break and fix stuff every day. Sometimes, some of us even do it on production servers. But during emergencies, we tend to apply a quick fix which may not actually fix anything at all. In fact, it could even make things worse.

On some days, it works! But it's still very dangerous:

This is a good example of fixing a bug on production with a happy ending. [source]

These things happen. But when they do, how can we address these issues with confidence and in the best way possible?

In this article I will show you 7 general steps to fix your website, application, or code when it's broken.

1. Don't panic. Relax.

We are all humans. We make mistakes. When something breaks, don't contribute to the mess by panicking. Try to settle down before you do anything about it. Drink a glass of water and relax while trying to figure out what's going on. Remember, think first before you code!

2. Reproduce the problem. Then reproduce it locally.

Let's suppose the website you're having a problem with is on production and users start complaining about a specific functionality.

"Doesn't work" is not enough. Ask your users to define what doesn't work means. Is there an error? What are the steps to reproduce the issue? Does it happen every time?

Can you reproduce the bug on the production website? If yes, then try to reproduce it locally (or in your development environment); once you can, you're ready to start debugging it. If you can't reproduce it locally, check the machine-specific settings and resources. Here are a few:

  • Database content: Maybe the bug appears only at a specific document that exists on production, but you don't have it locally.
  • Environment configuration: Maybe there is an environment variable or setting you need to change (a quick check is sketched after this list).
  • Make sure you're in sync with the production server code (including dependencies' versions).
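When environment differences are the suspect, it helps to dump the settings your app actually sees and compare them with production. Here is a minimal sketch for a Node.js app; the variable names below are just examples, adapt them to your own configuration:

const requiredVars = ["NODE_ENV", "DATABASE_URL", "API_BASE_URL"];

requiredVars.forEach(name => {
    if (process.env[name] === undefined) {
        console.warn("Missing environment variable: " + name);
    } else {
        console.log(name + "=" + process.env[name]);
    }
});

For the dependency side, running npm ls both locally and on the server is a quick way to spot version drift.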

But it's working on my machine!

Well, if you can't reproduce it on production, there is a high chance that it's a browser-specific issue. For example, maybe you used a function that doesn't exist on some old version of Internet Explorer, or maybe you shipped your ES2015 code directly, without transpiling it to ES5. In that case, if it's still unclear what's going on, ask for more information: Where does the issue appear? What operating system and what browser (including their versions) are they using?
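For example, String.prototype.includes was added in ES2015 and is missing from Internet Explorer, so untranspiled code that relies on it will break only in that browser. A quick feature check like the sketch below can confirm whether that's the class of problem you're dealing with:

// ES2015 syntax (e.g. arrow functions) is a parse error in old browsers,
// and newer built-ins may simply be missing at runtime.
if (typeof String.prototype.includes !== "function") {
    // Older browser (e.g. Internet Explorer): load a polyfill here,
    // or ship code transpiled to ES5 instead.
    console.log("String.prototype.includes is not available");
}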

If you're using the same OS and browser versions, ask them to clear the browser cache and retry. You should also clear your own browser cache and retry.

Oh, yeah! Typos!

Before starting to reproduce the issue and debug the actual code, make sure your input data is correct. You can't expect correct output if your input is incorrect (duh?).

Sometimes, just by staring a bit at the code/URLs/input content, you can easily fix things. As developers, we often waste a lot of time debugging stuff that looks correct but turns out to be a typo in the end. And then we are like:

After wasting a lot of time trying to fix a typo.

So, pay attention to typos. They are funny to fix, but we often forget to stop and think: well, maybe it's just a typo.

Read the full article on CodeMentor »

Read more »

The Amazing Tangram Square

I was 7 years old when I first found out what the Tangram Square is. I saw it in a Maths book and I quickly started to have fun with it. :dizzy:

The Tangram is a Chinese geometrical puzzle consisting of a square cut into seven pieces:

  • 2 large right triangles
  • 1 medium right triangle
  • 2 small right triangles
  • 1 square
  • 1 parallelogram

These pieces can be used to create different shapes (animals, objects etc).

Recently I thought that it wouldn't be a bad idea to bring this game to the web, so anybody can play it.

And I did it! You can play it here. :sparkles:

Using svg.js and a bit of math I created the pieces. To make them interactive, I set up the mouse-related events (click, drag etc.); a rough sketch of the wiring follows the control list below:

  • Left click: rotate left
  • Right click: rotate right
  • CTRL + left click: flip
  • Hold left click + move: drag
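Here is a rough sketch of how a single piece can be wired up, assuming svg.js v2 and the svg.draggable.js plugin; the element id, the color and the 45-degree step are just illustrative, and the real game code is on GitHub:

// A drawing inside a <div id="tangram"></div> element (hypothetical id)
var draw = SVG("tangram").size(400, 400);

// One piece: a small right triangle
var piece = draw.polygon("0,0 100,0 0,100").fill("#e74c3c");

// Left click: rotate left; CTRL + left click: flip
piece.on("click", function (e) {
    if (e.ctrlKey) {
        piece.flip("x");
    } else {
        piece.rotate(piece.transform("rotation") - 45);
    }
});

// Right click: rotate right
piece.on("contextmenu", function (e) {
    e.preventDefault();
    piece.rotate(piece.transform("rotation") + 45);
});

// Hold left click + move: drag (provided by the draggable plugin)
piece.draggable();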

You can build pretty much anything. From hearts... :sparkling_heart:

...to cats :heart_eyes_cat:...

...and a lot of other shapes. Then, once you get the hang of it, be creative and create your own stuff.

The source code is hosted on GitHub. Hope you'll enjoy playing this game! :grin:

Read more »

How to write a web scraper in Node.js

Sometimes we need to collect information from different web pages automagically. Obviously, a human is not needed for that. A smart script can do the job pretty well, especially if the task is repetitive. :dizzy:

scrape-it

When there is no web-based API to share the data with our app, and we still want to extract some data from that website, we have to fall back to scraping. :boom:

This means:

  1. Load the page (a GET request is often enough)
  2. Parse the HTML result
  3. Extract the needed data

In Node.js, all three steps are quite easy because the functionality has already been written for us, in different modules, by different developers.

Because I often scrape random websites, I created yet another scraper: scrape-it, a Node.js scraper for humans. It's designed to be really simple to use while still being quite minimalist. :zap:

Here is how I did it:

1. Load the page

To load the web page, we need to use a library that makes HTTP(S) requests. There are a lot of modules that do that. As always, I recommend choosing simple/small modules, so I wrote a tiny package that does it: tinyreq.

Using this module you can easily get the HTML rendered by the server from a web page:

const request = require("tinyreq");

request("http://ionicabizau.net/", function (err, body) {
    console.log(err || body); // Print out the HTML
});

tinyreq is actually a friendlier wrapper around the built-in http.request.
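To get an idea of what it saves you from, here is roughly the same request written with the built-in http module alone (just a sketch of the manual approach, not how tinyreq is implemented):

const http = require("http");

// Fetch the page using only the built-in module
http.get("http://ionicabizau.net/", res => {
    let body = "";
    res.on("data", chunk => { body += chunk; });
    res.on("end", () => {
        console.log(body); // Print out the HTML
    });
}).on("error", err => {
    console.error(err);
});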

Once we have a piece of HTML, we need to parse it. How to do that? :thought_balloon:

2. Parsing the HTML

Well, that's a complex thing, but other people have already done it for us. I really like the cheerio module. It provides a jQuery-like interface for interacting with a piece of HTML you already have.

const cheerio = require("cheerio");

// Parse the HTML
let $ = cheerio.load('<h2 class="title">Hello world</h2>');

// Take the h2.title element and show the text
console.log($("h2.title").text());
// => Hello world

Because I like to modularize all the things, I created cheerio-req, which is basically tinyreq combined with cheerio (the previous two steps put together):

const cheerioReq = require("cheerio-req");

cheerioReq("http://ionicabizau.net", (err, $) => {
    console.log($(".header h1").text());
    // => Ionică Bizău
});

Since we already know how to parse the HTML, the next step is to build a nice public interface that we can export as a module. :sparkles:

3. Extract the needed data

Putting the previous steps together, we have this (follow the inline comments):

"use strict"

// Import the dependencies
const cheerio = require("cheerio")
    , req = require("tinyreq")
    ;

// Define the scrape function
function scrape(url, data, cb) {
    // 1. Create the request
    req(url, (err, body) => {
        if (err) { return cb(err); }

        // 2. Parse the HTML
        let $ = cheerio.load(body)
          , pageData = {}
          ;

        // 3. Extract the data
        Object.keys(data).forEach(k => {
            pageData[k] = $(data[k]).text();
        });

        // Send the data in the callback
        cb(null, pageData);
    });
}

// Extract some data from my website
scrape("http://ionicabizau.net", {
    // Get the website title (from the top header)
    title: ".header h1"
    // ...and the description
  , description: ".header h2"
}, (err, data) => {
    console.log(err || data);
});

When running this code, we get the following output in the console:

{ title: 'Ionică Bizău',
  description: 'Web Developer,  Linux geek and  Musician' }

Hey! That's my website information, so it's working. We now have a small function that can get the text from any element on the page.
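To turn it into the public interface mentioned earlier, all that's left is to export the function; a minimal sketch, assuming the code above lives in a file called scrape.js (the file name is just an example):

// At the end of scrape.js, where scrape() is defined
module.exports = scrape;

Any other script can then require("./scrape") and call the function exactly like in the example above.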


In the module I have written, I made it possible to scrape lists of things (e.g. articles, pages, etc.).

So, to get the latest 3 articles on my blog, you can do:

const scrapeIt = require("scrape-it");

// Fetch the articles on the page (list)
scrapeIt("http://ionicabizau.net", {
    listItem: ".article"
  , name: "articles"
  , data: {
        createdAt: {
            selector: ".date"
          , convert: x => new Date(x)
        }
      , title: "a.article-title"
      , tags: {
            selector: ".tags"
          , convert: x => x.split("|").map(c => c.trim()).slice(1)
        }
      , content: {
            selector: ".article-content"
          , how: "html"
        }
    }
}, (err, page) => {
    console.log(err || page);
});
// { articles:
//    [ { createdAt: Mon Mar 14 2016 00:00:00 GMT+0200 (EET),
//        title: 'Pi Day, Raspberry Pi and Command Line',
//        tags: [Object],
//        content: '<p>Everyone knows (or should know)...a" alt=""></p>\n' },
//      { createdAt: Thu Feb 18 2016 00:00:00 GMT+0200 (EET),
//        title: 'How I ported Memory Blocks to modern web',
//        tags: [Object],
//        content: '<p>Playing computer games is a lot of fun. ...' },
//      { createdAt: Mon Nov 02 2015 00:00:00 GMT+0200 (EET),
//        title: 'How to convert JSON to Markdown using json2md',
//        tags: [Object],
//        content: '<p>I love and ...' } ] }

Happy scraping! :grin:

Read more »