Ionică Bizău

Web Developer, Linux geek and Musician

Accessing My Home Computer Remotely

I have a powerful ASUS machine which I use when I'm at home. I guess it was designed for gaming. It's quite useful for any task, but I often use it for resource-intensive work (e.g. training neural networks). It has a fast wired internet connection as well! :rocket:

When travelling, I do not have physical access to my home computer. However, I sometimes do want to access it (when working on projects requiring lots of computation). :airplane:

The solution I ended up with is to connect to it via SSH. :lock:

So, using my MacBook, I can simply run ssh -A ionicabizau@<public-ip> -p <port> and land in my home. There are a couple of problems, though. Here is how I solved them! :sparkles:

Port Forwarding

Using ifconfig we can see what IP the laptop got on the network:

$ ifconfig | grep 192
        inet 192.168.2.xxx  netmask 255.255.255.0  broadcast 192.168.2.255

(the xxx can be 100, 101, etc.).

I opened 192.168.2.1 in the browser (accessing the router settings) and set up port forwarding for the port range 2042-5042, using the IP I got from the previous command.

Then, I changed my laptop's SSH server port and made it listen on 4242 (which, indeed, is in the range mentioned above).
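On a Debian-based system (an assumption on my part; on other distros the service may be named sshd), this is a one-line edit in the SSH daemon configuration, followed by a service restart:

# /etc/ssh/sshd_config: make the SSH daemon listen on port 4242
Port 4242

$ sudo systemctl restart ssh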

I restarted the router, ran curl ipinfo.io (which outputs the public IP information), and connected successfully to my machine from my machine, but using the public IP.
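The check looked roughly like this (the addresses below are placeholders):

$ curl ipinfo.io
{
  "ip": "<public-ip>",
  ...
}

$ ssh -A ionicabizau@<public-ip> -p 4242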

Great! There are a couple of issues, though!

If the laptop reconnects to the router, it may get a different IP on the network. Also, if the router reconnects to the internet, it usually gets a different public IP. :boom:

Same IP on the network

By running ifconfig I found out that the wired connection has the name enp5s0.

Then, I modified the /etc/network/interfaces file like this (following a couple of articles from the internet):

auto lo
iface lo inet loopback

# Set up a static IP on the network
auto enp5s0
iface enp5s0 inet static
    address 192.168.2.142
    netmask 255.255.255.0
    gateway 192.168.2.1
    dns-nameservers 8.8.8.8 192.168.1.1

I reconnected to the router and noticed 192.168.2.142 in the ifconfig output. I rebooted, and the IP didn't change.
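The same quick check from earlier now shows the static address:

$ ifconfig | grep 192
        inet 192.168.2.142  netmask 255.255.255.0  broadcast 192.168.2.255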

Now, I went back into the router settings and exposed ports 2042-5042 on 192.168.2.142 to the internet.

But the public IP may change...

I don't have a static IP. If I'm not wrong, one has to pay the internet provider to get one. I don't care if it changes, as long as I know the new public IP.

I made a small tool which pushes the IP information to a GitHub repository: machine-ip. It uses ipinfo.io to get the public IP information.
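Conceptually, it boils down to something like this (a minimal shell sketch of the idea, not the actual machine-ip implementation, which is a Node.js tool):

# Fetch the public IP information and push it to the repository
curl -s ipinfo.io > ip.json
git add ip.json
git commit -m "Update the IP information"
git push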

I created a GitHub repository storing the IP information of my home machine. It is updated automatically every 10 minutes.

ipinfo.io allows us to make 1,000 requests per day for free, which is around 41 requests an hour (1000 / 24 ≈ 41.6). Updating every 10 minutes means only 6 requests an hour (144 a day), so it fits comfortably within the limit and is good enough.

Running this in a cron job

I made another script, which is executed from a cron job:

echo "Adding the ssh key"
ssh-add /home/testing/.ssh/id_rsa
echo "Changing directory"
cd /johnnysapps/notebook
echo "Getting the ip"
date > last_updated.txt
machine-ip

I ran sudo crontab -e -u testing and created the first cron job I've ever written:

SHELL=/bin/sh
PATH=/home/testing/.nvm/versions/node/v6.7.0/bin:/home/testing/bin:/home/testing/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin

# Min Hour Day Month Weekday Command
*/10  *    *   *     *       /johnnysapps/ip

The */10 part tells cron to execute my script every 10 minutes.


So, apparently, it's working! :tada: Using my SSH keys I can connect to my home computer. If the electricity goes down, I hope it won't stay down for more than 1-2 hours (which the laptop's internal battery can cover).

When the electricity is back, the modem, the router, and my laptop will reconnect to the internet, and my cron job will push the new IP data to my GitHub account.

Sweet! Now I can :airplane: :rocket:!


The Joy of Being a Mentor

Helping our fellow developers—or, generally speaking, helping one another—is an important part of society. We all work and struggle on this planet, so a helping hand is always welcome. :heart:

It's been more than a year since I started teaching people to code. I do it on Codementor—an open marketplace for code instructors. In general, we set up 1:1 live sessions and start talking. :smile: Then, after everything gets fixed, both of us end up happy.

What I really like about Codementor is that it connects a diversity of people around the planet. Each mentee is unique (culture, lifestyle, accent, beliefs, etc.), but they all have one goal: to learn—and that's one of the things that makes us happy.

While I do teach people to do stuff, I keep an eye open for things I can learn as well. For instance, I learned how to use Firebase by working with one of my favourite mentees! :blush: Thanks! :cake:

Obviously, that means I can now take care of people wanting to learn Firebase as well.


:zap: Here's how I do it! :zap:

I'm blessed to live in a small, peaceful, and friendly village in Romania, far enough from the noise of the cities, yet still with great people around and, of course, a great internet connection! :earth_africa:

In the morning I enjoy the birds singing (if I wake up early enough, sometimes there are a few owls hooting too), and in the evening I listen to the crickets' songs.

:bulb: Tip: If you ever need programming help during the summer, and we talk either in the morning or in the evening, you will have the opportunity to see the big herd of cows going to or coming back from the hill. It happens every sunny day during the summer. :cow2:

Even though I'm a remote developer working mostly from home, it's still a lot of hard work. Rest and relaxation are important as well. I find that taking a few days' break from work and hiking up the mountains helps a lot.

Often I take my bike and ride to my little house between two hills—to be alone, hiding in the mountains for a while, before going back. There, I have no internet or phone signal. Being in a place where you can be alone with your thoughts for a good while is where great ideas are born. I note each one down somewhere, and when I get the chance to implement them, I just do it. :sparkles:

Being in the middle of nature, listening to the flowing water and the birds happily chirping all day, is definitely something special—it's closer to our roots. I believe that if we want to be productive developers, good people to talk to, or simply human, we should take a look at the values our Creator implanted in us.

Is what we eat and drink important?

There is a strong relation between the food we eat and the way we think. Our brains should be clear and agile when we teach others. That comes from knowing what, when, how, and how much to eat. As mentors, we should know the laws of life and health.

Not all of us work remotely, but I do recommend getting out of the cities, purchasing land in the countryside, and cultivating your own garden. Eat plants, not animals. We were designed to have a vegetarian diet.

There are so many things going on in the cities. Noise, crimes, immorality, pollution, and other unfortunate events. We can avoid all these. Faith, hope, love, happiness can be cultivated way better in the countryside, away from the cities.

Rest is important

While working is definitely important, rest is equally important—if not more so. Sleep 8 hours a night (starting before midnight when possible) and wake up early in the morning.

:bulb: Tip: Bugs are much easier to fix in the morning than in the evening/night! :joy:

We were designed to work the first six days of the week (Sunday to Friday), and then rest on the seventh day: Saturday. One of the secrets to be productive is to value each moment of our lives. :hourglass_flowing_sand:


To summarize, I always recommend: leave the cities as soon as possible, stand for good principles, eat and drink healthy stuff, work six days a week and rest on the seventh, help the people around you, and love them. We have no time to waste! :rocket:

PS: Most parts of this post were written near a forest, somewhere in the western part of Romania. :evergreen_tree:


How I npm

I write a lot of code every day, publishing it on GitHub and npm. Each tiny package I create does something, and in most cases it's a module which can be used in other applications. :four_leaf_clover:

:thought_balloon: What is npm?

In case you don't know, npm is the default package manager for Node.js. Note that we can also publish modules there which are not Node.js-specific.

Its name stands for need pray more (or nuclear pizza machine, or npm's pretty magical, etc.).

Whaaat?!

Well, if you are here and think that npm stands for Node Package Manager, I have to tell you that's also correct, but it's not complete. npm is not an acronym; it's a recursive bacronym for npm is not an acronym. In short, npm is a bacronym, not an acronym like many people believe.

You can refer to npm/npm-expansions for a list of things npm stands for. :sparkles:

At the moment of writing this post, I have 561 packages. You can view them here (npm/~ionicabizau).

:raised_hands: How I do it

I started learning Node.js at the end of 2012. I was enjoying using modules by others, but I published my first npm package in August 2013. It was youtube-api, a friendly Node.js wrapper for the YouTube API. :tv:

I liked the npm publishing workflow and I didn't stop there. I created more and more packages.

:rocket: How to create an npm package?

Assuming you have Node.js and npm installed, you have to generate a package.json file, then write your code and publish it:

# Generate the package.json file
npm init

# Work on your magic
echo 'module.exports = 42' > index.js

# Publish the stuff
npm publish

But a good package should also include good documentation, easy-to-use APIs, and simple examples.

:crystal_ball: Automation and Magic

While publishing new stuff on npm, I found that I was doing a bit of manual work every time. That included:

  1. Creating GitHub repositories
  2. Creating releases on GitHub
  3. Maintaining the documentation in sync with the code
  4. Maintaining the same documentation format across the repositories

I started to analyze where I was wasting time and created small tools to do the work for me. :construction_worker:

:memo: Blah: Fancy Documentation

I created a tool called blah which, since 2014, has taken care of generating the documentation for me:

  • It generates README.md files based on my templates.
  • It handles custom stuff by looking at the blah field in package.json.
  • It generates documentation by looking at the JSDoc comments in my code.
  • It bumps the package.json version.
  • It creates the CONTRIBUTING.md, LICENSE (taking care of updating the year as well!), .travis.yml, and .gitignore files.

Pretty cool. Since then, I haven't had to write Markdown files manually anymore.

:page_with_curl: Packy: package.json defaults

Every time you npm init a package, you have to write stuff like the author, git repository, license etc.

I created packy, which fills in static (e.g. author) and dynamic (e.g. git repository URL) fields by looking at a configuration file.

After this, I only had to write the name and description fields manually, skipping the others. Then I ran packy, which autocompleted the author, keywords, git-related fields, and so on.
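For illustration (the values here are made up), this is roughly all I would type by hand; packy fills in the author, keywords, repository, and the rest from its configuration:

{
  "name": "my-package",
  "description": "Some fancy description"
}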

Since blah can run custom commands for me, I decided to integrate packy in my blah configuration.

:tophat: np-init: Automagically create new packages

Because I was still too lazy to create the files manually, I created np-init, which creates a minimal npm module:

np-init my-super-package-name 'Some fancy description'

Then I just have to cd my-super-package-name and work on my code directly, because everything is already there for me (example/index.js, lib/index.js, package.json, etc.).

:dizzy: babel-it: Babelify the things

Because I started to use ES2015 features in my code, and since many people use versions of Node.js which do not support ES2015, I started publishing transpiled versions of my code on npm.

After building babel-it, I replaced the npm publish command with babel-it! It is smart enough to babelify my code, publish it on npm, and then roll back to my original code.
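So the workflow change is as simple as this:

# Instead of publishing directly with:
#   npm publish
# I now run the following, which transpiles, publishes, and rolls back:
babel-it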

:ship: Ship Release

The publishing process was boring again. I decided to create a tool to take care of that: ship-release.

That being said, I don't even need to take care of babelifying things, because ship-release does that (using babel-it).

After fixing a bug in a package, I simply do:

# Creates and switches to the new-version branch
ship-release branch -m 'Fixed some nasty bug'

# Bump the version
ship-release bump

# Publish
#  - generate docs using Blah
#  - push the changes on GitHub
#  - create and merge the pull request
#  - create a new release in the GitHub repository
#  - transpile the code
#  - publish the new version on npm
ship-release publish -d 'Fixed some nasty bug.'

That's the theory. Let's see all of this running in the real world.

Example

Today I'm creating a small module called stream-data, which will collect the data emitted by a stream and send it to a callback.

Creating the package

As mentioned above, I'm using np-init to do that:

np-init stream-data 'Collect the stream data and send it into a callback function.'

This created the following directory structure:

stream-data/
├── (.git)
├── example
│   └── index.js
├── lib
│   └── index.js
└── package.json

2 directories, 3 files

Write a minimal example

When building a house, you have to start with the foundation. When building an npm package, I like to start with the roof.

Even if the library doesn't do anything yet, I just set up an example of the way I think I want the module to work.

So, I cd into stream-data and write this (note that np-init had already created something there):

"use strict";

const streamData = require("../lib")
    , fs = require("fs")
    ;

// Create a read stream
let str = fs.createReadStream(`${__dirname}/input.txt`);

// Pass the stream object and a callback function
streamData(str, (err, data) => {
    console.log(err || data);
    // => "Hello World"
});

Write the functionality in the library

np-init generated a JSDoc comment, which initially looked like this:

/**
 * streamData
 * Collect the stream data and send it into a callback function.
 *
 * @name streamData
 * @function
 * @param {Number} a Param descrpition.
 * @param {Number} b Param descrpition.
 * @return {Number} Return description.
 */

I slightly changed the function parameters' names and updated the JSDoc comment:

"use strict";

/**
 * streamData
 * Collect the stream data and send it into a callback function.
 *
 * @name streamData
 * @function
 * @param {Stream} str The stream object.
 * @param {Function} cb The callback function.
 * @returns {Stream} The stream object.
 */
module.exports = function streamData (str, cb) {

    if (cb) {
        let buffer = []
          , error = null
          ;

        str.on("data", chunk => buffer.push(chunk));
        str.on("error", err => error = err);
        str.on("end", () => cb(error, buffer.join(""), buffer));
    }

    return str;
};

Run the example

Well, now we can see if it's really working. I put Hello World! in the example/input.txt file and ran the example:

$ node example/
Hello World!

Before publishing it, we need to set up some tests.

Tests

For testing, I use tester. To add tester to my project, I run tester-init:

$ tester-init
...
$ tree
...
└── test
    └── index.js
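A test for this module might check something along these lines (a sketch using plain Node.js asserts; the real test/index.js uses tester's API):

"use strict";

const assert = require("assert")
    , streamData = require("../lib")
    , fs = require("fs")
    ;

// Reuse the input file from the example directory
let str = fs.createReadStream(`${__dirname}/../example/input.txt`);

streamData(str, (err, data) => {
    // No error, and the collected data matches the file content
    assert.strictEqual(err, null);
    assert.strictEqual(data.trim(), "Hello World!");
    console.log("All good!");
});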

After writing a few tests, the module is ready to be published on GitHub and npm.

Publishing

We have to create a GitHub repository. I use ghrepo by @mattdesl (thanks! :cake: :grin:). It's smart enough to create the GitHub repository with the right data (taken from the local git repository URL and package.json).

ghrepo --bare --no-open
ship-release bump
ship-release publish -d 'Initial release'

Looking in my directory now, I have a couple of new files (README.md, CONTRIBUTING.md, .gitignore, etc.).

Now my module is on GitHub and npm, ready to be npm installed:

// After doing: `npm install --save stream-data`

const streamData = require("stream-data")
    , fs = require("fs")
    ;

// Create a read stream
let str = fs.createReadStream(`${__dirname}/input.txt`);

// Pass the stream object and a callback function
streamData(str, (err, data) => {
    console.log(err || data);
});

:mortar_board: Things I learned

Do not do manual work. Optimize things as much as you can; otherwise, you'll waste your precious time. Create your team of bots to help you optimize things.

Happy npming! :tada:


If you'd like to support what I do, here is how you can do it.
