Saturday, March 19, 2016

Ubuntu cannot mount /boot/efi

Yesterday all of a sudden Ubuntu (14.04) refused to boot with a strange message that it could not mount /boot/efi.

Running fsck from GRUB (Advanced menu) revealed this error

mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so

mountall: mount /boot/efi [771] terminated with status 32
mountall: filesystem could not be mounted: /boot/efi


I opened a root shell and tried to find the boot partition:
# parted
GNU Parted 2.3
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print devices                                                    
/dev/sda (1000GB)
/dev/sdb (120GB)
(parted) select /dev/sdb
Using /dev/sdb
(parted) print                                                            
Model: ATA Samsung SSD 840 (scsi)
Disk /dev/sdb: 120GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End    Size    File system     Name  Flags
 1      1049kB  538MB  537MB   fat32                 boot
 2      538MB   112GB  111GB   ext4
 3      112GB   120GB  8466MB  linux-swap(v1)

OK, so the boot partition is #1 on /dev/sdb, i.e. /dev/sdb1.
I tried to mount it manually:

mount /dev/sdb1 /boot/efi

Same error as above.
OK, let's check the syslog as suggested in the message:

dmesg | tail

A new error message appeared:

FAT-fs (sdb1): IO charset iso8859-1 not found

Searching for this error turned up a post suggesting that it is a problem with the module dependency database.
I tried running the command suggested there:

sudo modprobe nls_iso8859-1

but it complained that it could not load /lib/modules/3.13.0-83-generic/modules.dep.bin. That file existed but was empty, whereas the file of the same name in an older kernel directory was non-empty.
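
For reference, a quick way to check whether the module dependency files are intact for all installed kernels (the glob below assumes the standard /lib/modules layout) is to compare their sizes; a zero-size modules.dep.bin like the one above is a red flag:

ls -l /lib/modules/*/modules.dep.bin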

So I went back to GRUB, selected the previous kernel version, and Ubuntu booted normally.
Then I tried this in an attempt to fix the package database:

sudo apt-get check

It complained with an error and suggested the following command:

sudo dpkg --configure -a

This finally fixed the issue. Afterwards Ubuntu booted normally with the latest kernel.
It is still not clear what caused this glitch; I had not run any upgrades or anything like that beforehand.
Luckily it was resolved without a reinstall.


Tuesday, November 10, 2015

Include mock

Although recently I write mostly in (server-side) JavaScript, I still have to return to my old C++ project once in a while. The other day I got notified that one of our integration tests would stop working due to changes in the database layer. It was testing a fix for a crash caused by inconsistent data coming from the db. So this integration test was trying to reproduce the situation by manipulating the data in the db. Actually, reproducing such special cases with an integration test can be very difficult, and unit tests are usually a better tool for this task. In the end I wanted to test some safety checks in our code, not an end-to-end scenario.

Still, the code under test was a big and hairy lump of C++ code with many hard-wired dependencies. It certainly was not written with testability in mind. While looking around for mocking solutions in C++, I came across the section about testing from Michael Feathers' book Working Effectively with Legacy Code. He describes several approaches to mocking in C/C++, which he calls seams. I was most impressed by the preprocessing seams. As Michael says, the preprocessor in C/C++ is a kind of compensation for its stiffness compared to dynamic languages. It turns out the preprocessing seams are very powerful: you can take a C/C++ source file and compile it in a different environment, thus making it do something very different.

So, inspired by preprocessing seams, I derived my own mocking approach for C/C++ that allows mocking any function or class (even static, global and non-virtual ones) without changing the source file where they are called.

Here is the overall structure of a test file:
  1. Disable the original header that defines the dependency to be mocked by pre-defining its include guard
  2. Provide alternative/mock definition of that dependency
  3. #include the source file to be tested
  4. Write the tests
Let's see a simple example.
First, the header that defines the dependency that we want to mock.
Notice that this header uses include guards.

store.h

#ifndef STORE_H
#define STORE_H

class Connection;

class Store
{
public:
    Store(Connection& conn);
    //...
    const char* fetch(const char* query);
    void store(const char* data);
    //...
};

#endif // STORE_H

Next, the code that we want to test.

consumer.cpp
#include <string.h>

#include "store.h"

int measure(Store& store, const char* query)
{
  const char* v = store.fetch(query);
  return strlen(v);
}

And now the test.
We want to mock Store. To do this, we disable the original header by defining its include guard STORE_H. Then we provide our mock implementation. Note that the mock does not need to be compatible with the original class; we just need to provide the minimum so that the code under test can compile and execute, so we implement only the methods used during the test.
Then we include the code to be tested, consumer.cpp, so it compiles in our mocked environment.
Finally, we run our test.

test.cpp
#include <iostream>
#include <cassert>

using namespace std;

#define STORE_H
class Store
{
public:
    const char* fetch(const char* query)
    {
        return "ola";
    }
};

#include "consumer.cpp"

int main(int argc, char *argv[])
{
    Store mock_store;
    assert(measure(mock_store, "query") == 3);
    cout << "OK" << endl;
}

With this approach all the code related to the test is in one place; you don't need to tweak any additional compiler/linker configurations. Also notice that we did not change the original code in consumer.cpp, yet we changed its behavior by compiling it in a mock environment.
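
For example, a build can be as simple as this (using g++ here just as an example; test.cpp, consumer.cpp and store.h sit in the same directory, and only test.cpp is passed to the compiler because it #includes consumer.cpp itself):

g++ -Wall -o test test.cpp
./test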

I also described this technique briefly on SO.

Sunday, March 22, 2015

Checking out GitHub pull requests locally

When working with GitHub you often need to check out a pull request (PR) locally so you can load it into your favorite tools and run/test it.

GitHub help suggests you can use a command similar to:

git fetch origin pull/ID/head && git checkout FETCH_HEAD
(here ID is the number of the pull request)

While this will give you the original code of the PR, it might be different from what you will get if you actually merge the PR. The reason is that you might have parallel changes in your target (master) branch that are not yet merged into the PR. While you can also do the merge locally, it turns out this is not necessary, as GitHub has already done it for you. All you need is to use this command instead:

git fetch origin pull/ID/merge && git checkout FETCH_HEAD
(notice the difference in the refspec 'head' vs. 'merge')

This will give you a merged version of the PR, which contains all parallel commits in the target branch, even those merged after the PR was created.
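
For example, with a hypothetical PR number 42 you can fetch both variants into local branches and compare them (the branch names here are just placeholders):

git fetch origin pull/42/head:pr-42-head
git fetch origin pull/42/merge:pr-42-merge
git checkout pr-42-merge
(the first ref is the PR as its author pushed it, the second is GitHub's pre-made merge into the target branch)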

Sunday, January 11, 2015

Rip audio CDs on Linux

I still use audio CDs sometimes, but they tend to get lost or damaged easily, so it is a good practice to convert them to MP3.
So far I have used Asunder for CD ripping. It is very easy to use ... when it works. But with some discs, usually lower-quality CD-Rs, it just hangs right from the start. So I searched for another tool to rip audio CDs on Linux.
It turned out you can do this very quickly with two command-line tools: cdparanoia and lame. As usual, you can install them on Ubuntu with a single command:

$ sudo apt-get install cdparanoia lame

Assuming the CD is loaded in the drive, running this simple command will copy all audio tracks to WAV files in the current directory:

$ cdparanoia -B

Next, to convert those WAV files to MP3, run this command:

$ ls -1 | xargs -L 1 lame --preset standard

This will compress the audio files about 10 times, using VBR at roughly 190 kbps.
If you are satisfied with the result, you can delete all WAV files:

$ rm *.wav

This will leave only MP3 files named like track04.cdda.mp3.
Of course these tools have many more options, so you can tweak them as much as you like. For example, the lame option --ta sets the artist and --tl the album in the ID3 tags of the MP3 files.
You can also script and automate this process as you see fit, but these are the tools that do the job nicely and quickly.
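
For instance, a rough (untested) sketch along these lines would rip, encode and clean up in one go; ARTIST and ALBUM are placeholders to replace with the real tag values:

#!/bin/bash
# rip all audio tracks to WAV files in the current directory
cdparanoia -B
# encode each track to MP3, tagging it with the placeholder artist/album
for f in track*.cdda.wav; do
    lame --preset standard --ta "ARTIST" --tl "ALBUM" "$f" "${f%.wav}.mp3"
done
# remove the intermediate WAV files
rm track*.cdda.wav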

BTW did you know that "disk" refers to magnetic storage while "disc" refers to optical storage? See Wikipedia.

Wednesday, September 3, 2014

npm - first encounters

Playing with node.js at work, I hit an issue right at the beginning: I was unable to install any package using npm (the convenient package manager of node).

$ npm install nodemon
npm ERR! network tunneling socket could not be established, cause=connect EINVAL
npm ERR! network This is most likely not a problem with npm itself
npm ERR! network and is related to network connectivity.
npm ERR! network In most cases you are behind a proxy or have bad network settings.
npm ERR! network 
npm ERR! network If you are behind a proxy, please make sure that the
npm ERR! network 'proxy' config is set properly.  See: 'npm help config'

OK, I do use a proxy, so I ran npm config edit and uncommented these lines:

; proxy=proxy:8080
; https-proxy=proxy:8080

Same result

$ npm install nodemon
npm ERR! network tunneling socket could not be established, 

The error EINVAL suggests that connect was called with an invalid argument. What might that be? Let's look at the system calls npm makes:

$ strace npm install nodemon 1> npm.strace 2>&1
$ grep EINVAL npm.strace
ioctl(9, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, 0x7fff5f9a8330) = -1 EINVAL (Invalid argument)
connect(10, {sa_family=AF_INET, sin_port=htons(443), sin_addr=inet_addr("0.0.31.144")}, 16) = -1 EINVAL (Invalid argument)
connect(10, {sa_family=AF_INET, sin_port=htons(443), sin_addr=inet_addr("0.0.31.144")}, 16) = -1 EINVAL (Invalid argument)
connect(10, {sa_family=AF_INET, sin_port=htons(443), sin_addr=inet_addr("0.0.31.144")}, 16) = -1 EINVAL (Invalid argument)
write(2, " tunneling socket could not be e"..., 65 tunneling socket could not be established, cause=connect EINVAL

Aha, we see three calls to connect with IP 0.0.31.144 and port 443 (the default HTTPS port), all of which returned EINVAL (Invalid argument).
What is this strange IP? (Incidentally, 0.0.31.144 is just the number 8080 written as an IPv4 address, which hints that the proxy port was being treated as a host.) Asking Google about it revealed a post according to which the environment variable http_proxy should be given with a protocol, e.g.

http_proxy=http://proxy:8080

So setting https-proxy=http://proxy:8080 in npm config did solve the problem!
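
In other words, something like this is what ended up in the npm config (proxy:8080 being the placeholder proxy host used above):

npm config set proxy http://proxy:8080
npm config set https-proxy http://proxy:8080
(the protocol prefix is the important part)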

On my Linux both of these are set:

https_proxy=http://proxy:8080
HTTPS_PROXY=proxy:8080

and it seems npm uses the upper-case variable to set the default values in its config.

Wednesday, May 28, 2014

Dynamic auto-complete in Android

Auto-complete text input is very common in modern UIs. It makes it easy to select an item from a large list or just to hint at matching items. Often the whole list of items is not available locally, so you need to look up matching items in some external source.
In this example we use auto-complete to select a stock, similar to the search field on finance.yahoo.com.

Here we use a web API from Yahoo to lookup matching stocks, but you can use the same approach with any way of populating the auto-complete drop-down dynamically. For example you could search in a database.

These are the major objects involved:
AutoCompleteTextView -> Adapter -> Filter



android.widget.AutoCompleteTextView is the standard Android widget for this purpose. We will use it as is, but will implement a custom Adapter and Filter.

 symbolText = new AutoCompleteTextView(getActivity());
 symbolText.setAdapter(new StockLookupAdapter(getActivity()));

Here is our custom Adapter:

public class StockLookupAdapter extends
        ArrayAdapter<StockLookupAdapter.StockInfo> {

    private static final String LOG_TAG = StockLookupAdapter.class
            .getSimpleName();

    class StockInfo {
        public String symbol;
        public String name;
        public String exchange;

        @Override
        public String toString() {
            // text to display in the auto-complete dropdown
            return symbol + " (" + name + ")";
        }
    }

    private final StockLookupFilter filter = new StockLookupFilter();

    public StockLookupAdapter(Context context) {
        super(context, android.R.layout.simple_list_item_1);
    }

    @Override
    public Filter getFilter() {
        return filter;
    }

    private class StockLookupFilter extends Filter {
        ...
    }
}

StockInfo carries the data for each item in the drop-down; it stores the properties of each stock, such as symbol (a.k.a. ticker) and name. We override getFilter to return the custom filter, StockLookupFilter. This is the essential part.
Here is what the android.widget.Filter documentation says:

Filtering operations performed by calling filter(CharSequence) or filter(CharSequence, android.widget.Filter.FilterListener) are performed asynchronously. When these methods are called, a filtering request is posted in a request queue and processed later. Any call to one of these methods will cancel any previous non-executed filtering request.

This is exactly what we need: calling a web API usually takes some time, so we should not do it in the UI thread. Also, the user may type faster than the web API can return results, which could make the hints in the drop-down lag considerably behind the current text. The queuing described above helps avoid this effect.

So here is our custom filter (nested inside StockLookupAdapter):
private class StockLookupFilter extends Filter {

    // Invoked in a worker thread to filter the data according to the
    // constraint.
    @Override
    protected FilterResults performFiltering(CharSequence constraint) {
        FilterResults results = new FilterResults();
        if (constraint != null) {
            ArrayList<StockInfo> list = lookupStock(constraint);
            results.values = list;
            results.count = list.size();
        }
        return results;
    }

    private ArrayList<StockInfo> lookupStock(CharSequence constraint) {
        ...
    }

    // Invoked in the UI thread to publish the filtering results in the user
    // interface.
    @Override
    protected void publishResults(CharSequence constraint,
            FilterResults results) {
        setNotifyOnChange(false);
        clear();
        if (results.count > 0) {
            addAll((ArrayList<StockInfo>) results.values);
            notifyDataSetChanged();
        } else {
            notifyDataSetInvalidated();
        }

    }

    @Override
    public CharSequence convertResultToString(Object resultValue) {
        if (resultValue instanceof StockInfo) {
            // text to set in the text view when an item from the dropdown
            // is selected
            return ((StockInfo) resultValue).symbol;
        }
        return null;
    }

}

performFiltering is executed in a background thread and finds the items to be shown in the drop-down based on the current text in the text field.
publishResults is executed on the UI thread and is given the FilterResults returned by performFiltering. Here we just reset the ArrayAdapter contents and notify the UI to update.
convertResultToString returns the string to be substituted into the text field when a given item from the drop-down is selected. In our case we display both the stock symbol and name in the drop-down but want only the symbol in the text field.

So, as we can see, this kind of assisted text input can be very efficient. Probably this is the reason why it is so popular these days.

P.S.
Still, there is one glitch that irritates me: part of the drop-down seems to be covered by the on-screen keyboard, and if I try to close the keyboard, the drop-down is closed first.

Friday, April 4, 2014

Does destruction change anything?

class C{};

// consider this

void foo(const C* p)
{
    delete p;
}

// does it work? should it work?
// after all, destroying the object very much changes it
// and you are not allowed to change a const object, right?
// ...
// now consider this
// (you can substitute here auto_ptr 
// with your favorite smart pointer)

void foo(auto_ptr<const C> p) { }

// is this possible at all?
// ...
// how about this

void foo(const C x) { }

// hmm... this is pretty common code
// if const objects exist (and we know they do) 
// then they must come to an end somehow

// so it is possible and completely normal to destroy 
// a constant object and all of the above is valid code

// so now my interpretation of const is this:
// if the object still exists, 
// its observable state should be the same as before

Based on this post on SO