June 5, 2018

VS Code: binding the same keys for next/prev change everywhere

In current VS Code, there are different actions for navigating changes:

  • workbench.action.editor.nextChange for going to next change in an editor
  • workbench.action.compareEditor.nextChange for going to next change in a diff viewer
  • editor.action.dirtydiff.next for viewing some sort of local diff in the editor
Plus of course equivalents for "previous change".

Now, I don't particularly care about the last one with a preview window. However, I want to move in the editor and in the diff viewer with the same keys.

This is not easy to do, because of VS Code's binding resolution order, and the built-in keybinding editor will not allow you to set it up right.

Here's a snippet of keybindings.json that does what I want: it binds Alt+J to "next change" and Alt+K to "previous change", both in the diff viewer and in the regular editor. The key here is to set up two distinct when conditions.

    {
        "key": "alt+j",
        "command": "workbench.action.editor.nextChange",
        "when": "editorTextFocus"
    },
    {
        "key": "alt+k",
        "command": "workbench.action.editor.previousChange",
        "when": "editorTextFocus"
    },
    {
        "key": "alt+k",
        "command": "workbench.action.compareEditor.previousChange",
        "when": "isInDiffEditor"
    },
    {
        "key": "alt+j",
        "command": "workbench.action.compareEditor.nextChange",
        "when": "isInDiffEditor"
    }

August 17, 2017

scripting the Open Build Service from scratch

(Well, not entirely.)

In my day job I do packaging for openSUSE. My responsibility is taking care of the Python package ecosystem. Right now we are in the middle of one transition (to a different way of packaging Python modules) and starting another one (converting the distro to Python3-by-default). For both of these things I need to modify lots of packages at once.

The bulk of what I do involves the openSUSE Build Service, an instance of OBS that runs our distributions. The Build Service lets us build dozens of variants of each package for each of the supported distributions and architectures. It consists of several independent parts:
  • backend, which schedules and runs the actual build jobs on our server farm
  • API server, which allows us, the users, to control the backend, makes sure that files go where the backend can find them, etc.
  • web interface, which is a clickable client for the API. You can view packages, modify source files, configure build targets and so on
  • osc, the command line tool which is another client for the API.
osc would be the natural starting point for scripting. Unfortunately, osc is also a horrible mess that grew organically alongside the Build Service. It works perfectly fine as an end-user tool, but it's unwieldy for shell-based scripting and difficult to use as a library, because it doesn't have a consistent internal design: UI code is interwoven with API calls and local non-OBS functionality. Also, osc tries to emulate a version control system and bases its OBS interaction on that model.

There is also osc2, a from-scratch rewrite with the intent to split the UI and logic into separate parts and impose some sort of order on the overall chaos. Unfortunately, it is a typical Generation 2 Project, deeply layered, overly complicated and overly generic. And also not nearly feature-complete, for the obvious reason that it was mostly abandoned before it got anywhere.

We are considering some serious refactoring of osc, and it seems possible to reuse the functionality while fixing the structure. We also want Something Usable Now(tm). Hence my work on a tiny library called "osclib". The idea is to make it a thin wrapper around the API and gradually modify osc the command line client to use osclib where appropriate.
Hard to say if this will ever go anywhere, but osclib is a nice exercise in understanding the OBS API. Also, when scripting things, you often don't need the rich functionality of handling every possible special case and command line switch.

osclib relies on osc for parsing the config file (and extracting login information from it), but does its own HTTP communication through Requests. At the moment, it has one class (to wrap the API server connection), about five functions in total, and can accomplish what I wanted to do in the first place: download a list of every spec file in the Tumbleweed distro, let me modify them offline, then create a branch project for each touched package and upload the modified spec file into it.

The hardest part was not actually writing the code, but reading osc sources and the very sparse OBS API documentation and figuring out what to do. For example, in order to upload a file, you need to create a "commit" through a separate API request.
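To give a flavor of it, here is a rough sketch of what a thin wrapper over the API can look like. This is not the real osclib; the endpoint paths follow the public OBS API, but treat the details (the "upload" revision, the cmd=commit parameter) as assumptions gleaned from the docs and osc sources.

```python
# Sketch of a thin OBS API wrapper (NOT the real osclib). `session`
# can be e.g. a requests.Session with HTTP auth filled in from oscrc.
APIURL = "https://api.opensuse.org"

def file_url(project, package, filename):
    # Source files live under /source/<project>/<package>/<file>
    return "%s/source/%s/%s/%s" % (APIURL, project, package, filename)

def upload_file(session, project, package, filename, data):
    # Two requests: PUT the file into the "upload" revision, then
    # create the commit separately -- the file only shows up in the
    # package once the commit exists.
    session.put(file_url(project, package, filename),
                params={"rev": "upload"}, data=data)
    session.post("%s/source/%s/%s" % (APIURL, project, package),
                 params={"cmd": "commit"})
```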

So the fact that I can write a simple script that performs the mass update I mentioned is a big win :)

You can find osclib on my GitHub. It is currently part of a forked osc repo, because osclib is written in Python 3 and the system install of osc is in Python 2, and at this stage it's too much effort to manage the dependencies properly. So instead it lives in a neighboring directory and you simply add it to your PYTHONPATH.

April 1, 2013

setting up USB printer in Android/Linux chroot

I wanted to turn my Android tablet (an Asus Transformer TF101) into a little ad-hoc print server. Unfortunately, Android apps for this purpose are either expensive, sucky or nonexistent (usually all three). So, I thought, since I have an openSUSE chroot up and running on the device, I can just install a CUPS server and print happily through that.

This was not without trouble: after the first install, CUPS came up nicely, but refused to see the USB printer.

Here's how to fix it:

Step 1: Get a CUPS build that uses libusb. There is a known incompatibility between kernel support for USB printers and CUPS's libusb-based userspace support - and since SUSE has the kernel support turned on, its CUPS is built with userspace support turned off. Well, it just so happens that Android has no kernel support for printers and needs the userspace support.
You can either compile a fixed version of CUPS yourself (just add BuildRequires: libusb-1_0-devel to the spec file), or install from my repository.

Step 2: Give yourself rights to the USB device in question. Look into /dev/bus/usb and make sure that the CUPS user can access the device file (for the simplest solution, chmod 666 /dev/bus/usb/*/* - beware though, this gives every user on the system access to all USB devices, so, you know. Exercise caution.)

Step 3: Start up the CUPS service. service cups start

Step 4: Head over to http://localhost:631/ in the tablet's browser, set up your printer, and allow remote administration, printer sharing, whatever you choose.

Step 5: Print!

April 3, 2012

fixing timestamps on Google Blogger's threaded comments

As you all probably know, timestamps on comments on (Google's) Blogger are broken: they are only ever shown in the Pacific timezone. You can't change it from the settings, etc. etc... Fear not, for I have developed a cure!

You need to place a piece of JavaScript into the template. Here's how you do it:

Step 1: click on Design, then Edit HTML.
Step 2: Now, Blogger will nag you about what you're doing and how you can break things... yeah, like we don't know. Proceed!
Step 3: check "Expand Widget Templates"

Step 4: This is the tricky part.
First, using your browser's search function, search for "render = function".
You will find something like this:
      var render = function() {
        if (window.goog && window.goog.comments) {
          var holder = document.getElementById('comment-holder');
          window.goog.comments.render(holder, provider);

You need to change this part, so that it looks like this:
      var render = function() {
        if (window.goog && window.goog.comments) {
          var holder = document.getElementById('comment-holder');
          window.goog.comments.render(holder, provider);

          load = function() {};

          // dynamically load any javascript file
          load.getScript = function(filename) {
             var script = document.createElement('script');
             script.setAttribute("onreadystatechange", "DOMLoaded()");
             script.setAttribute("onload", "DOMLoaded()");
             script.setAttribute("src", filename);
             document.getElementsByTagName("head")[0].appendChild(script);
          };

          // pull in jQuery so that DOMLoaded() below has $() available
          // (the exact jQuery URL is up to you; this one is just an example)
          load.getScript("https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js");


(script shamelessly stolen from here)

Now, scroll down a bit more, until you see:

// ]]>

Between the "})();" and the "// ]]>", we place the code that actually does something - it fixes the dates and times:

function DOMLoaded(){
     $('.datetime').each(function() { var date = new Date(this.children[0].innerHTML + " PDT"); this.children[0].innerHTML = date.toLocaleString(); });
}
// ]]>

Save, and voila! The times you see are now in your timezone.

What actually happened here is that I needed to inject jQuery into the page, so that I could easily select the relevant elements. And the "render = function" spot seemed like a good place, because it's part of the other JavaScript weirdness that makes the whole threaded-comments business possible. The "DOMLoaded" handler then does all the work - it parses each date as if it were in Pacific Daylight Time, which it happens to be, and converts it to your local time. Given more time, will and effort, you could customize the format, or use load.getScript to pull in something like this and do magic with the dates. Or something completely different - you now have the full might of jQuery at your disposal, after all.

November 1, 2011

how to add FindErr search to Google Chrome

As the Internet becomes mainstream and stupid people start using it (perhaps "nontechnical" would be better in many contexts, but in this case I believe "stupid" is more accurate), services must cater to the needs of the stupid. That's what happened at Google, which, apparently around 2009, started searching for "what the user meant" instead of what the user actually told it to search for.
Now, this might be helpful in many cases, but it just so happens that I'm smart enough to recognize when bad search results are my fault, and if I search for a term, I want the search engine to give me the results for that damn term.

The plus operator used to fix that - if you're looking for "FindErr" and not "finder", just type "+FindErr" and it will give you what you wish. Alas, not any more: because of Google+, the plus operator now searches for people on G+. Instead, you have to put quotes around the word. Baaaah.

So anyway, I'm not the only one who is unhappy about this, and some people have already taken action. That's what the FindErr.org search engine is about. You type a sane search string, and it makes it all quotey before passing on to Google. Now if only there was a way to make this search the default in Chrome (or Chromium), my favourite browser.

And there is. Follow these simple steps:
  1. type finderr.org into the address bar, and load the page
  2. right-click the address bar
  3. choose "Edit Search engines"
  4. in the list, locate finderr.org and click "Set Default"
There! All done!

February 2, 2011

how to run the whole testsuite in a Python project/module

I can't believe Google doesn't have anything to say about this...

The situation is a usual one: you have a Python module, let's call it bravo, and a set of unittest-based unit tests in bravo.tests. Now, there's no "runtest" command or anything like it to run the whole test suite. Of course, you could run each of the tests individually, but maybe there are twenty of them and you're lazy, or maybe they don't even contain the magical "if __name__ == '__main__': unittest.main()" spell.

Twisted to the rescue! Just run one of these:
trial bravo
trial bravo.tests

If you don't have Twisted (of which trial is a part), you can use the following snippet:

import glob, os, imp, unittest

suite = unittest.TestSuite()
testloader = unittest.TestLoader()

for test in glob.glob("bravo/tests/test_*.py"):
    name = os.path.splitext(os.path.basename(test))[0]
    module = imp.load_source(name, test)
    suite.addTests(testloader.loadTestsFromModule(module))

unittest.TextTestRunner(verbosity=2).run(suite)

December 9, 2010

how to fill your disk with random data

When using full-disk encryption, it is useful to prefill the disk in question with (pseudo)random data. This makes it harder to tell how much of the encrypted volume's space is already written to - in other words, how much data you have on the volume.

There are many ways to do it - specialized tools, reading from /dev/urandom (reasonably fast), or reading from /dev/random (true randomness, but unless you have a hardware RNG, it will take a thousand years to fill a disk). The trouble is that generating pseudorandom data is slow: while your average HDD can write at over 50 MB/s, you can only generate randomness at, say, 8 MB/s (with one core, that is).

The usual recommended method is this:
dd if=/dev/urandom of=/dev/sda
It will take a very long time, because generating the random numbers is slower than writing them to the disk. The problem is that the kernel is only using one CPU core to generate the /dev/urandom stream - the CPU core on which your process runs.

Now if only there was some kind of a trick to make kernel use all four of my CPU cores...
You could, of course, run four dds and make them write to different areas of your disk - but wait, wouldn't that force the disk to seek back and forth? Wouldn't that be a little stupid? Yeah, I thought so.

That's why I wrote a tiny program called urandread. It opens four (or however many you need) processes reading from /dev/urandom, and combines their output into a single stream that is four times faster.
Then you can do this:
./urandread | dd of=/dev/sda
and you're BLAZING!
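For the curious, the idea behind urandread can be sketched in a few lines of Python (this is an illustration, not the actual urandread source): a handful of worker processes produce random blocks in parallel, and a single writer interleaves their output into one sequential stream, so the disk never has to seek.

```python
# Sketch of the urandread idea: parallel producers, one ordered stream.
import os
from multiprocessing import Process, Queue

BLOCK = 1 << 20  # 1 MiB of random data per block

def worker(q):
    # Each worker keeps one CPU core busy producing random blocks.
    while True:
        q.put(os.urandom(BLOCK))

def stream(out, nworkers=4):
    # One bounded queue per worker; reading them round-robin keeps the
    # output a single ordered stream while all workers run in parallel.
    queues = [Queue(maxsize=2) for _ in range(nworkers)]
    for q in queues:
        Process(target=worker, args=(q,), daemon=True).start()
    while True:
        for q in queues:
            out.write(q.get())
```

Calling stream(sys.stdout.buffer) under an if __name__ == "__main__" guard gives you a script you can pipe into dd, just like urandread itself.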