Thursday, June 25, 2015

libinput touchpad gestures

One of the bits we are currently finalising in libinput is touchpad gestures. Gestures on normal touchscreens are left to the compositor and, by extension, to the client applications. Touchpad gestures are notably different though: they are bound to the location of the pointer or the keyboard focus (depending on the context) and they are less context-sensitive. Two fingers moving together on a touchscreen may be two windows being moved at the same time. On a touchpad however this is always a pinch.

Touchpad gestures are a lot more hardware-sensitive than touchscreen gestures, where we can just forward the touch points directly. On a touchpad we may have to consider software buttons or other hardware limitations of the touchpad. This prevents the implementation of touchpad gestures at a higher level - only libinput is aware of the location, size, etc. of software buttons.

Hence - touchpad gestures in libinput. The tree is currently sitting here and is being rebased as we go along, but we're expecting to merge this into master soon.

The interface itself is fairly simple: any device that may send gestures will have the LIBINPUT_DEVICE_CAP_GESTURE capability set. This is currently only implemented for touchpads but there is the potential to support this on other devices too. Two gestures are supported: swipe and pinch (+rotate). Both come with a finger count and both follow a Start/Update/End cycle. The finger count remains the same for the duration of a gesture, so if you switch from a two-finger pinch to a three-finger pinch you will see one gesture end and the next one start. Note that how to deal with this is up to the caller - it may well consider this the same gesture semantically.

Swipe gestures have delta coordinates (horizontal and vertical) of the logical center of the gesture, compared to the previous event. A pinch gesture has the same delta coordinates and a delta angle (clockwise, in degrees). A pinch gesture also has the notion of an absolute scale: the Start event always has a scale of 1.0 and that changes as the fingers move towards each other or further apart. A scale of 2.0 means they're now twice as far apart as originally.
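In code, consuming these events might look like the sketch below. This assumes the event types and accessors from the gesture branch (LIBINPUT_EVENT_GESTURE_* and the libinput_event_gesture_get_*() helpers) merge as described - treat it as illustrative rather than final API.

#include <stdio.h>
#include <libinput.h>

static void
handle_gesture(struct libinput_event *event)
{
    struct libinput_event_gesture *gesture =
        libinput_event_get_gesture_event(event);
    int fingers = libinput_event_gesture_get_finger_count(gesture);

    switch (libinput_event_get_type(event)) {
    case LIBINPUT_EVENT_GESTURE_SWIPE_UPDATE:
        /* deltas of the logical center, relative to the previous event */
        printf("%d-finger swipe: dx %.2f dy %.2f\n", fingers,
               libinput_event_gesture_get_dx(gesture),
               libinput_event_gesture_get_dy(gesture));
        break;
    case LIBINPUT_EVENT_GESTURE_PINCH_UPDATE:
        /* scale is absolute (1.0 at the Start event), the angle is a
           clockwise delta in degrees */
        printf("%d-finger pinch: scale %.2f angle delta %.2f\n", fingers,
               libinput_event_gesture_get_scale(gesture),
               libinput_event_gesture_get_angle_delta(gesture));
        break;
    default:
        break;
    }
}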

Nothing overly exciting really, it's a simple API that provides a couple of basic elements of data. Once integrated into the desktop properly, it should provide for some improved navigation. OS X has had this for a long time now and it's about time we caught up.

Friday, June 5, 2015

libinput and model-specific configurations

libinput provides a number of different out-of-the-box configurations, based on capabilities. For example: middle mouse button emulation is enabled by default if a device only has left and right buttons. On devices with a physical middle button it is available but disabled by default. Likewise, whether tapping is enabled and/or available depends on hardware capabilities. But some requirements cannot be gathered purely by looking at the hardware capabilities.
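To illustrate, a caller can query availability and defaults per device. A minimal sketch, assuming a libinput build that has the tap and middle-button-emulation config groups:

#include <stdio.h>
#include <libinput.h>

static void
print_defaults(struct libinput_device *device)
{
    /* tapping is only available if the hardware supports it */
    if (libinput_device_config_tap_get_finger_count(device) > 0)
        printf("tapping available, default: %s\n",
               libinput_device_config_tap_get_default_enabled(device) ==
                   LIBINPUT_CONFIG_TAP_ENABLED ? "on" : "off");

    /* middle button emulation is available but disabled by default
       on devices with a physical middle button */
    if (libinput_device_config_middle_emulation_is_available(device))
        printf("middle emulation default: %s\n",
               libinput_device_config_middle_emulation_get_default_enabled(device) ==
                   LIBINPUT_CONFIG_MIDDLE_EMULATION_ENABLED ? "on" : "off");
}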

libinput uses a couple of udev properties, assigned through udev's hwdb, to detect device types. We use the same mechanism to provide specific tags that adjust libinput-internal behaviour. The udev properties named LIBINPUT_MODEL_... tag devices based on a set of udev rules combined with hwdb matches. For example, we tag Chromebooks with LIBINPUT_MODEL_CHROMEBOOK.
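For illustration, an hwdb entry assigning such a tag looks roughly like this (the match string below is abridged, not copied from the actual file):

# 90-libinput-model-quirks.hwdb (illustrative, match string abridged)
libinput:name:*Lenovo X230 Touchpad*:dmi:*svnLENOVO*
 LIBINPUT_MODEL_LENOVO_X230=1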

Inside libinput, we parse those tags and use them for model-specific configuration. At the time of writing, we use the Chromebook tag to automatically enable clickfinger behaviour on those touchpads (which matches the Google defaults on Chromebooks). We tag the Lenovo X230 touchpad to give it its own acceleration method. This touchpad is buggy and the data it sends has a very low resolution.

In the future these tags will likely expand to encompass more devices that need customised tweaks. But the goal is always that libinput works well out of the box, even if the hardware is quirky. Introducing these tags instead of a slew of configuration options has short-term drawbacks: it increases the workload on us maintainers and it may require software updates to get a device to work exactly the way it should. The long-term benefits are maintainability and testability though, as well as us being more aware of what hardware is out there and how it needs to be fixed. Plus the relief of not having to deal with configuration snippets that are years out of date, do all the wrong things, but still spread across forums like an STD.

Note: the tags are considered private API and may change at any time, depending what we want or need to do with them. Do not use them for home-made configuration.

Wednesday, June 3, 2015

libinput and the lack of device types

libinput uses udev tags to determine what a device is. This is a significant difference to the X.Org stack, which determines how to deal with a device based on an elaborate set of rules - rules grown over time, matured, but with a slight layer of mould on top by now. In evdev's case that is understandable: it stems from a design where you could just point it at a device in your xorg.conf and it'd automagically work, well before we even had input hotplugging in X. What it leads to now though is that the server uses slightly different rules to decide what a device is (to implement MatchIsTouchscreen, for example) than evdev does. So you may have, in theory, a device that responds to MatchIsTouchscreen only to set itself up as a keyboard.

libinput does away with this in two ways: it punts most of the decisions on what a device is to udev and its ID_INPUT_... properties. A device marked as ID_INPUT_KEYBOARD will initialize a keyboard interface, an ID_INPUT_TOUCHPAD device will initialize a touchpad backend. The obvious advantage of this is that we only have one place where we have generic device type rules. The second advantage is that where this one place isn't precise enough, it's simple to override with custom rule sets. For example, Wacom tablets are hard to categorise just by looking at the device alone. libwacom generates a udev rule containing the VID/PID of each known device with the right ID_INPUT_TABLET etc. properties.
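Such a generated rule might look roughly like this - the idProduct value below is made up for illustration:

# in the style of libwacom's generated tablet rules (product ID hypothetical)
SUBSYSTEM=="input", ATTRS{idVendor}=="056a", ATTRS{idProduct}=="1234", ENV{ID_INPUT}="1", ENV{ID_INPUT_TABLET}="1"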

This is a libinput-internal behaviour. Externally, we are a lot more vague. In fact, we don't tell you at all what a device is, other than what events it will send (pointer, keyboard, or touch). We have thought about implementing some sort of device identifier and the conclusion is that we won't implement this as part of libinput's API because it will simply be wrong some of the time. And if it's wrong, it requires the caller to re-implement something on top of it. At which point the caller may as well implement all of it instead. Why do we expect it to be wrong? Because libinput doesn't know the exact context that requires a device to be labelled as a specific type.

Take a keyboard for example. There are a great many devices that send key events. To the client a keyboard may be any device that can get an XKB layout and is used for typing. But to the compositor, a keyboard may be anything that can send a few specific keycodes. A device with nothing but KEY_POWER? That's enough for the compositor to decide to shut down but that device may not otherwise work as a keyboard. libinput can't know this context. But what libinput provides is the API to query information. libinput_device_pointer_has_button() and libinput_device_keyboard_has_key() are the two candidates here to query about a specific set of buttons and/or keys.
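For example, a compositor could build its own context-specific checks on top of these. A sketch - the helper names and the choice of keys are mine, not libinput API:

#include <linux/input.h>
#include <libinput.h>

/* enough of a keyboard to trigger a shutdown? */
static int
can_trigger_shutdown(struct libinput_device *device)
{
    return libinput_device_keyboard_has_key(device, KEY_POWER) == 1;
}

/* a (hypothetical) heuristic for "usable for typing" */
static int
is_typing_keyboard(struct libinput_device *device)
{
    return libinput_device_keyboard_has_key(device, KEY_A) == 1 &&
           libinput_device_keyboard_has_key(device, KEY_SPACE) == 1 &&
           libinput_device_keyboard_has_key(device, KEY_ENTER) == 1;
}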

Touchpads, trackpoints and mice all send pointer events and there is no flag that tells you the device type - that is intentional. libinput doesn't have any intrinsic knowledge about what is a touchpad; we rely on the ID_INPUT_TOUCHPAD tag. At best, we refuse some devices that were clearly mislabelled, but we don't init devices as touchpads that aren't labelled as such. Any device type identification would likely be wrong - for example, some Wacom tablets are touchpads internally but would be considered tablets in other contexts.

So in summary, callers are encouraged to rely on the udev information and other bits they can pull from the device to group it into the semantically correct device type. libinput_device_get_udev_device() provides a udev handle for a libinput device and all configurable features can be queried directly (e.g. "does this device support tapping?"). libinput will not provide a device type because it would likely be wrong in the current context anyway.
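A sketch of what that looks like in practice, classifying a touchpad via the udev property:

#include <string.h>
#include <libudev.h>
#include <libinput.h>

static int
is_touchpad(struct libinput_device *device)
{
    /* returns a new reference that we must unref */
    struct udev_device *udev_device =
        libinput_device_get_udev_device(device);
    const char *prop;
    int rc;

    if (!udev_device)
        return 0;

    prop = udev_device_get_property_value(udev_device,
                                          "ID_INPUT_TOUCHPAD");
    rc = prop && strcmp(prop, "1") == 0;
    udev_device_unref(udev_device);

    return rc;
}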

Tuesday, June 2, 2015

Extended tap-and-drag in libinput

TLDR: as of libinput 0.16 you can end a touchpad tap-and-drag with a final additional tap

libinput originally only supported single-tap and double-tap. With version 0.15 we now support multi-tap, so you can tap repeatedly to get a triple, quadruple, etc. click. This is quite useful in text editors where a triple click highlights a line, four clicks highlight a paragraph, and 28 clicks order a new touchpad from ebay. Multi-tap also works with tap-and-drag, so a triple tap followed by a finger down and hold will send three clicks followed by a single click.

We also support continuous tap-and-drag, something the synaptics driver provided with the LockedDrags option: once the user is in dragging mode (x * tap + hold finger down) they can lift the finger and set it down again without the drag being interrupted. This is quite useful when you have to move across the whole screen, especially on smaller touchpads or for users that prefer slow pointer acceleration.

Of course, this adds a timeout to the drag release since we need to wait and see whether the finger comes down again. To cut that timeout short, we added a tap-to-release feature (contributed by Velimir Lisec): once in drag mode a final tap will release the button immediately. This is something that OS X has supported for years and, after a bit of muscle memory retraining, it becomes second nature. So the new timeout-free way to tap-and-drag on a touchpad is now:

   tap, finger-down, move, .... move, finger up, tap

Update 03/06/25: added a reference to the synaptics LockedDrags option

Friday, March 6, 2015

Why libinput doesn't support edge scrolling

libinput has supported edge scrolling since version 0.7.0. Whoops, how does the post title go with this statement? Well, libinput supports edge scrolling, but only on some devices, and chances are your touchpad won't be one of them. Bug 89381 is the reference bug here.

First, what is edge scrolling? As the libinput documentation illustrates, it is scrolling triggered by finger movement within specific regions of the touchpad - the right and bottom edges for vertical and horizontal scrolling, respectively. This is in contrast to two-finger scrolling, triggered by a two-finger movement anywhere on the touchpad. synaptics has had edge scrolling since at least 2002, the date of the earliest commit in the repo. Back then we didn't have multitouch-capable touchpads; these days they're the default and you'd struggle to find one that doesn't support at least two fingers. But back then edge scrolling was the default, and touchpads even had the markings for those scroll edges painted on.

libinput adds a whole bunch of features to the touchpad driver, but those features make it hard to support edge scrolling. First, libinput has quite smart software button support. Those buttons are usually on the lowest ~10mm of the touchpad. Depending on finger movement and position libinput will send a right button click, movement will be ignored, etc. You can leave one finger in the button area while using another finger on the touchpad to move the pointer. You can press both left and right areas for a middle click. And so on. On many touchpads the vertical travel/physical resistance is enough to trigger a movement every time you click the button, just by your finger's logical center moving.

libinput also has multi-directional scroll support. Traditionally we only sent one scroll event for vertical/horizontal at a time, even going as far as locking the scroll direction. libinput changes this and only requires an initial threshold to start scrolling; after that the caller will get both horizontal and vertical scroll information. The reason is simple: it's context-dependent when horizontal scrolling should be used, so a global toggle to disable it doesn't make sense. And libinput's scroll coordinates are much more fine-grained too, which is particularly useful for natural scrolling where you'd expect the content to move with your fingers.

Finally, libinput has smart palm detection. The large majority of palm touches are along the left and right edges of the touchpad and they're usually indistinguishable from finger presses (same pressure values for example). Without palm detection some laptops are unusable (e.g. the T440 series).

These features interfere heavily with edge scrolling. Software button areas are in the same region as the horizontal scroll area, and palm presses are in the same region as the vertical edge scroll area. The lower vertical edge scroll zone overlaps with the software buttons - and that's where you would put your finger if you wanted to quickly scroll up in a document (or down, for natural scrolling). To support edge scrolling on those touchpads, we'd need heuristics and timeouts to guess when something is a palm, a software button click, a scroll movement, the start of a scroll movement, etc. The heuristics are unreliable and the timeouts reduce responsiveness in the UI. So our decision was to only provide edge scrolling on touchpads where it is required, i.e. those that cannot support two-finger scrolling - typically older touchpads with physical buttons. All other touchpads provide only two-finger scrolling. And we are focusing on making two-finger scrolling good enough that you don't need or want edge scrolling (please file bugs for anything that's broken).
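For completeness: a caller can see whether a given touchpad offers edge scrolling at all through the scroll-method configuration. A sketch, assuming the scroll-method config API:

#include <stdint.h>
#include <libinput.h>

static void
enable_edge_scrolling_if_available(struct libinput_device *device)
{
    uint32_t methods = libinput_device_config_scroll_get_methods(device);

    /* LIBINPUT_CONFIG_SCROLL_EDGE is only in the mask on devices
       where libinput supports edge scrolling */
    if (methods & LIBINPUT_CONFIG_SCROLL_EDGE)
        libinput_device_config_scroll_set_method(device,
                                                 LIBINPUT_CONFIG_SCROLL_EDGE);
    else if (methods & LIBINPUT_CONFIG_SCROLL_2FG)
        libinput_device_config_scroll_set_method(device,
                                                 LIBINPUT_CONFIG_SCROLL_2FG);
}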

Now, before you get too agitated: if edge scrolling is that important to you, invest the time you would otherwise spend sharpening pitchforks, lighting torches and painting picket signs into developing a model that allows us to do reliable edge scrolling in light of all the above, without breaking software buttons and while maintaining palm detection. We'd be happy to consider it.

libinput scroll sources

This feature got merged for libinput 0.8 but I noticed I hadn't blogged about it. So belatedly, here is a short description of scroll sources in libinput.

Scrolling is a fairly simple concept. You move the mouse wheel and the content moves down. Beyond that the details get quite nitty, possibly even gritty. On touchpads, scrolling is emulated through a custom finger movement (e.g. two-finger scrolling). A mouse wheel moves in discrete steps of (usually) 15 degrees; a touchpad's finger movement is continuous (within the device's physical resolution). Another scroll method is implemented for the pointing stick: holding the middle button down while moving the stick will generate scroll events. Like touchpad scroll events, these events are continuous. I'll ignore natural scrolling in this post because it just inverts the scroll direction. Kinetic scrolling ("fling scrolling") is a comparatively recent feature: when you lift the finger, the final finger speed determines how long the software will keep emulating scroll events. In synaptics, this is done in the driver and causes all sorts of issues - the driver may keep sending scroll events even while you start typing.

In libinput, there is no kinetic scrolling at all; what we have instead are scroll sources. Currently three sources are defined: wheel, finger and continuous. Wheel is obvious: it provides the physical value in degrees (see this post) and in discrete steps. The "finger" source is more interesting: it is the hint provided by libinput that the scroll event is caused by a finger movement on the device. This means that a) there are no discrete steps and b) libinput guarantees a terminating scroll event when the finger is lifted off the device. This enables the caller to implement kinetic scrolling: simply wait for the terminating event and then calculate the velocity from the most recent events. More importantly, because the kinetic scrolling implementation is pushed to the caller (who will push it to the client when the Wayland protocol for this is ready), kinetic scrolling can be implemented on a per-widget basis.

Finally, the third source is "continuous". The only big difference to "finger" is that we can't guarantee that the terminating event is sent, simply because we don't know if it will happen. It depends on the implementation. For the caller this means: if you see a terminating scroll event you can use it as kinetic scroll information, otherwise just treat it normally.

For both the finger and the continuous sources the scroll distance provided by libinput is equivalent to "pixels", i.e. the value that the relative motion of the device would otherwise send. This means the caller can interpret this depending on current context too. Long-term, this should make scrolling a much more precise and pleasant experience than the old X approach of "You've scrolled down by one click".
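Putting it together, a caller might dispatch on the axis source like this (a sketch, error handling omitted):

#include <stdio.h>
#include <libinput.h>

static void
handle_axis(struct libinput_event_pointer *event)
{
    enum libinput_pointer_axis axis = LIBINPUT_POINTER_AXIS_SCROLL_VERTICAL;
    double value;

    if (!libinput_event_pointer_has_axis(event, axis))
        return;

    value = libinput_event_pointer_get_axis_value(event, axis);

    switch (libinput_event_pointer_get_axis_source(event)) {
    case LIBINPUT_POINTER_AXIS_SOURCE_WHEEL:
        /* discrete steps, value in degrees */
        printf("wheel: %.2f\n", value);
        break;
    case LIBINPUT_POINTER_AXIS_SOURCE_FINGER:
        /* a value of 0.0 is the terminating event: the finger was
           lifted, so kinetic scrolling may start here */
        if (value == 0.0)
            printf("finger lifted, start kinetic scrolling\n");
        else
            printf("finger: %.2f\n", value);
        break;
    case LIBINPUT_POINTER_AXIS_SOURCE_CONTINUOUS:
        /* like finger, but a terminating event is not guaranteed */
        printf("continuous: %.2f\n", value);
        break;
    default:
        break;
    }
}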

The API documentation for all this is here: http://wayland.freedesktop.org/libinput/doc/latest/group__event__pointer.html, search for anything with "pointer_axis" in it.

Friday, February 6, 2015

libinput device groups

I just pushed a patchset into libinput to introduce the concept of device groups. This post will explain what they are in this context and why they are needed.

libinput exposes kernel devices as an opaque struct libinput_device. It only recognises evdev devices at this point; this may change in the future if we see a need for it. libinput also exposes a few bits of information about the device such as the name, PID/VID and a handle to the struct udev_device that matches this device. The latter enables callers to get more information from the device. libinput also provides a bunch of configuration settings for each device: pointer devices get acceleration settings, absolute devices have calibration, etc. For most devices this works just fine.
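A quick sketch of pulling those bits out of a device:

#include <stdio.h>
#include <libinput.h>

static void
print_device_info(struct libinput_device *device)
{
    printf("%s (%04x:%04x)\n",
           libinput_device_get_name(device),
           libinput_device_get_id_vendor(device),
           libinput_device_get_id_product(device));
}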

Some devices like Wacom tablets are represented as multiple event nodes. On a 3.19 kernel you'd get three event nodes for an Intuos 5 touch - the pad (i.e. the tablet itself), a touch node and one node for all the tools (stylus, eraser, etc., multiplexed). libinput exposes each of these nodes as a separate device, but that is problematic when applying certain configuration settings. For example, applying a left-handed configuration to the tablet means it's rotated by 180 degrees, so we need to rotate the coordinates accordingly. Of course, such a rotation would have to apply to both the touch and the stylus devices, but the caller is left having to figure out which other devices it needs to apply the setting to.

The original idea was to present such devices as a single, merged struct libinput_device with multiple capabilities, i.e. a single physical device that can do touch, tablet and pad buttons. A configuration setting like left-handed-ness would then apply to all devices transparently. The API is clean, usage is simple, everybody is happy. Except when they aren't - this doesn't actually work particularly well. First, having such merged devices means we require devices to change at runtime, adding/removing capabilities on-the-fly which puts a burden on the callers to handle this correctly. Second, not all configuration options apply to all subdevices. If the Intuos is used as a touchpad you may want natural scrolling enabled on the touchpad but the wheel on the Wacom mouse should probably still work normally. Third, the subdevices may have different PID/VIDs and certainly have different udev devices. So now libinput needs a way to get to those. In short, a merged device looks nice in theory but the implementation of it would make the libinput API cumbersome to use for little benefit.

The solution to this is device groups: each device in libinput is now part of a struct libinput_device_group. This is just an opaque object that doesn't do anything but sit there, but it's enough to identify how devices are grouped together. If two devices return the same device group, they logically belong together. The caller can then decide what to do with it, e.g. loop through all devices of a group to apply a certain configuration setting to all of them. The basic approach is thus:

new_device = libinput_event_get_device(event);
new_group = libinput_device_get_device_group(new_device);
libinput_device_group_ref(new_group);

/* pseudocode: iterate over the devices seen so far */
for each (device, group) in previously_stored_devices {
   if (group == new_group)
      printf("This device shares a group with %s\n",
             libinput_device_get_name(device));
}

A device group's lifetime is as you'd expect: it is created for the first device in the group and ceases once the last device in the group is removed. It's not deleted until the last reference is released, but it won't get recycled. In other words, if you keep unplugging and re-plugging that Intuos tablet, the device group will be new after every plug.

Note that we're intentionally not providing ways to get the devices from a device group, to count the devices within a group, etc. This avoids race conditions (the view libinput has of the devices isn't the same as the one the caller has while going through the event queue) but it also makes the API simpler. libinput's callers are mainly compositors which use toolkits with advanced data structures (glib, Qt, etc.). Using a pointer as the key into a hashmap is simpler and less buggy than whatever hand-crafted hashmap/list implementation we could provide through the libinput API.