Friday, February 10, 2017

libinput knows about internal and external touchpads

libinput has a couple of features that 'automagically' work on touchpads, such as disable-while-typing, disabling the touchpad when the lid switch closes, and disabling the touchpad when an external mouse is plugged in [1]. But not all of these features make sense on all touchpads. For example, an Apple Magic Trackpad doesn't need disable-while-typing because unless you have a creative arrangement of input devices [2], the touchpad won't be where your palm is likely to hit it. Likewise, a Logitech T650 connected over a unifying receiver shouldn't get disabled when the laptop lid closes.

For this to work, libinput has some code to figure out whether a touchpad is internal or external. Initially we had detection code within libinput but eventually moved this to the ID_INPUT_TOUCHPAD_INTEGRATION property, now set by udev's hwdb (systemd 231 and later). Having it in the hwdb makes it quite trivial to override locally where the current rules are insufficient (and until the hwdb is fixed, thanks for filing a bug). We still have the fallback code though, in case the tag is missing. On a sufficiently modern distribution, udevadm info /sys/class/input/event4 for your touchpad device node should show something like ID_INPUT_TOUCHPAD_INTEGRATION=internal.
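
For example, a local override marking a touchpad as external could look roughly like this (the match line below is made up; the exact match format is documented in systemd's 70-touchpad.hwdb):

# /etc/udev/hwdb.d/71-touchpad-local.hwdb (hypothetical file)
touchpad:usb:v1234p5678*
 ID_INPUT_TOUCHPAD_INTEGRATION=external

After that, rebuild the hwdb and re-apply the properties with something like sudo systemd-hwdb update && sudo udevadm trigger --sysname-match=event4.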

So for any feature that libinput adds for touchpads, we only enable it where it makes sense. That's why your external touchpad doesn't trigger disable-while-typing or the lid switch.

[1] ok, I admit, this is something we should've left to the client, but now we have the feature.
[2] yes, I'm sure there's at least one person out there that uses the touchpad upside down in front of the keyboard and is now angry that libinput doesn't allow arbitrary rotation of the device combined with configurable dwt. I think of you every night I cry myself to sleep.

Wednesday, February 1, 2017

libinput and lid switch events

I merged a patchset from James Ye today to add support for switch events to libinput, specifically: lid switch events. This feature is scheduled for libinput 1.7.

First, what are switches and how are they different from keys? A key's state is transient with a neutral state of "key is up". The state itself is expected to change frequently. Switches don't always have a defined logical neutral state and the state changes only infrequently. This requires different handling in applications and thus libinput exposes a new interface (and capability) for switches.

The interface itself is trivial. A switch event has two properties, the switch type (e.g. "lid") and the switch state (on/off). See the libinput-debug-events source code for a simple example of printing the state and type.
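
A rough sketch of such a handler against the libinput 1.7 API (this is not the actual libinput-debug-events code; the handler would be called from the dispatch loop for LIBINPUT_EVENT_SWITCH_TOGGLE events and error handling is omitted):

#include <stdio.h>
#include <libinput.h>

/* print type and state of a single switch event */
static void handle_switch(struct libinput_event *ev)
{
    struct libinput_event_switch *sw = libinput_event_get_switch_event(ev);
    enum libinput_switch type = libinput_event_switch_get_switch(sw);
    enum libinput_switch_state state = libinput_event_switch_get_switch_state(sw);

    printf("switch %s state %s\n",
           type == LIBINPUT_SWITCH_LID ? "lid" : "unknown",
           state == LIBINPUT_SWITCH_STATE_ON ? "on" : "off");
}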

In libinput, we generally try to restrict ourselves to the cases we know how to handle. So in the first iteration, we'll support a single switch event: the lid switch. This is the toggle that changes when you close the lid on your laptop.

But libinput uses this internally too: touchpads are disabled automatically whenever the lid is closed. Indeed, this functionality was the main motivation for this patchset. On a number of devices, we get ghost touches when the lid is closed. Even though the touchpad is unreachable by the user, interference with the screen still causes events, moving the pointer in unexpected ways and generally being a nuisance. Some trackpoints suffer from the same issue. But now that libinput knows about the lid switch it can transparently disable the touchpad whenever the lid is closed and thus discard the events.

Lid switches on some devices are unreliable. There are some devices where the lid is permanently closed and other devices where the lid can be closed, but we'll never see the open event. So we change behaviour based on a few factors. After all, no-one likes a dysfunctional touchpad because the lid switch is broken (if you do, seek help). For one, whenever we detect keyboard events while in logically closed state we'll assume that the lid is open after all and adjust state accordingly. Unless the lid switch is reliable, we don't sync the initial state. That's annoying for those who start libinput in closed mode, but it filters out all devices that set the lid switch to "on" and then never change again. On the Surface 3 devices we go even further: we know those devices need a bit of hand-holding. So whenever we detect activity on the keyboard, we also write the EV_SW/SW_LID state to the device node, thus updating the kernel to be correct again (and thus helping everyone else who may be listening).
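
To illustrate that last bit: writing the switch state back boils down to writing an EV_SW event to the evdev node. A minimal sketch, not libinput's actual code (the helper name is made up and error handling is omitted):

#include <linux/input.h>
#include <string.h>
#include <unistd.h>

/* hypothetical helper: tell the kernel the lid switch is open again */
static void sync_lid_open(int fd)
{
    struct input_event ev[2];

    memset(ev, 0, sizeof(ev));
    ev[0].type = EV_SW;
    ev[0].code = SW_LID;
    ev[0].value = 0;              /* 0 means "lid open" */
    ev[1].type = EV_SYN;
    ev[1].code = SYN_REPORT;

    write(fd, ev, sizeof(ev));    /* error handling omitted */
}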

The exact behaviours will likely change slightly over time as we have to deal with corner-cases one-by-one. But meanwhile, it's even easier for compositors to listen to switch events and users don't have to deal with ghost touches anymore. Many thanks to James Ye for implementing this.

Monday, January 30, 2017

How libinput opens device nodes

In order to read events and modify devices, libinput needs a file descriptor to the /dev/input/event node. But those files are only accessible by the root user. If libinput were to open these directly, we would force any process that uses libinput to have sufficient privileges to open those files. But these days everyone tries to reduce a process's privileges wherever possible, so libinput simply delegates opening and closing the file descriptors to the caller.

The functions to create a libinput context take a parameter of type struct libinput_interface. This is a non-opaque struct with two function pointers: "open_restricted" and "close_restricted". Whenever libinput needs to open or close a file, it calls the respective function. For open_restricted() libinput expects the caller to return an fd opened with the given flags.

In the simplest case, a caller can merely call open() and close(). This is what the debugging tools do (and the test suite). But obviously this means you have to run those as root. The main wayland compositors (weston, mutter, kwin, ...) instead forward the request to systemd-logind. That then opens the event node and returns the fd which is passed to libinput. And voila, the compositors don't need to run as root, libinput doesn't have to know how the fd is opened and everybody wins. Plus, logind will mute the fd on VT-switch, so we can't leak keyboard events.
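
To make that simplest case concrete, here is a minimal sketch of such an interface (this is the gist of what the debugging tools do; real code checks for errors more carefully):

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <libinput.h>

static int open_restricted(const char *path, int flags, void *user_data)
{
    int fd = open(path, flags);
    return fd < 0 ? -errno : fd;   /* a negative errno signals failure */
}

static void close_restricted(int fd, void *user_data)
{
    close(fd);
}

static const struct libinput_interface interface = {
    .open_restricted = open_restricted,
    .close_restricted = close_restricted,
};

/* context creation then takes this interface, e.g.:
 * libinput_udev_create_context(&interface, NULL, udev); */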

In the X.org case it's a combination of the two. When the server runs with systemd-logind enabled, it will open the fd before the driver initialises the device. During the init stage, libinput asks the xf86-input-libinput driver to open the device node. The driver forwards the request to the server which simply returns the already-open fd. When the server runs without systemd-logind, the server opens the file normally with a standard open() call.

So in summary: you can easily run libinput without systemd-logind but you'll have to figure out how to get the required privileges to open device nodes. For anything more than a test or debugging program, I recommend using systemd-logind.

Thursday, January 26, 2017

libinput and wheel tilt events

We're in the middle of the 1.7 development cycle and one of the features merged already is support for "wheel tilt", i.e. support for devices that don't have a separate horizontal wheel but instead rely on a tilt motion for horizontal events. Now, the way this is handled in the kernel is that the events are sent via REL_WHEEL (or REL_HWHEEL) so we don't actually need special code in libinput to handle tilt. But libinput tries to make sense of input devices so the upper layers have a reliable base to build on - and that's why we need tilt-wheels to be handled.

For 'pointer axis' events (i.e. scroll events) libinput provides scroll sources. These specify how the scroll event was generated, allowing a caller to handle things accordingly. A finger-based scroll for example can trigger kinetic scrolling while a mouse wheel would not usually do so. The value for a pointer axis is also dependent on the scroll source - for continuous/finger-based scrolling the value is in pixels; for a mouse wheel, the value is in degrees. This obviously doesn't work for a tilt event because degrees don't make sense in this context. So the new axis source is just that, an indicator that the event was caused by a wheel tilt rather than a rotation. Its value matches the default wheel rotation (i.e. 15 degrees) just to make using it easier.
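
In code, a caller would switch on the axis source to decide how to interpret the value. A sketch against the libinput API (the handling comments are illustrative, not prescriptive):

#include <libinput.h>

/* sketch: interpret a horizontal scroll value based on its source;
 * a real caller checks libinput_event_pointer_has_axis() first */
static void handle_axis(struct libinput_event_pointer *pev)
{
    double value = libinput_event_pointer_get_axis_value(pev,
                       LIBINPUT_POINTER_AXIS_SCROLL_HORIZONTAL);

    switch (libinput_event_pointer_get_axis_source(pev)) {
    case LIBINPUT_POINTER_AXIS_SOURCE_FINGER:
    case LIBINPUT_POINTER_AXIS_SOURCE_CONTINUOUS:
        /* value is in pixels, kinetic scrolling may apply */
        break;
    case LIBINPUT_POINTER_AXIS_SOURCE_WHEEL:
        /* value is in degrees of wheel rotation */
        break;
    case LIBINPUT_POINTER_AXIS_SOURCE_WHEEL_TILT:
        /* tilt: value matches the default 15-degree wheel click */
        break;
    }
    (void)value;
}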

Of course, a device won't tell us whether it provides a proper wheel or just tilt. So we need a hwdb property, and I've added that to systemd's repo. To make this work, set the MOUSE_WHEEL_TILT_HORIZONTAL and/or MOUSE_WHEEL_TILT_VERTICAL property for your hardware and you're off. Yay.
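
A matching local hwdb entry would look roughly like this (the match line is made up; see systemd's 70-mouse.hwdb for the exact match format for your device):

# /etc/udev/hwdb.d/71-tilt-mouse.hwdb (hypothetical file)
mouse:usb:v1234p5678:name:Example Tilt Mouse:
 MOUSE_WHEEL_TILT_HORIZONTAL=1
 MOUSE_WHEEL_TILT_VERTICAL=1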

Patches for the wayland protocol have been merged as well, so this is/will be available to wayland clients.

Tuesday, January 3, 2017

The definitive guide to synclient

This post describes the synclient tool, part of the xf86-input-synaptics package. It does not describe the various options, that's what the synclient(1) and synaptics(4) man pages are for. This post describes what synclient is, where it came from and how it works on a high level. Think of it as an anti-bus-factor post.

Maintenance status

The most important thing first: synclient is part of the synaptics X.Org driver which is in maintenance mode, superseded by libinput and the xf86-input-libinput driver. In general, you should not be using synaptics anymore anyway; switch to libinput instead (and report bugs where the behaviour is not correct). It is unlikely that significant additional features will be added to synclient or synaptics, and bugfixes are rare too.

The interface

synclient's interface is extremely simple: it takes a list of key/value pairs that are all set at the same time. For example, the following command sets two options, TapButton1 and TapButton2:

synclient TapButton1=1 TapButton2=2
The -l switch lists the current values in one big list:
$ synclient -l
Parameter settings:
    LeftEdge                = 1310
    RightEdge               = 4826
    TopEdge                 = 2220
    BottomEdge              = 4636
    FingerLow               = 25
    FingerHigh              = 30
    MaxTapTime              = 180
    ...
The commandline interface is effectively a mapping of the various xorg.conf options. As said above, look at the synaptics(4) man page for details on each option.

History

A decade ago, the X server had no capabilities to change driver settings at runtime. Changing a device's configuration required rewriting an xorg.conf file and restarting the server. To avoid this, the synaptics X.Org touchpad driver exposed a shared memory (SHM) segment. Anyone with knowledge of the memory layout (an internal struct) and permission to write to that segment could change driver options at runtime. This is how synclient came to be: it was the tool that knew that memory layout. A synclient command would thus set the correct bits in the SHM segment and the driver would use the newly updated options. For obvious reasons, synclient and synaptics had to be the same version to work together.

8 or so years ago, the X server got support for input device properties, a generic key/value store attached to each input device. The keys are the properties, identified by an "Atom" (explained below). The values are driver-specific. All drivers make use of this now; being able to change a setting at runtime is simply a matter of changing a property the driver knows about.

Atoms are 32-bit unsigned integers created for each property name at runtime. They represent a unique string (the property name) and can be created by applications too. Property name to Atom mappings are global. Once any driver initialises a property by its name (e.g. "Synaptics Tap Actions"), that property and the corresponding Atom will exist globally until the server resets. Atoms unknown to a driver are simply ignored.

synclient was converted to use properties instead of the SHM segment and eventually the SHM support was removed from both synclient and the driver itself. The backend to synclient is thus identical to the one used by the xinput tool or tools used by other drivers (e.g. the xsetwacom tool). synclient's killer feature was that it was the only tool that knew how to configure the driver; these days it's merely a tool that maps commandline arguments to properties. xinput, GNOME, KDE, they all do the same thing in the backend.

How synclient works

The driver has properties of a specific name, format and value range. For example, the "Synaptics Tap Action" property contains 7 8-bit values, each representing a button mapping for a specific tap action. If you change the fifth value of that property, you change the button mapping for a single-finger tap. Another property "Synaptics Off" is a single 8-bit value with an allowed range of 0, 1 or 2. The properties are described in the synaptics(4) man page. There is no functional difference between this synclient command:

synclient SynapticsOff=1
and this xinput command
xinput set-prop "SynPS/2 Synaptics TouchPad" "Synaptics Off" 1
Both set the same property with the same calls. synclient uses XI 1.x's XChangeDeviceProperty() and xinput uses XI 2.x's XIChangeProperty() if available, but that doesn't really matter. They both fetch the property, overwrite the respective value and send it back to the server.
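
Stripped down, that dance looks like this (a sketch using the XI 2.x calls; device ID lookup and error handling are omitted, and real code would check that the Atom exists):

#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>

/* set "Synaptics Off" to 1 on the device with the given id */
static void set_synaptics_off(Display *dpy, int deviceid)
{
    Atom prop = XInternAtom(dpy, "Synaptics Off", True);
    Atom type;
    int format;
    unsigned long nitems, bytes_after;
    unsigned char *data;

    /* fetch the current value ... */
    XIGetProperty(dpy, deviceid, prop, 0, 1, False, AnyPropertyType,
                  &type, &format, &nitems, &bytes_after, &data);
    /* ... overwrite it ... */
    data[0] = 1;
    /* ... and send it back to the server */
    XIChangeProperty(dpy, deviceid, prop, type, format,
                     PropModeReplace, data, (int)nitems);
    XFree(data);
    XFlush(dpy);
}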

Pitfalls and quirks

synclient is a simple tool. If multiple touchpads are present it will simply pick the first one. This is a common issue for users with an i2c touchpad and will be even more common once the RMI4/SMBus support is in a released kernel. In both cases, the kernel creates the i2c/SMBus device and an additional PS/2 touchpad device that never sends events. So if synclient picks that device, all the settings are changed on a device that doesn't actually send events. This depends on the order the devices were added to the X server and can vary between reboots. You can work around that by disabling or ignoring the PS/2 device.
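
One way to do the latter is an xorg.conf snippet telling the server to skip the mute PS/2 device (the product name below is an example; check the device list on your machine for the real one):

Section "InputClass"
    Identifier "Ignore duplicate PS/2 touchpad"
    MatchProduct "PS/2 Synaptics TouchPad"
    Option "Ignore" "on"
EndSection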

synclient is a one-shot tool; it does not monitor devices. If a device is added at runtime, the user must run the command to change its settings. If a device is disabled and re-enabled (VT-switch, suspend/resume, ...), the user must run synclient again to change the settings. This is a major reason we recommend against using synclient; the desktop environment should take care of this. synclient will also conflict with the desktop environment in that it isn't aware when something else changes things. If synclient runs before the DE's init scripts (e.g. through xinitrc), its settings may be overwritten by the DE. If it runs later, it overwrites the DE's settings.

synclient exclusively supports synaptics driver properties. It cannot change any other driver's properties and it cannot change the properties created by the X server on each device. That's another reason we recommend against it, because you have to mix multiple tools to configure all devices instead of using e.g. the xinput tool for all property changes. Or, as above, letting the desktop environment take care of it.

The interface of synclient is IMO not significantly more obvious than setting the input properties directly. One has to look up what TapButton1 does anyway, so looking up how to set the property with the more generic xinput is the same amount of effort. A wrong value won't give the user anything more useful than the equivalent of a "this didn't work".

TL;DR

If you're TL;DR'ing an article labelled "the definitive guide to" you're kinda missing the point...

Tuesday, December 20, 2016

xf86-input-synaptics is not a Synaptics, Inc. driver

This is a common source of confusion: the legacy X.Org driver for touchpads is called xf86-input-synaptics but it is not a driver written by Synaptics, Inc. (the company).

The repository goes back to 2002 and for the first couple of years Peter Osterlund was the sole contributor. Back then it was called "synaptics" and really was a "synaptics device" driver, i.e. it handled PS/2 protocol requests to initialise Synaptics, Inc. touchpads. Evdev support was added in 2003, punting the initialisation work to the kernel instead. This was the groundwork for a generic touchpad driver. In 2008 the driver was renamed to xf86-input-synaptics and relicensed from GPL to MIT to take it under the X.Org umbrella. I've been involved with it since 2008 and the official maintainer since 2011.

For many years now, the driver has been a generic touchpad driver that handles any device that the Linux kernel can handle. In fact, most bugs attributed to the synaptics driver not finding the touchpad are caused by the kernel not initialising the touchpad correctly. The synaptics driver reads the same evdev events that are also handled by libinput and the xf86-input-evdev driver, any differences in behaviour are driver-specific and not related to the hardware. The driver handles devices from Synaptics, Inc., ALPS, Elantech, Cypress, Apple and even some Wacom touch tablets. We don't care about what touchpad it is as long as the evdev events are sane.

Synaptics, Inc.'s developers are active in kernel development to help get new touchpads up and running. Once the kernel handles them, the xorg drivers and libinput will handle them too. I can't remember any significant contribution by Synaptics, Inc. to the X.org synaptics driver, so they are simply neither to credit nor to blame for the current state of the driver. The top 10 contributors since August 2008 when the first renamed version of xf86-input-synaptics was released are:

     8 Simon Thum
    10 Hans de Goede
    10 Magnus Kessler
    13 Alexandr Shadchin
    15 Christoph Brill
    18 Daniel Stone
    18 Henrik Rydberg
    39 Gaetan Nadon
    50 Chase Douglas
   396 Peter Hutterer
There's a long tail of other contributors but the top ten illustrate that it wasn't Synaptics, Inc. that wrote the driver. Any complaints about Synaptics, Inc. not maintaining/writing/fixing the driver are missing the point, because this driver was never a Synaptics, Inc. driver. That's not a criticism of Synaptics, Inc. btw, that's just how things are. We should have renamed the driver to just xf86-input-touchpad back in 2008 but that ship has sailed now. And synaptics is about to be superseded by libinput anyway, so it's simply not worth the effort now.

The other reason I included the commit count in the above: I'm also the main author of libinput. So "the synaptics developers" and "the libinput developers" are effectively the same person, i.e. me. Keep that in mind when you read random comments on the interwebs, it makes it easier to identify people just talking out of their behind.

Monday, December 19, 2016

libinput touchpad pointer acceleration analysis

A long-standing criticism of libinput is its touchpad acceleration code, oscillating somewhere between "terrible", "this is bad and you should feel bad" and "I can't complain because I keep missing the bloody send button". I finally found the time and some more laptops to sit down and figure out what's going on.

I recorded touch sequences of the following movements:

  • super-slow: a very slow movement as you would do when pixel-precision is required. I recorded this by effectively slowly rolling my finger. This is an unusual but sometimes required interaction.
  • slow: a slow movement as you would do when you need to hit a target several pixels across from a short distance away, e.g. the Firefox tab close button
  • medium: a medium-speed movement, though probably closer to the slow side. This would be similar to the movement when you move 5cm across the screen.
  • medium-fast: a medium-to-fast speed movement. This would be similar to the movement when you move 5cm across the screen onto a large target, e.g. when moving between icons in the file manager.
  • fast: a fast movement. This would be similar to the movement when you move between windows some distance apart.
  • flick: a flick movement. This would be similar to the movement when you move to a corner of the screen.
Note that all these are by definition subjective and somewhat dependent on the hardware. Either way, I tried to get something of a reasonable subset.

Next, I ran this through a libinput 1.5.3 augmented with printfs in the pointer acceleration code and a script to post-process that output. Unfortunately, libinput's pointer acceleration internally uses units equivalent to a 1000dpi mouse and that's not something easy to understand. Either way, the numbers themselves don't matter too much for analysis right now and I've now switched everything to mm/s anyway.

A note ahead: the analysis relies on libinput recording an evemu replay. That relies on uinput, and event timestamps are subject to a little bit of drift across recordings. Some differences in the before/after of the same recording can likely be blamed on that.

The graph I'll present for each recording is relatively simple: it shows the velocity and the matching factor. The x axis is simply the events in sequence, the y axes are the factor and the velocity (note: two different scales in one graph). The graph colours in the bits that see some type of acceleration: green means "maximum factor applied", yellow means "decelerated", and the purple "adaptive" means per-velocity acceleration is applied. Anything that remains white is used as-is (aside from the constant deceleration). The colours don't add new data; they merely highlight the same velocity/factor information.

Interesting numbers for the factor are 0.4 and 0.8. Touchpads have a constant factor of 0.4, i.e. a factor of 0.4 means "no acceleration applied"; 0.8 is the maximum factor. The maximum factor is twice as big as the normal factor, so the pointer moves twice as fast. Anything below 0.4 means we decelerate the pointer, i.e. the pointer moves slower than the finger.

The super-slow movement shows that the factor is, aside from the beginning, always below 0.4, i.e. the sequence sees deceleration applied. The takeaway here is that acceleration appears to be doing the right thing: slow motion is decelerated, and while there may or may not be some tweaking to do, there is no smoking gun.


Super slow motion is decelerated.

The slow movement shows that the factor is almost always 0.4, aside from a few extremely slow events. This indicates that for the slow speed, the pointer movement maps exactly to the finger movement save for our constant deceleration. As above, there is no indicator that we're doing something seriously wrong.


Slow motion is largely used as-is with a few decelerations.

The medium movement gets interesting. If we look at the factor applied, it changes wildly with the velocity across the whole range between 0.4 and the maximum 0.8. There is a short spike at the beginning where it maxes out but the rest is accelerated on-demand, i.e. different finger speeds will produce different acceleration. This shows the crux of what a lot of users have been complaining about: what is a fairly slow motion still results in an accelerated pointer. And because the acceleration changes with the speed, the pointer behaviour is unpredictable.


In medium-speed motion acceleration changes with the speed and even maxes out.

The medium-fast movement shows almost the whole movement maxing out on the maximum acceleration factor, i.e. the pointer moves at twice the speed of the finger. This is a problem because this is roughly the speed you'd use to hit a "mentally preselected" target, i.e. you know exactly where the pointer should end up and you're just intuitively moving it there. If the pointer moves twice as fast, you're going to overshoot, and indeed that's what I observed during the touchpad tap analysis user study.


Medium-fast motion easily maxes out on acceleration.

The fast movement shows basically the same thing, almost the whole sequence maxes out on the acceleration factor so the pointer will move twice as far as intuitively guessed.


Fast motion maxes out acceleration.

So does the flick movement, but in that case we want it to go as far as possible. Note that the speeds between fast and flick are virtually identical here; I'm not sure if that's me just being equally fast or the touchpad not quite picking up on the short motion.


Flick motion also maxes out acceleration.

Either way, the takeaway is simple: we accelerate too soon, and the window of adaptive acceleration is so narrow that it's very easy to top out. The simplest fix to get most touchpad movements working well is to increase the current threshold at which acceleration applies. Beyond that it's a bit harder to quantify, but a good idea seems to be to stretch out the acceleration function so that the factor changes at a slower rate as the velocity increases. And up the maximum acceleration factor so we don't top out and keep going as the finger goes faster. This would be the intuitive expectation since it resembles physics (more or less).

There's a set of patches on the list now that does exactly that. So let's see what the result of this is. Note ahead: I also switched everything to mm/s, which causes some numbers to shift slightly.

The super-slow motion is largely unchanged though the velocity scale changes quite a bit. Part of that is that the new code has a different unit which, on my T440s, isn't exactly 1000dpi. So the numbers shift and the result of that is that deceleration applies a bit more often than before.


Super-slow motion largely remains the same.

The slow motions are largely unchanged but more deceleration is now applied. Tbh, I'm not sure if that's an artefact of the evemu replay, the new accel code or the result of the not-quite-1000dpi of my touchpad.


Slow motion largely remains the same.

The medium motion is the first interesting one because that's where we had the first observable issues. In the new code, the motion is almost entirely unaccelerated, i.e. the pointer will move as the finger does. Success!


Medium-speed motion now matches the finger speed.

The same is true of the medium-fast motion. In the recording the first few events were past the new thresholds so some acceleration is applied; the rest of the motion matches finger motion.


Medium-fast motion now matches the finger speed except at the beginning where some acceleration was applied.

The fast and flick motion are largely identical in having the acceleration factor applied to almost the whole motion, but the big change is that the factor now goes up to 2.3 for the fast motion and 2.5 for the flick motion, i.e. both movements would go a lot faster than before. In the graphics below you still see the blue area marked as "previously max acceleration factor" though it does not actually max out in either recording now.


Fast motion increases acceleration as speed increases.

Flick motion increases acceleration as speed increases.

In summary, what this means is that the new code accelerates later but when it does accelerate, it goes faster. I tested this on a T440s, a T450p and an Asus VivoBook with an Elantech touchpad (which is almost unusable with current libinput). They don't quite feel the same yet and I'm not happy with the actual acceleration, but for 90% of 'normal' movements the touchpad now behaves very well. So at least we go from "this is terrible" to "this needs tweaking". I'll go check if there's any champagne left.