The hledger project's issue tracker is on github. It contains:

  • BUG issues - failures in some part of the hledger project (the main hledger packages, docs, website..)
  • WISH issues - feature proposals, enhancement requests
  • uncategorised issues - we don't know what these are yet
  • pull requests - proposed changes to code and docs

Here are some shortcut urls:

In 2017 we experimented with Github projects, in 2018 with Github milestones. Long ago we collected some wishlist items in a trello board.

Open issues

By topic and type.


In the github issue tracker we use various labels, some of which appear above, to categorise things like:

  • whether an issue is a bug (red) or a wish (pink)
  • which subcomponents (tools, commands, input/output formats) are involved (light blue)
  • which platforms are involved (light purple)
  • whether a bounty has been offered (bright green)
  • what is blocking an issue (grey)
  • bug impact and severity (light orange and light pink, see below)
  • miscellaneous things like security (bright red), regressions (black), release needed (orange)
  • etc.

These labels also get used as prefixes in commit messages, in issue/PR titles, etc.


Some loose conventions:


You might see some old experiments in estimate tracking, where issue titles have a suffix noting estimated and spent time. Basic format: [ESTIMATEDTOTALTASKTIME|TIMESPENTSOFAR]. Examples:

  • [2] two hours estimated, no time spent
  • [..] half an hour estimated (a dot is ~a quarter hour, as in timedot format)
  • [1d] one day estimated (a day is ~4 hours)
  • [1w] one week estimated (a week is ~5 days or ~20 hours)
  • [3|2] three hours estimated, about two hours spent so far
  • [1|1w|2d] first estimate one hour, second estimate one week, about two days spent so far

Estimates are always for the total time cost (not time remaining). Estimates are not usually changed; instead, a new estimate is added. Numbers are very approximate, but better than nothing.
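The notation above can be parsed mechanically. Here is a sketch in Python, under the unit assumptions stated above (a dot is 0.25h, a day is 4h, a week is 20h, a bare number is hours); the function names are illustrative, not part of any hledger tooling:

```python
import re

# Assumed unit sizes, per the conventions above.
UNITS = {"d": 4.0, "w": 20.0}

def parse_field(field):
    """Convert one field like '2', '1d', '1w' or '..' to hours."""
    if re.fullmatch(r"\.+", field):          # dots: a quarter hour each
        return 0.25 * len(field)
    m = re.fullmatch(r"(\d+(?:\.\d+)?)([dw]?)", field)
    if not m:
        raise ValueError(f"unrecognised estimate field: {field!r}")
    return float(m.group(1)) * UNITS.get(m.group(2), 1.0)

def parse_estimate(title):
    """Extract the trailing [EST|...|SPENT] suffix from an issue title.
    Returns (list of estimates in hours, hours spent or None)."""
    m = re.search(r"\[([^\]]+)\]\s*$", title)
    if not m:
        return [], None
    fields = [parse_field(f) for f in m.group(1).split("|")]
    if len(fields) == 1:                     # a lone field is an estimate
        return fields, None
    return fields[:-1], fields[-1]           # last field is time spent
```

For example, `parse_estimate("fix foo [1|1w|2d]")` yields estimates of 1 and 20 hours with 8 hours spent.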

Prioritising

There is an interesting method of ranking issues by a single "User Pain" metric. What adaptation of this might be useful for the hledger project ?

Here's a simplified version, currently being tested in the hledger issue tracker:

Two labels can be applied to bug reports, each with levels from 1 to 5:


Who may be impacted by this bug ?

  • impact1: Affects almost no one.
  • impact2: Affects packagers or developers.
  • impact3: Affects just a few users.
  • impact4: Affects more than a few users.
  • impact5: Affects most or all users.


To people impacted, how serious is this bug ?

  • severity1: Cleanliness/consistency/developer bug. Only perfectionists care.
  • severity2: Minor to moderate usability/doc bug, reasonably easy to avoid or tolerate.
  • severity3: New user experience or installability bug. A potential user could fail to get started.
  • severity4: Major usability/doc bug, crash, or any regression.
  • severity5: Any loss of user's data, privacy, security, or trust.

User Pain

The bug's User Pain score is Impact * Severity / 25, ranging from 0.04 to 1.
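The formula is simple enough to compute by hand, but a one-liner makes the range explicit (the function name is illustrative only):

```python
def user_pain(impact, severity):
    """User Pain score for a bug: Impact * Severity / 25."""
    assert 1 <= impact <= 5 and 1 <= severity <= 5
    return impact * severity / 25

# A data-loss bug affecting most users scores the maximum:
user_pain(5, 5)   # → 1.0
# A cosmetic bug affecting almost no one scores the minimum:
user_pain(1, 1)   # → 0.04
```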

Then, practices like these are possible:

  • All open bugs can be listed in order of User Pain (AKA priority).
  • Developers can check the Pain List daily and fix the highest pain bugs on the list.
  • The team can set easy-to-understand quality bars. For example: “In order to release, we must have no open bugs with a pain score above 0.6.”
  • If there are no bugs left above the current quality bar, they can work on feature work.
  • If a bug is found that will take more than a week to fix, it can be flagged as a ‘killer’ bug, for special treatment.

Reducing bugs and regressions

Some ideas in 2024-01:

  • Maintain ratio of user-visible bugfixes to new features, eg above 10:1 (a new master merge test, human checked)
  • A release cycle with no new features
  • Alternate bugfix and feature release cycles
  • Set bug count targets
  • Label all issues for impact/severity/user pain; set max user pain targets
  • Gate releases on user pain targets or other bug metrics
  • Document and follow more disciplined bug triage/fixing methods
  • Identify every new bug early as a regression/non-regression
  • Prioritise rapid fixing and releasing for regressions / new bugs
  • Cheaper, more frequent bugfix releases
  • More intentional systematic tests ? Analyse for weak spots ?
  • Property tests ?
  • Internal cleanup, architectural improvements, more type safety ?
  • Custom issue dashboards (HTMX on ?)
  • Public list / QA dashboard
  • Grow a QA team

Older ideas

  • Custodians for particular components/topics ("If you are interested in helping with a particular component for a while, please add yourself as a custodian in the Open Issues table. A custodian's job is to help manage the issues, rally the troops, and drive the open issue count towards zero. The more custodians, the better! By dividing up the work this way, we can scale and make forward progress.")