Product Backlog

This document contains the product backlog for VisualOre.

Product Vision

Here we define what we consider to be the vision for the product: what guides us when we think about the product, and what can and cannot go into the product backlog.

Vision Statement

The vision for ORE Studio is to build on top of ORE with the aim of providing:

  • persistent database storage for all of its inputs and outputs;
  • a graphical user interface for both data generation and data exploration;
  • the ability to configure and orchestrate ORE execution.

Vision Quotes

People think focus means saying yes to the thing you've got to focus on. But that's not what it means at all. It means saying no to the hundred other good ideas that there are. You have to pick carefully. I'm actually as proud of the things we haven't done as the things I have done. Innovation is saying no to 1,000 things. – Steve Jobs

Release Checklist

Steps to create a new release.

Close previous sprint

To be done on the last Sunday of the sprint.

  1. Make a copy of the current sprint backlog and name it current sprint + 1.
  2. Move all untouched stories into the product backlog.
  3. Close the current sprint: close all open tasks, delete tasks we did not work on, update clocks.
  4. Push the commit and wait for the builds. This ensures that any failures can be fixed before the release tag.
  5. Tag the commit and sign it with the key.
  6. Push the tag. New builds can be generated overnight.
  7. When the tag build is finished, unpack and copy the binaries into the release, then announce on Twitter and LinkedIn.

Open new sprint

Note: use Claude to do this via the New Sprint Skill.

  1. Open the new sprint, updating the CMake version, README, GitHub (packages), and vcpkg.json. Build all and run the tests. This should all be in one commit.
  2. Create a demo and publish it on YouTube.
  3. Write up the release notes and publish them on GitHub.

Stories

Near

Stories we may get to in the next two or three sprints.

Remove uses of raw libpq   code

We have added support for the missing features in sqlgen so that we can replace uses of raw libpq. The PRs have been merged into sqlgen mainline; we just need a release. Do a search for all uses of raw libpq and replace them with the appropriate sqlgen calls.

We need to monitor sqlgen releases and then vcpkg updates.

Update: we now have a vcpkg port with the latest version.

Merged PRs:

Fine-tune grants for each service account   code

Ideally each service should only be granted access to its own tables. However, we may also need access to some common tables, such as change reasons.

Improvements to detail dialogs   code

We've implemented most of these but check:

  • For most dialogs, pressing escape should be the same as closing them.
  • We should also add a "cancel" button to dialogs; it makes them feel a bit more "normal".
  • New dialogs should use toolbars instead of buttons at the bottom.

No samples in sessions   code

For some reason we are no longer sampling for sessions. Investigate why.

Check change reasons on delete   code

It seems we do not record change reasons on entity deletes. We need to update all entities.

Add party level currency and country restrictions   code

At present a party can use all currencies and all countries. In reality we normally want to restrict this at the party level. We won't be able to do a full implementation of this right now but it is a good idea to put in the framework so that we start to get a feel for how to work with RLS.

This pull request significantly enhances the reference data management system by introducing party-specific visibility for currencies and countries. It establishes a robust framework for granular control over which reference data elements are accessible to different parties, addressing a limitation where all such data was previously tenant-wide. The changes span the entire application stack, from database schema and generated C++ code to service-level filtering, ensuring a comprehensive and well-integrated solution.

Highlights:

  • New Party-Visibility Junction Tables: Introduced ores_refdata_party_currencies_tbl and ores_refdata_party_countries_tbl to control which currencies and countries are visible to specific parties, moving from tenant-wide visibility to per-party control.
  • Full Stack Codegen: Generated the complete domain, repository, service, and test layers for both new entities using codegen models, ensuring consistency and reducing manual effort.
  • Service Integration: Integrated filtering logic into the existing currency_service and country_service to leverage the new junction tables, adding list_currencies_for_party and list_countries_for_party methods.
  • Bitemporal Support: Implemented bitemporal functionality for the new junction tables, including valid_from and valid_to columns, versioning, and soft-delete rules.
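The bitemporal visibility rule above can be sketched as a small predicate. This is a minimal illustration only, assuming the usual half-open [valid_from, valid_to) convention with an open-ended valid_to for current rows; the type and field names are stand-ins, not the actual generated code:

```cpp
#include <chrono>
#include <optional>

// Hypothetical sketch of the bitemporal rule: a row is visible "as of" time
// t when valid_from <= t < valid_to, with an open-ended valid_to modelled as
// std::nullopt. Names are illustrative, not the real codegen output.
using time_point = std::chrono::system_clock::time_point;

struct bitemporal_row {
    time_point valid_from;
    std::optional<time_point> valid_to; // nullopt => still current
};

bool visible_as_of(const bitemporal_row& row, time_point as_of) {
    if (as_of < row.valid_from) return false;
    return !row.valid_to || as_of < *row.valid_to;
}
```

The same predicate shape applies to any of the bitemporal junction tables, whether evaluated in C++ or pushed down into SQL.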

Add auxiliary types to their entities   code

The toolbars of each entity should have icons for the auxiliary types they use. Also we may need to make the menu options a bit more obvious (e.g. "Purpose Types"?).

Analyse caching at dialog level   code

We seem to have a lot of caches lying around at the dialog level. It makes more sense to have a common core (i.e. non-Qt specific) data-structure that caches different types of data which is shared by the different dialogs. This would also be reused by different clients such as Wt, shell, etc.

Instead of locking we should use immutable data structures. We should also take into account notifications coming in for data changes. It should enable you to notify all clients, load the data in the background in a way that does not affect dialogs.

We could create a workspace component which has all of the data needed, stored using immer containers:

  • it can have a comms aspect, so that the UI does not need to worry about any of that. You just request say currencies page N, workspace then deals with that. It can just give you the current version. If none, it will load via comms. It also knows about subscriptions so it will tell you about pending subscriptions for a collection you are interested in. As you load more pages, we load these into immer containers ordered by page.
  • once we implement workspaces, we should then make all references to foreign keys "clickable". For example, if you are in a business centre and it has a country we should be able to click on it and open the country. At present we can't do this because the data is kept at the main dialog level.
  • ideally we want a way to cache data in Qt format. We don't need to save this data to the local database. However, we don't want to make the ores.workspace library depend on Qt. Maybe we just need some kind of "extensions" on each frontend (e.g. Wt etc), sort of a qt.workspace which contains the original workspace plus any Qt specific representations. Or maybe use composition.
  • we should add a UI to visualise the workspace, or at least be able to see size usage etc. Baobab style map.

Notes:

  • actually this is not "workspace", it's ores.caching.
  • caching seems more like a component-level phenomenon rather than something warranting a dedicated component for the entire product. A component like ores.caching would have to depend on every component anyway, and it would not provide many valuable services. It makes more sense to have ores.refdata/caching and so on, with possibly a front-end service which deals with cache misses/hits and can persist data locally into, say, SQLite. It could also handle looking up data locally first, then doing a remote get, etc. All of this is transparent from a client perspective. We could also have a feature flag to enable/disable caching (in memory, local storage, etc.).

Analysis with Gemini:

## User Story: Implement a High-Performance Reactive Caching Service for Reference Data

- **As a** system architect,
- **I want** a centralized Reference Data Service that manages data via a
  multi-tiered caching strategy (L1 Memory, L2 Disk) and incremental updates,
- **So that** my application can access large volumes of lookup data with
  near-zero latency, maintain thread safety without lock contention, and
  minimize network overhead.

---

### Acceptance Criteria

#### 1. Multi-Tiered "Smart" Loading Logic

- **Cold Start:** On initialization, the service must check the **L2 (SQLite)**
  cache for an existing "Page" snapshot.
- **As-Of Load:** If no L2 cache exists, the service must perform a full "As-Of"
  load from the remote connection for a specific timestamp.
- **Since (Delta) Load:** If an L2 snapshot exists, the service must only
  request "Since" updates (changes/deltas) from the remote connection based on
  the last known timestamp in the L2 store.
- **L2 Persistence:** All data fetched via "As-Of" or "Since" must be mapped
  back to its data representation and persisted to **SQLite** to facilitate
  future warm starts.

#### 2. Immutable L1 Cache (immer)

- **Thread Safety:** The L1 (in-memory) cache must use **immer** persistent data
  structures (`immer::map`) to provide lock-free read access for concurrent
  threads.
- **Structural Sharing:** Updates to the cache via "Since" loads must use
  immer’s **transient/persistent** pattern to update only changed entities while
  sharing memory for unchanged data.
- **Atomic Swaps:** The service must provide an atomic mechanism to swap the
  "current" version of the page, ensuring readers always see a consistent
  snapshot.

#### 3. Reactive Subscription & Staleness

- **Notifications:** The service must subscribe to data change notifications.
- **Stale State Management:** Upon receiving a notification, the service must
  mark the relevant collection as stale and trigger an automated "Since" load to
  synchronize the L1 and L2 tiers.

#### 4. Data Mapping

- **Bi-Directional Mapping:** The service must utilize mappers to translate
  between raw data representations (for SQLite storage) and domain entities (for
  L1 memory storage).

---

### Technical Notes

- **L1:** `immer::map<std::string, std::shared_ptr<const Entity>>`.
- **L2:** SQLite table indexed by `page_id` and `timestamp` storing serialized
  blobs.
- **Performance Goal:** Reading from L1 should require no mutex locking, relying
  on the immutability of the underlying immer structure.
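As a rough illustration of the lock-free read / atomic swap requirement above, here is a minimal sketch. std::map stands in for immer::map purely to keep the example self-contained (immer would add structural sharing between versions), and all names are assumptions, not the actual service API:

```cpp
#include <atomic>
#include <map>
#include <memory>
#include <string>

// Sketch of the "atomic swap" requirement: readers grab an immutable
// snapshot without locking; writers build the next version and publish it
// atomically. std::map is a stand-in for immer::map.
class l1_cache {
public:
    using page = std::map<std::string, std::string>;

    // Lock-free for readers: each reader sees a consistent snapshot.
    std::shared_ptr<const page> snapshot() const {
        return std::atomic_load(&current_);
    }

    // A "Since" load produces the next version and swaps it in atomically;
    // readers holding the old snapshot are unaffected.
    void publish(page next) {
        std::atomic_store(&current_,
            std::shared_ptr<const page>(std::make_shared<page>(std::move(next))));
    }

private:
    std::shared_ptr<const page> current_ = std::make_shared<page>();
};
```

With immer, publish would typically apply the delta via a transient and swap in the resulting persistent map, so unchanged entries share memory with the previous version.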

Links:

  • GH: immer: "immer is a library of persistent and immutable data structures written in C++. These enable whole new kinds of architectures for interactive and concurrent programs of striking simplicity, correctness, and performance."
  • Cache in-memory in ASP.NET Core: get ideas for the caching interfaces and requirements.

Merged stories:

Add workspace as a container

Core needs to have a container for all of the data stored within a context.

Actually, according to Data-Oriented Design principles, we may not need it. This may be a UI concept but not a code concept.

Notify clients of database connection errors   code

If the server fails to connect to the database, we should show a dialog box or some other UI to explain this to the user. Elsewhere we have:

Database Connection Error

Configuration Problem: It was not possible to establish a connection to the database. Please contact the system administrator immediately to resolve this problem.

We need something similar. Big and red so user sees it.

Create named profiles   code

Users should be able to move windows around the screen and get them in the right positions and shapes (including detached windows etc) and then save them to the database as a named profile. Then, after login:

  • if there is only one named profile against the user, just load it.
  • if there is more than one, give the user a selector.

Note that the profile may be created against a specific party. Needs some thinking.

Portfolio types   code

At present a portfolio can be virtual or not. We should really generalise this into a portfolio type.

Add missing history dialogs   code

We have an inventory of missing features across entities. Check to see if it covers history as well.

Add virtual portfolio tree support   code

A virtual portfolio is a reporting overlay that aggregates trades from real portfolios and books without owning them directly. Because a book has a single parent_portfolio_id, virtual portfolio membership requires a separate junction table. This story adds the full stack for virtual portfolios:

  • Acceptance Criteria
    • New junction table ores_refdata_virtual_portfolio_books mapping virtual portfolios to books, with the full SQL machinery (triggers, RLS, notify).
    • Domain type, repository, service, and protocol messages for the junction.
    • Virtual portfolios (is_virtual = 1) are shown in the PortfolioBookTreeMdiWindow as a separate tree section below the real tree, using briefcase_20_regular icons.
    • Selecting a virtual portfolio node or any of its book leaves filters the trade table using the junction membership list.
    • Stale indicator responds to changes on the virtual portfolio membership.

Single tenant setup creates tenant   code

In single tenant scenario we should just use the system tenant.

Error with new tenant   code

When trying to setup a new tenant, with an already existing tenant we get:

Publishing catalogue 'Crypto'… ERROR: Query execution failed: ERROR: Atomic bundle publication failed at dataset assets.crypto_icons: duplicate key value violates unique constraint "ores_assets_images_tbl_pkey" CONTEXT: PL/pgSQL function ores_dq_bundles_publish_fn(text,uuid,text,text,boolean,jsonb) line 239 at RAISE

Logs:

2026-02-23 23:55:42.970715 [DEBUG] [ores.dq.service.publication_service] publication_service initialized
2026-02-23 23:55:42.970747 [INFO] [ores.dq.service.publication_service] Publishing bundle: crypto with mode: upsert, atomic: true, published_by: tenant_admin, params_json: (empty)
2026-02-23 23:55:42.970813 [DEBUG] [ores.dq.service.publication_service] Publishing bundle crypto. SQL: SELECT * FROM ores_dq_bundles_publish_fn('crypto', 'c5d8e5f6-7e67-4c18-94ba-55e8829a9d57'::uuid, 'upsert', 'tenant_admin', true, '{}'::jsonb)
2026-02-23 23:55:42.986027 [DEBUG] [ores.dq.service.publication_service] Set tenant context to: c5d8e5f6-7e67-4c18-94ba-55e8829a9d57
2026-02-23 23:55:43.030557 [ERROR] [ores.dq.service.publication_service] Query failed: ERROR:  Atomic bundle publication failed at dataset assets.crypto_icons: duplicate key value violates unique constraint "ores_assets_images_tbl_pkey"
CONTEXT:  PL/pgSQL function ores_dq_bundles_publish_fn(text,uuid,text,text,boolean,jsonb) line 239 at RAISE
2026-02-23 23:55:43.070217 [ERROR] [ores.dq.service.publication_service] Bundle publication failed: Query execution failed: ERROR:  Atomic bundle publication failed at dataset assets.crypto_icons: duplicate key value violates unique constraint "ores_assets_images_tbl_pkey"

Add sub-menus for shell   code

We have already done the planning for this work.

Notes:

  • log full version including timestamp.
  • add info command which is the same as CLI.
  • move paging commands into a paging menu, e.g. page next instead of next?

Problems with session charts   code

  • session duration is not updating even on refresh.
  • no sample data even after 10 mins.
  • sessions do not seem to take tenant into account.

Add high-watermark support   code

Whenever we request entities, we should also return the watermark for that entity. This is the highest timestamp of all entities within a set (across all pages). This is so we can redo the exact same query with an "as of" and get exactly the same result set. Do some analysis with Claude on how best to do this. Maybe we should always query as of so that we know what the watermark is up front.

Notes:

  • once we have watermark, we need to add a staleness indicator to all dialogs. This will require thresholds. If the data is older than X the staleness indicator goes yellow, older than Y it goes red.
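The staleness classification described above can be sketched as a small function. The thresholds and the three-state result are taken from the note; the names and values are placeholders, since the real thresholds would be configurable:

```cpp
#include <chrono>

// Sketch of the staleness indicator: given the data's watermark and the two
// thresholds X (yellow) and Y (red), classify the data's age. Names and
// defaults are illustrative assumptions.
enum class staleness { fresh, yellow, red };

staleness classify(std::chrono::system_clock::time_point watermark,
                   std::chrono::system_clock::time_point now,
                   std::chrono::seconds yellow_after,
                   std::chrono::seconds red_after) {
    const auto age = now - watermark;
    if (age >= red_after) return staleness::red;
    if (age >= yellow_after) return staleness::yellow;
    return staleness::fresh;
}
```

Each dialog would call this with the watermark returned alongside its result set and render the corresponding indicator colour.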

Redesign Qt ImageCache invalidation strategy   code

Note: this story should only be looked at after we move towards workspaces.

The current ImageCache invalidation approach is brittle and error-prone. When datasets are published, the cache reload is triggered by pattern-matching dataset codes (e.g., checking for "flag", "icon", "currenc", "countr" in the code string). This is fragile because:

  • New dataset types require updating the pattern matching logic
  • The relationship between datasets and cached data is implicit
  • Cache invalidation relies on string matching rather than proper metadata

Proposed improvements:

  • Use artefact_type metadata instead of code string matching to determine if cache invalidation is needed
  • Consider a more general cache invalidation framework that can be extended to other caches (ChangeReasonCache, etc.)
  • Investigate using server-side notifications to trigger cache invalidation (similar to the notification system already in place)
  • Document the caching strategy and invalidation rules

Acceptance criteria:

  • Cache invalidation is based on structured metadata, not string patterns
  • Adding new image-related datasets doesn't require code changes
  • Cache invalidation logic is documented and maintainable
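One shape the metadata-driven approach could take is a subscription registry keyed by artefact type, so publication events fan out to interested caches without any string matching. This is a sketch under assumptions; the type names and registry API are illustrative, not the existing codebase:

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

// Sketch: publications carry an artefact type, and a registry maps each type
// to the invalidation callbacks that care about it. Adding a new
// image-related dataset then only requires a subscription at cache setup.
class invalidation_registry {
public:
    void subscribe(const std::string& artefact_type,
                   std::function<void()> invalidate) {
        subscribers_[artefact_type].push_back(std::move(invalidate));
    }

    // Called when a dataset publication event arrives; returns how many
    // caches were invalidated.
    std::size_t on_published(const std::string& artefact_type) {
        auto it = subscribers_.find(artefact_type);
        if (it == subscribers_.end()) return 0;
        for (auto& cb : it->second) cb();
        return it->second.size();
    }

private:
    std::unordered_map<std::string,
        std::vector<std::function<void()>>> subscribers_;
};
```

ImageCache, ChangeReasonCache, etc. would each subscribe once for the artefact types they depend on, replacing the "flag"/"icon" substring checks.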

Replace parent combo boxes with server-side type-ahead search   code

The entity detail dialogs (party, counterparty) load all entities into a combo box for parent selection using a single request with offset=0, limit=1000. If the system has more than 1,000 entities, the dropdown silently truncates results, meaning users cannot select the correct parent and the hierarchy tree is incomplete. The same pattern exists for country image maps and lookup fetchers.

The fix is to replace the plain QComboBox with a searchable type-ahead widget backed by QCompleter that issues server-side search requests as the user types. This is the scalable solution and avoids loading the full dataset into memory.

Affected locations:

  • CounterpartyDetailOperations.cpp:117 - load_all_entities()
  • PartyDetailOperations.cpp:118 - load_all_entities()
  • EntityDetailDialog.cpp:346 - loadCountryImageMap()
  • LookupFetcher.cpp:80,145 - lookup reference data loading

Note: the MdiWindow list views already have proper server-side pagination via ClientCounterpartyModel / ClientPartyModel; this issue only affects detail dialog combo boxes and related data loading.

  • Introduce a reusable SearchableEntityComboBox widget with QCompleter
  • Add server-side search endpoints for parties and counterparties
  • Replace load_all_entities() calls with on-demand search
  • Consider the same approach for country and lookup fetcher if volumes warrant it

Filter counterparty dataset size for evaluation tenants   code

Evaluation tenants currently import too many counterparties from GLEIF data. Add options to limit the dataset to a configurable percentage or count, providing enough data to evaluate the system without overwhelming the tenant.

Improve tenant deletion safety and cascading   code

Tenant deletion needs proper safeguards and cascading behaviour.

  • Deleting any tenant should show an impact summary before proceeding.
  • Deleting a tenant should trigger cascading deletion of associated parties.
  • Deleting the root tenant should return an error (prevent accidental deletion).

Allow super admin to reset tenant admin accounts   code

The super admin currently cannot reset a tenant admin account's password or credentials. Add this capability to the account management UI.

Add tenant type management dialog   code

There is currently no UI to view or manage tenant types. Add a dialog accessible from the administration menu to list, view, and edit tenant type reference data.

Add account permissions visibility and tabbed dialog   code

Users have no way to see what permissions an account has. The accounts dialog also needs a tabbed layout to display all available information.

  • Add a permissions tab showing effective permissions for the account.
  • Use tabbed dialog layout (General, Permissions, Sessions, etc.).
  • Clicking on a user should show profile details: admin sees full details, others see only key fields (email, etc.).

Add shell commands for parties and counterparties   code

The shell currently has no commands for party or counterparty CRUD. Add list, add, history, and delete commands following the existing shell entity pattern.

Related story: "Add sub-menus for shell".

Suppress staging dataset publication warnings   code

Staging datasets (e.g. gleif.lei_entities.small, gleif.lei_relationships.small) show confusing "No populate_function for artefact_type" warnings during bundle publication. These datasets are dependencies that load into staging tables and don't publish to production directly — this is working as designed. Suppress or reclassify these log messages to avoid confusion.

Allow creating new connections from login dialog   code

The login dialog should have a button or link to create a new server connection, so users don't have to navigate to the connections dialog separately.

Add validation and ISO compliance flags for reference data   code

Several validation gaps and missing metadata in reference data entities.

  • It is possible to create a country without a name; likely a currency too. Add required field validation.
  • Fictional countries should have X prefixes (following ISO 3166 conventions).
  • Add an "is ISO compliant" flag (or similar) for currencies and countries that are part of the ISO standard vs. custom/internal entries.

Standardise Qt window behaviour and UI consistency   code

Several inconsistencies in window behaviour across the application.

  • Remove maximise button from MDI child windows.
  • Standardise window type: main entity windows should behave the same as history/detail dialogs (consistent minimise/maximise button behaviour).
  • Generate button should only appear in "new" mode, not "edit".
  • Add save button to currency list (needed for generation workflow).
  • Add configuration option to disable quit confirmation dialog.
  • Allow password visibility toggle in password input fields.

Improve session management and server-side housekeeping   code

Session tracking and lifecycle management needs improvement.

  • Session should record whether telemetry is enabled or not.
  • Send and receive byte counters are empty in sessions display.
  • Server should periodically housekeep sessions: mark disconnected sessions as orphan, mark sessions as finished when connection drops.
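The housekeeping pass in the last bullet could be as simple as a periodic sweep over sessions whose last heartbeat is older than a timeout. A minimal sketch, assuming hypothetical field names and timeout (not the actual session schema):

```cpp
#include <chrono>
#include <cstddef>
#include <vector>

// Sketch of the periodic housekeeping pass: sessions that have not been
// heard from within the timeout are marked as disconnected and orphaned.
struct session {
    std::chrono::system_clock::time_point last_seen;
    bool connected = true;
    bool orphaned = false;
};

std::size_t housekeep(std::vector<session>& sessions,
                      std::chrono::system_clock::time_point now,
                      std::chrono::seconds timeout) {
    std::size_t marked = 0;
    for (auto& s : sessions) {
        if (s.connected && now - s.last_seen > timeout) {
            s.connected = false; // connection considered dropped
            s.orphaned = true;
            ++marked;
        }
    }
    return marked;
}
```

The real implementation would also persist the state change and mark sessions as finished when the connection drop is confirmed.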

Clicking on connection in status bar should provide info   code

Or maybe as a tooltip? Whatever is idiomatic. Things to show:

  • when never connected: nothing.
  • when connected: server, bytes sent, received, status of connection.
  • when disconnected: retries, disconnected since.

Reduce duplicated Qt code and clean up technical debt   code

Several areas of duplicated code and legacy artefacts that should be cleaned up.

  • Qt network error message processing is duplicated across dialogs; extract into shared utility.
  • Merge common functionality between entities in ores.qt.
  • Raw SQL in image repository should be replaced with a database function.
  • Remove legacy password salt field from accounts.
  • Add ores.analyser to system model documentation.
  • Bootstrap mode flag change does not generate a new version; fix versioning.

Expand repository test coverage and infrastructure   code

Test coverage gaps and infrastructure improvements needed.

  • No repository tests for change reason entity. Audit all entities for coverage.
  • No generators for roles and permissions; add them.
  • Test suites should log version at startup. Info script needs to grep log for version across all suites.

Add version comparison and revert improvements   code

Improvements to entity version history and comparison features.

  • In the history diff dialog, add a "from version" combo box that lets users compare against any previous version (not just the immediately preceding one).
  • Revert-to-version should be a server-side operation taking current version and target version parameters.

Raise events for account profile updates   code

Updating email from "My Account" does not raise an event. Account profile changes should emit events through the event bus for consistency with other entity modifications.

Add batch editing with staging workflow   analysis

Investigate a workflow where users can "locally modify" multiple entities and then save the batch in a single operation. Similar to git's staging concept. Need to determine appropriate terminology and iconography for this feature. Consider: what do we call the staged state? What icon represents "stage for save"?

Evaluate change reasons for role permission tables   analysis

Should role and permission junction tables have change reason support? Analyse whether the audit trail benefit justifies the additional complexity for these administrative tables.

Add HBAC support for books and portfolios   code

Gemini:

In a professional trading system, you are describing Hierarchical Role-Based Access Control (HRBAC). To make this idiomatic and scalable, you need to decouple the "Who" (Users/Groups) from the "What" (Portfolios/Books) using an Access Control List (ACL) that supports Inheritance.

Here is how to structure this to avoid the nightmare of manual user management.

  1. The Domain Model: Groups and Roles

Instead of assigning a user directly to a book, you introduce a User Group (e.g., "North America FX Traders").

  • User Group: A collection of Users.
  • Role: A collection of Permissions (e.g., View, Trade, CloseBook).
  • Access Grant: The link that says Group A has Role X on Portfolio Y.

  2. Permission Inheritance (The "Cascading" Rule)
This is the most critical part of your requirement: "Permissioned to a portfolio = permissioned to all children."

The Logic

When a user attempts to access a Book, the system should check for a permission record at:

  • The Book itself.
  • If not found, its parent Portfolio.
  • If not found, its parent Business Unit, and so on up to the Party.
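The cascading lookup above amounts to a walk up the parent chain until a grant is found. A minimal stand-in sketch (strings instead of UUIDs, additive permissions only, names purely illustrative):

```cpp
#include <optional>
#include <string>
#include <unordered_map>

// Sketch of the cascading permission check: look for a grant on the node
// itself, then walk parent links (Book -> Portfolio -> Business Unit -> ...)
// up to the Party root.
struct hierarchy {
    // node -> parent node
    std::unordered_map<std::string, std::string> parent;
    // (group, node) grants, flattened to "group@node" keys for brevity
    std::unordered_map<std::string, std::string> grants;
};

std::optional<std::string> effective_role(const hierarchy& h,
                                          const std::string& group,
                                          std::string node) {
    for (;;) {
        auto g = h.grants.find(group + "@" + node);
        if (g != h.grants.end()) return g->second; // explicit or inherited
        auto p = h.parent.find(node);
        if (p == h.parent.end()) return std::nullopt; // reached the root
        node = p->second;
    }
}
```

A grant at the portfolio level is thus automatically visible from every book beneath it, which is the "Effective Permission" behaviour described below.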

The "Effective Permission" Calculation

In your Qt UI or API, you calculate the Effective Permission.

If I am in the "Senior Managers" group and that group is assigned View access at the Global Equities Portfolio level, I automatically see every book under it.

  • Override Rule: Usually, an explicit "Deny" at a lower level beats an "Allow" from a higher level (though most trading systems stick to additive permissions for simplicity).

  3. Idiomatic Table Structure

To support this in your Postgres schema (with your Staging/Master setup), you need a permission table that points to your entity hierarchy.

Column           Description
Subject_ID       The ID of the User Group (e.g., "FX_Desk_Group").
Object_Type      PARTY, PORTFOLIO, or BOOK.
Object_ID        The UUID of the specific entity.
Role_ID          The Role (e.g., READ_ONLY, FULL_TRADER).
Is_Inheritable   Boolean (usually defaults to True).

  4. UI Implementation: The "Access" Tab

In your Organization Manager Qt screen, when you select a Portfolio or Book in the tree, the detail pane should have an "Access Control" tab.

  • Inherited Permissions List: A read-only list showing groups that have access because of a parent node (e.g., "Group: RiskAdmin (Inherited from: The House)").
  • Explicit Permissions List: Where you add/remove groups specifically for this node.

  5. Security and "The House" Context

Since your system runs in the context of "The House" (the Tenant), you need a "God Role" (System Admin) that is implicitly permissioned at the Party (Root) level. This ensures you never get locked out of a branch of the tree.

The "Four-Eyes" for Permissions

Because permissions are sensitive, changes to this table should also go through your staging schema.

  • Maker: Proposes adding "Group B" to "Portfolio C".
  • Checker: Approves the grant.
  • Audit: The temporal table tracks exactly when a user gained access to a specific book—crucial for regulatory "Who saw what" inquiries.

Understanding Settlements and SSI   analysis

This is a very complicated topic. We should try to summarise it and identify the smallest subset we need in order to see the full trade lifecycle.

Links:

Add ISO 20022 Support   code

As per Gemini:

Summary Mapping for your C++ Logic

Internal Event ISO 20022 Message Action
TradeExecuted fxtr.014 Confirm terms with counterparty.
Send Funds pacs.009 Instruct the bank to move cash.
Check Progress pacs.002 Update UI "Settlement Status" to 'In Progress'.
Funds Confirmed camt.054 Update UI to 'Settled'.
EOD Reconcile camt.053 Verify internal ledger vs bank balance.
     

Pro-Tip for your Implementation: Since you're using C++, don't try to build every message at once:

  • Start with pacs.009 (Sending money).
  • Follow with pacs.002 (Handling the response).
  • Add camt.053 (Reconciling at night).
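The summary mapping above could be captured as a simple lookup that a C++ settlement module starts from. The event names are just the table's labels, not real types in the codebase:

```cpp
#include <optional>
#include <string>
#include <unordered_map>

// The internal-event -> ISO 20022 message mapping from the table above,
// as a lookup. Event names are illustrative labels.
std::optional<std::string> iso20022_message(const std::string& internal_event) {
    static const std::unordered_map<std::string, std::string> mapping{
        {"TradeExecuted",  "fxtr.014"},
        {"SendFunds",      "pacs.009"},
        {"CheckProgress",  "pacs.002"},
        {"FundsConfirmed", "camt.054"},
        {"EODReconcile",   "camt.053"},
    };
    auto it = mapping.find(internal_event);
    if (it == mapping.end()) return std::nullopt;
    return it->second;
}
```

Following the pro-tip, only the pacs.009 entry needs real message-building behind it at first; the others can return "not yet implemented" until their phase.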

ISO 20022 Asset Class Mapping

If you move beyond FX, you switch "prefixes." Here is how the ISO 20022 world is carved up:

Message Prefix   Asset Class / Domain    Example Use Case
fxtr             Foreign Exchange        Spot, Forwards, NDFs, FX Swaps.
sese             Securities Settlement   Settlement of Equities, Bonds, ETNs.
semt             Securities Management   Custody statements, holding reports.
setr             Securities Trade        Trade capture/subscription for Mutual Funds.
auth             Regulatory Reporting    OTC Derivatives (IRS, CDS), Transaction reporting.
pacs             Payments                The cash resulting from any of the above.

FPML vs ISO 20022

For Swaps and other Derivatives, the messaging is split between Trade Reporting and Lifecycle/Confirmation.

  • auth (Authorities): In the modern MX world, Interest Rate Swaps are largely reported using the auth series (specifically under EMIR or MiFIR regulations).
    • auth.030: Derived from the "Trade Reporting" business area, used for Reporting of OTC derivatives.
  • FpML (The "De Facto" standard): As we discussed earlier, while ISO 20022 has auth messages for regulation, FpML remains the industry standard for the actual bilateral confirmation of IRS trades. Most modern systems use FpML for the "contract" and then use pacs for the resulting cash flows.
Feature           FpML (Financial products Markup Language)                                  ISO 20022 (MX)
Primary Domain    Derivatives & OTC Trades (Swaps, Options, Forwards).                       Payments & Securities Settlement.
Focus             The economics of the trade (strike price, tenors, floating rate indexes).  The movement of value (sender, receiver, amount, intermediary banks).
Lifecycle Stage   Front-to-Middle Office: Execution, Affirmation, and Confirmation.          Back Office: Payment clearing, settlement, and reporting.
Complexity        Extremely deep. Can describe a 30-year complex interest rate swap.         Broad. Designed to move money across any border or system.

Extend login flow with party selection   code

After authentication, look up the user's associated parties from account_parties. If one party, auto-select. If multiple, return the party list to the client for selection. The client displays a party picker widget. After selection, compute the visible party set and bind to the session.

  • Acceptance Criteria
    • Login response includes available parties.
    • Party picker displayed when user has multiple parties.
    • Selected party stored in session and displayed in application window (title bar shows username @ party_name).
    • Party context immutable for session lifetime.
  • Design Notes
    • Protocol changes

      Extend login_response (0x2006) with two fields appended to the wire format:

      • available_parties: vector<{uuid id, string name}> — empty when 0 or 1 party (auto-selected); populated when ≥2 parties.
      • selected_party_id: uuid — non-nil only when the server auto-selected exactly one party; nil when the client must choose.

      Add two new messages:

      • select_party_request (0x2015): {uuid party_id}
      • select_party_response (0x2016): {bool success, string error_message}
    • Server changes (handle_login_request)

      The party resolution block (lines 488-520 of accounts_message_handler.cpp) currently always auto-selects parties.front(). Change to:

      • 0 parties → leave session unbound; no change.
      • 1 party → auto-select as now; set selected_party_id in response.
      • ≥2 parties → do NOT bind party_id yet; return available_parties list; leave session unbound until select_party_request arrives.

      Add handle_select_party_request: validate chosen party_id is in account's account_parties, bind session->party_id, recompute visible_party_ids, update both in-memory session and DB record.

    • Client changes
      • LoginResult gains available_parties and selected_party_id.
      • ClientManager gains current_party_id_, current_party_name_, and a selectParty(uuid) method.
      • LoginDialog::onLoginResult gains a new branch after the password-reset check: if available_parties.size() > 1, show PartyPickerDialog (modal, same pattern as ChangePasswordDialog). On reject → disconnect, re-enable form. On accept → call clientManager_->selectParty(id).
      • Main window title set to username @ party_name after login completes.
    • Session state machine
      Not connected
        → Connected
        → Authenticated  (party-unbound, only when ≥2 parties)
        → Party selected (fully operational)
      
      0 or 1 party: Authenticated → Party selected in one step (no picker shown).
      
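The 0/1/many branching described in the server changes can be sketched as follows; the types here (party, login_response, resolve_parties) are simplified, hypothetical stand-ins for the real session and wire-format types:

```cpp
#include <optional>
#include <string>
#include <vector>

// Hypothetical, simplified stand-ins for the real party and response types.
struct party { std::string id; std::string name; };

struct login_response {
    std::vector<party> available_parties;         // populated only when the client must choose
    std::optional<std::string> selected_party_id; // set only on auto-select
};

// Sketch of the resolution block: 0 parties leaves the session unbound,
// 1 party auto-selects, 2+ defers the choice to a select_party_request.
login_response resolve_parties(const std::vector<party>& parties) {
    login_response r;
    if (parties.empty())
        return r;                                 // session stays party-unbound
    if (parties.size() == 1) {
        r.selected_party_id = parties.front().id; // auto-select as today
        return r;
    }
    r.available_parties = parties;                // client shows the party picker
    return r;
}
```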

Add party role scheme entity   code

The following is the analysis for adding support for party role schemes.

Contains a code representing a related party role. This can be extended to provide custom roles.

Code Description

  • Code: Accountant
  • Description: Organization responsible for preparing the accounting for the trade.
  • Code: AllocationAgent
  • Description: The organization responsible for supplying the allocations for a trade to be allocated to multiple accounts/organizations.
  • Code: ArrangingBroker
  • Description: The organization that arranged the trade, i.e. brought together the counterparties. Synonyms/Alternatives: Inter-dealer broker, agent.
  • Code: Beneficiary
  • Description: Organization that suffers the economic benefit of the trade. The beneficiary may be distinct from the principal/counterparty - an example occurs when a hedge fund trades via a prime broker; in this case the principal is the prime broker, but the beneficiary is the hedge fund. This can be represented as a payer/receiver account in the name of the hedge fund, but it is also possible to add the party role of "Beneficiary" at the partyTradeInformation level.
  • Code: BookingParty
  • Description: The entity for which the organization supporting the trade's processing has booked/recorded the trade. This is used in non-reporting workflow situations in which the trade doesn't need to be reported but a firm still wants to specify their own side.
  • Code: Buyer
  • Description: Acquirer of the legal title to the financial instrument. In the case of an option, the buyer is the holder of the option. In the case of a swap or forward, the buyer will be determined by industry best practice. This does not refer to an investor or investment manager or other organization on what is typically called the "Buy side"; for that, see the "Client" role. Corresponds to "Buyer" as defined in certain regulations such as ESMA MiFID II/MIFIR RTS 22 field 9.
  • Code: BuyerDecisionMaker
  • Description: The party or person who, having legal authority to act on behalf of the trade counterparty acting as Buyer as defined in this coding scheme, made the decision to acquire the financial instrument. Corresponds to "buyer decision maker" as defined in ESMA's MIFIR RTS 23 report. This does not refer to the decision maker for what is traditionally called the "Buy side"; for that, see the "Client Decision Maker" role.
  • Code: ClearingClient
  • Description: An organization that clears trades through a clearing house, via a clearing broker (member of the clearing house) who acts as an agent on its behalf. The term "client" refers to the organization's role in the clearing process in relation to its clearing broker, and not whether it is a price maker or taker in the execution process.
  • Code: ClearingExceptionParty
  • Description: A party to the trade that claims a clearing exception, such as an end-user exception under Dodd-Frank Act provisions.
  • Code: ClearingFirm
  • Description: Organization that submits the trade to a clearing house on behalf of the principal. Synonyms/alternates: Futures Commission Merchant (FCM), Clearing Broker, Clearing Member Firm. Some implementations use "Clearing Broker" as synonym.
  • Code: ClearingOrganization
  • Description: The organization that acts as a central counterparty to clear a derivatives contract. This is used to represent the role of Central Counterparties (CCPs) or Derivative Clearing Organizations (DCOs). Sometimes called "ClearingService". Some implementations also use the term "Clearer".
  • Code: Client
  • Description: Client as defined under ESMA MIFIR. This is generally the investor or other client of an investment firm, and is synonymous with the Beneficiary in many circumstances.
  • Code: ClientDecisionMaker
  • Description: The party or person who, having legal authority to act on behalf of a trade counterparty, made the decision to acquire or sell the financial instrument.
  • Code: ConfirmationPlatform
  • Description: Organization serving as a financial intermediary for the purposes of electronic confirmation or providing services for post-processing of transactional data.
  • Code: ContractualParty
  • Description: A party to a contractual document. If the intended usage relates to the context of the trade lifecycle, more specific annotations have been defined which might be more appropriate.
  • Code: Counterparty
  • Description: An economic counterparty to the trade. Synonym: principal.
  • Code: CounterPartyAffiliate
  • Description: Organization officially attached to the counterparty, e.g. partner, branch, subsidiary.
  • Code: CounterPartyUltimateParent
  • Description: The topmost entity or organization, within the corporate hierarchy, responsible for the counterparty.
  • Code: CreditSupportProvider
  • Description: Organization that enhances the credit of another organization (similar to guarantor, but may not fully guarantee the obligation).
  • Code: Custodian
  • Description: Organization that maintains custody of the asset represented by the trade on behalf of the owner/principal.
  • Code: DataSubmitter
  • Description: Entity submitting the transaction report to the competent authority.
  • Code: DisputingParty
  • Description: Organization that is disputing the trade or transaction.
  • Code: DocumentRepository
  • Description: A marketplace organization whose purpose is to maintain document records. If the intended usage relates to the context of the trade lifecycle, more specific annotations have been defined which might be more appropriate.
  • Code: ExecutingBroker
  • Description: The (generally sell-side) organization that executed the trade; the price-making party.
  • Code: ExecutingEntity
  • Description: Entity executing the transaction. If the transaction is executed directly by the reporting party, it will be the reporting party. If it is executed by an execution agent or an affiliated party on behalf of the reporting party, it will be that affiliate or agent.
  • Code: ExecutionAgent
  • Description: The (generally buy-side) organization that acts to execute trades on behalf of an investor. Typically this is an investment manager or asset manager, and also makes the investment decisions for the investor. If required, a separate InvestmentDecision role can be specified to distinguish that the party making the investment decision is different.
  • Code: ExecutionFacility
  • Description: The facility, exchange, or market where the trade was executed. Synonym: Swap Execution Facility, Designated Contract Market, Execution Venue.
  • Code: Guarantor
  • Description: Organization that backs (guarantees) the credit risk of the trade.
  • Code: MarginAffiliate
  • Description: Margin affiliate as defined by U.S. margin and capital rules §23.151.
  • Code: OrderTransmitter
  • Description: The entity transmitting the order to the reporting firm. Synonym: Transmitting Firm.
  • Code: PrimeBroker
  • Description: The organization that takes on or took on the credit risk for this trade by stepping in between the two economic parties (without a central counterparty clearing mechanism).
  • Code: PriorTradeRepository
  • Description: The trade repository at which the trade was reported previous to the current trade repository.
  • Code: PTRRCompressionProvider
  • Description: A party providing a post trade risk reduction service in the form of compression.
  • Code: PTRRRebalancingProvider
  • Description: A party providing a post trade risk reduction service in the form of portfolio rebalancing.
  • Code: PublicationVenue
  • Description: The reporting service (whether trade repository, market data service, or exchange/facility/venue data distribution service) that published the report of this trade.
  • Code: ReportingParty
  • Description: The party with the regulatory responsibility to report this trade.
  • Code: ReportingPartyAffiliate
  • Description: Organization officially attached to the reporting party, e.g. partner, branch, subsidiary.
  • Code: ReportingPartyUltimateParent
  • Description: The topmost entity or organization, within the corporate hierarchy, responsible for the reporting party.
  • Code: Seller
  • Description: A counterparty in a trade, which performs in one of the following capacities: 1) it transfers or agrees to transfer in the future an instrument or title to that instrument in exchange for payment, 2) it writes a derivatives instrument such as an option or a swap in which it provides risk protection to the buyer. This does not refer to the broker/dealer or other organization on what is typically called the "Sell side"; for that, see the "Executing Broker" role. Corresponds to "Seller" as defined in certain regulations such as ESMA MiFID II/MIFIR RTS 22 field 16.
  • Code: SellerDecisionMaker
  • Description: The party or person who, having legal authority to act on behalf of the trade counterparty acting as Seller as defined in this coding scheme, made the decision to sell the financial instrument. Corresponds to "seller decision maker" as defined in ESMA's MIFIR RTS 23 report. This does not refer to the decision maker for what is traditionally called the "Sell side"; for that, see the "Trader" person role.
  • Code: SettlementAgent
  • Description: The organization that makes or receives payments on behalf of the given principal party.
  • Code: TradeRepository
  • Description: An organization that maintains records of the trade for regulatory reporting purposes.
  • Code: TradeSource
  • Description: The organization that originally supplied the record of the trade. In the context of regulatory reporting, it is the submitter of the trade record to a regulator or TR.
  • Code: TradingManager
  • Description: The entity responsible for managing the assets/investments of this party. Synonym: Asset Manager, Investment Manager, Trading Advisory.
  • Code: TradingPartner
  • Description: An entity with which this party trades from time to time, i.e. with which it acts as a counterparty on some transactions. This role is used for static reference data, not individual transactions.

Accounts need to have a human or robot field   code

Look for correct terminology (actor type?).

Add sub-menus to shell   code

There are a lot of entries in the shell main menu; we need to group them. The groups could be based on the component.

Notes:

  • shell has no parties or counterparties commands.
  • It should be possible to set the "output type" or format in the shell, from json to table. Find the correct terminology for this.

System provisioner needs an icon   code

Dialog is using default ORE icon.

Add an organisation type scheme entity   code

Indicates a type of organization.

  • Obtained on 2016-06-13
  • Version 2-0
  • URL: http://www.fpml.org/coding-scheme/organization-type-2-0.xml
  • Code: MSP
  • Name: Major Swap Participant
  • Description: A significant participant in the swaps market, for example as defined by the Dodd-Frank Act.
  • Code: NaturalPerson
  • Name: Natural Person
  • Description: A human being.
  • Code: non-SD/MSP
  • Name: Non Swap Dealer or Major Swap Participant
  • Description: A firm that is neither a swap dealer nor a major swaps participant under the Dodd-Frank Act.
  • Code: SD
  • Name: Swap Dealer
  • Description: Registered swap dealer.

Add currency and country party-visibility junctions   code

Add junction tables (party_currencies, party_countries) that control which currencies and countries a given party can see and use. The underlying reference data definitions remain shared at the tenant level; only visibility is per-party.

  • Acceptance Criteria
    • Junction tables created with proper constraints.
    • Party-scoped queries for currencies and countries filter through junctions.
    • Default: all currencies/countries are visible to all parties (restriction is opt-in).
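
A minimal sketch of the intended filtering semantics, assuming an empty junction set means "no restriction" (the opt-in default above); the names here are illustrative, not the real schema API:

```cpp
#include <set>
#include <string>
#include <vector>

// Hypothetical sketch: reference data stays shared at the tenant level; a
// per-party junction (e.g. party_currencies) only restricts visibility.
// An empty junction set means "no restriction", the opt-in default.
std::vector<std::string> visible_currencies(
    const std::vector<std::string>& all_codes,
    const std::set<std::string>& party_junction) {
    if (party_junction.empty())
        return all_codes; // default: everything is visible
    std::vector<std::string> result;
    for (const auto& code : all_codes)
        if (party_junction.count(code) != 0)
            result.push_back(code); // keep only codes linked to the party
    return result;
}
```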

Update currency details to use tabs   code

We need a main tab with the currency-related properties, then a "system" tab with temporal data (read-only even in edit mode), and an "image" tab with the image used to represent the currency. The latter could also contain a description or notes.

Currencies: Country relationship   code

At present currencies are linked to flags directly via an image ID. They should also be linked to a country, where such a linkage exists.

Currencies: Currency Taxonomy Enrichment   code

  • Goal: Enhance ores_refdata_currencies_tbl to support logical grouping, reporting, and basic behavior defaults.
  • Requirements:
    • Field 1: asset_class (enum/varchar). Purpose: defines the nature of the instrument. Allowed values: fiat, commodity (for XAU/XAG), synthetic, supranational.
    • Field 2: market_tier (enum/varchar). Purpose: drives UI priority, liquidity expectations, and risk limits. Allowed values: majors (g10), emerging, exotic, frontier, historical.
  • Example Entries:

    USD | fiat | g10
    BRL | fiat | emerging
    

Currencies: Party-Specific Settlement Rules   code

  • Goal: Define the "Where." This models what a specific "House" entity (or child) is capable of doing.
  • Table: ores_refdata_settlement_capabilities.
  • Logic: Does "London Desk" have a local Vostro account for "BRL"?
| Field Name | Type | Description |
|---|---|---|
| capability_id | UUID | PK. |
| party_id | INT/FK | Reference to internal parties. This can be "The House" or a specific child entity. |
| currency_code | CHAR(3)/FK | Reference to ores_refdata_currencies_tbl. |
| can_settle_physically | BOOLEAN | If true, we can settle in this currency. |
| can_cash_settle | BOOLEAN | If true, we can settle in cash in another currency. |
| default_settlement_mode | ENUM | physical (deliverable) or cash_settled (non-deliverable/NDF). |
| is_active | BOOLEAN | To disable capabilities if an account is frozen or closed. |

Then, to determine which account to use to settle we need something like ores_refdata_party_nostro_accounts:

| Field | Description |
|---|---|
| party_id | Which of our branches owns this account. |
| currency_code | The currency of the account. |
| agent_bank_id | The external bank (e.g., HSBC, Citi). |
| account_number | The actual IBAN/account string. |
| is_primary | Boolean telling the system which account to use by default. |
| purpose | ENUM: General, CLS, Intercompany, Custody. |

This tells us what account to use when settlement comes. We cannot have a capability without at least one account.
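
The primary-account selection described above could be sketched as follows; the nostro_account struct and settlement_account helper are hypothetical illustrations of the table's intent, not existing code:

```cpp
#include <optional>
#include <string>
#include <vector>

// Hypothetical sketch of nostro account selection: among the accounts a party
// holds in a currency, prefer the one flagged primary, else the first match.
// An empty result surfaces the invariant "no capability without an account".
struct nostro_account {
    std::string party_id;
    std::string currency_code;
    std::string account_number;
    bool is_primary{};
};

std::optional<nostro_account> settlement_account(
    const std::vector<nostro_account>& accounts,
    const std::string& party_id, const std::string& currency_code) {
    std::optional<nostro_account> fallback;
    for (const auto& a : accounts) {
        if (a.party_id != party_id || a.currency_code != currency_code)
            continue;
        if (a.is_primary)
            return a;     // the primary account wins
        if (!fallback)
            fallback = a; // remember the first non-primary match
    }
    return fallback;      // nullopt means the capability is misconfigured
}
```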

Then, for each currency which can be cash settled, we need to know into which currencies they can be cash settled. For that we have ores_refdata_settlement_cash_options. This is a Junction Table that defines the valid "Settlement Pairs." This table answers: "If we are cash-settling Currency X, what are the allowed 'Destination' currencies for this House Entity?"

| Field | Type | Description |
|---|---|---|
| capability_id | FK | Links to ores_refdata_settlement_capabilities. |
| settlement_currency | CHAR(3) | The "destination" currency (e.g., USD, EUR). |
| is_primary | BOOLEAN | The default currency the desk prefers to use for NDFs. |
| fixing_source_id | FK | Which index provides the rate to convert X to the settlement currency. |

If can_cash_settle is true, we must have a corresponding entry in this table.

  • The "Relative" Logic Requirement:

    The system must perform a hierarchical lookup: check for a rule specific to the child party + currency; if not found, check for a rule for the house + currency; if still not found, default to physical for g10 and cash_settled for exotic currencies.
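
The hierarchical lookup can be sketched as below; rule_key, the party identifiers, and the tier strings are assumptions for illustration only:

```cpp
#include <map>
#include <string>
#include <utility>

// Hypothetical sketch of the hierarchical lookup: child party rule first, then
// the house rule, then a tier-based default (physical for g10, else cash).
enum class settlement_mode { physical, cash_settled };

using rule_key = std::pair<std::string, std::string>; // (party_id, currency)

settlement_mode default_settlement_mode(
    const std::map<rule_key, settlement_mode>& rules,
    const std::string& child_party, const std::string& house_party,
    const std::string& currency, const std::string& market_tier) {
    if (auto it = rules.find({child_party, currency}); it != rules.end())
        return it->second; // rule specific to the child party
    if (auto it = rules.find({house_party, currency}); it != rules.end())
        return it->second; // fall back to the house rule
    return market_tier == "g10" ? settlement_mode::physical
                                : settlement_mode::cash_settled;
}
```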

Currencies: Contractual Settlement (The "Agreements" Table)   code

  • Goal: Define the "How." This is the bridge to specific counterparties.
  • Table: ores_refdata_settlement_agreements.
  • Logic: "Even though London can settle NDFs, for this specific Hedge Fund, we always settle in USD."

Fields:

| Field | Description |
|---|---|
| party_id | Our side. |
| counterparty_id | Their side. |
| currency_code | The currency. |
| agreed_settlement_type | NDF, Physical, NetUSD. |
| agreement_ref | e.g., "ISDA-2023-GS". |

Add trade import support to shell   code

Add a shell command for importing ORE portfolio XML files. This should use the existing ores.ore importer and trade_mapper infrastructure to parse the portfolio file and create trades.

The shell command should accept:

  • Portfolio XML file path
  • Target book ID (required, since ORE has no book concept)
  • Optional counterparty mapping (e.g., --counterparty CPTY_A=<uuid>)

Acceptance criteria:

  • Shell command imports trades from an ORE portfolio XML file.
  • Trades are assigned to the specified book.
  • Counterparty mapping is supported.
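
Parsing the --counterparty NAME=UUID option might look like the following sketch; parse_counterparty_mapping is a hypothetical helper, not an existing shell function:

```cpp
#include <optional>
#include <string>
#include <utility>

// Hypothetical sketch: parse one "--counterparty NAME=UUID" argument value
// into a (counterparty code, party UUID string) pair; malformed input yields
// nullopt so the shell can reject the command with a usage message.
std::optional<std::pair<std::string, std::string>>
parse_counterparty_mapping(const std::string& arg) {
    const auto pos = arg.find('=');
    if (pos == std::string::npos || pos == 0 || pos + 1 == arg.size())
        return std::nullopt; // need non-empty NAME and UUID around '='
    return std::make_pair(arg.substr(0, pos), arg.substr(pos + 1));
}
```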

Add trade import support to HTTP API   code

Add an HTTP endpoint for importing ORE portfolio XML files. This should use the existing ores.ore importer and trade_mapper infrastructure.

The endpoint should accept a multipart form with:

  • The portfolio XML file
  • Target book ID
  • Optional counterparty mappings as JSON

Acceptance criteria:

  • HTTP endpoint imports trades from an ORE portfolio XML file.
  • Returns a summary of imported trades (count, any validation errors).

Investigate dashql for ideas   code

Associating annotations with entities   code

It should be possible to associate annotations with any entity in the system: for example, with books, portfolios, trades, etc. This could be a very simple table with a UUID, a text field, and the usual timestamps. We could then add a notes tab to entities which loads all notes matching that UUID and shows them chronologically. The table needs to be temporal, but the display is a simple list:

  • ID: UUID of the note.
  • Entity ID: UUID of the entity being annotated.
  • Text: annotation text.
  • User: the "modified by" of the entry.
  • Time: the "valid from" of the entry.
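
A minimal sketch of the record and the chronological display order; the annotation struct and notes_for helper are illustrative only:

```cpp
#include <algorithm>
#include <chrono>
#include <string>
#include <vector>

// Hypothetical sketch of the annotation record described above; field names
// mirror the bullet list, and notes_for reproduces the chronological display.
struct annotation {
    std::string id;        // UUID of the note
    std::string entity_id; // UUID of the entity being annotated
    std::string text;      // annotation text
    std::string user;      // "modified by" of the entry
    std::chrono::system_clock::time_point valid_from; // "valid from"
};

// Returns all notes for one entity, oldest first.
std::vector<annotation> notes_for(std::vector<annotation> all,
                                  const std::string& entity_id) {
    all.erase(std::remove_if(all.begin(), all.end(),
                  [&](const annotation& a) { return a.entity_id != entity_id; }),
              all.end());
    std::sort(all.begin(), all.end(),
              [](const annotation& a, const annotation& b) {
                  return a.valid_from < b.valid_from;
              });
    return all;
}
```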

Restore list windows after login   code

After login, reopen all entity list windows that were open when the user last closed the application. Save the set of open list windows and their geometry in QSettings on close, then call showListWindow() on the corresponding controllers after onLoginSuccess().

This is the simplest and highest-value phase of session restore: list windows need no entity-specific data, just a controller type identifier. The existing showListWindow() API handles everything.

Files to modify:

  • MainWindow.hpp/cpp - add saveWindowSession() / restoreWindowSession(); save in closeEvent(), restore at end of onLoginSuccess()
  • EntityController.hpp/cpp - add virtual method to enumerate open windows as serializable records (controller type + window key + geometry)

Also cover the 5 special non-controller windows (event viewer, shell, data librarian, telemetry viewer, connection browser).

Acceptance criteria:

  • Open several list windows, close the app, reopen and log in: same windows reappear
  • Window geometry (size, position within MDI area) is restored
  • Special non-controller windows are also restored
  • No windows open if settings are empty (first launch)
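
To illustrate the shape of the saved records (the real implementation would store them via QSettings), here is a hypothetical, Qt-free round-trip of controller type, window key, and geometry:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hypothetical sketch of the per-window records saved on close. The real
// implementation would persist these via QSettings; the "type|key|x,y,w,h"
// string here is purely to illustrate the shape of the data.
struct window_record {
    std::string controller_type; // e.g. "currencies"
    std::string window_key;      // list windows need only the controller type
    int x{}, y{}, w{}, h{};      // geometry within the MDI area
};

std::string serialize(const std::vector<window_record>& records) {
    std::ostringstream os;
    for (const auto& r : records)
        os << r.controller_type << '|' << r.window_key << '|'
           << r.x << ',' << r.y << ',' << r.w << ',' << r.h << '\n';
    return os.str();
}

std::vector<window_record> deserialize(const std::string& s) {
    std::vector<window_record> records;
    std::istringstream is(s);
    std::string line;
    while (std::getline(is, line)) {
        window_record r;
        std::istringstream ls(line);
        std::string geometry;
        std::getline(ls, r.controller_type, '|');
        std::getline(ls, r.window_key, '|');
        std::getline(ls, geometry);
        std::istringstream gs(geometry);
        char comma = 0; // consumes the ',' separators
        gs >> r.x >> comma >> r.y >> comma >> r.w >> comma >> r.h;
        records.push_back(r);
    }
    return records;
}
```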

Restore detail and history windows after login   code

Extend session restore to reopen detail and history windows for specific entities. This is significantly more complex than list-only restore because detail/history windows receive fully-hydrated domain objects, but on restore we only have an identifier string.

Each controller needs a new async path:

  • restoreDetailWindow(identifier) - fetch entity by code/UUID from server, then call showDetailWindow(entity)
  • restoreHistoryWindow(identifier) - fetch entity, then call showHistoryWindow(entity)

For text-PK entities (currencies, countries, business centres, etc.), the identifier is the code string. For UUID-PK entities (parties, counterparties, books, portfolios, etc.), the identifier is the UUID string.

Files to modify:

  • EntityController.hpp/cpp - add virtual restoreWindowFromSession(type, identifier)
  • All ~28 *Controller.cpp files - implement the async fetch + show path
  • MainWindow.cpp - extend restore logic to handle detail/history window types

Depends on: "Restore list windows after login"

Acceptance criteria:

  • Detail and history windows for previously-viewed entities are restored
  • If an entity no longer exists on the server, the window is silently skipped
  • Version windows (read-only historical versions) are also restored

Restore detached window state after login   code

Extend session restore to preserve whether windows were detached (floating) or attached to the MDI area. Save the detach state and screen-absolute geometry for detached windows. On restore, recreate detached windows as floating Qt::Window instances at their saved screen positions.

Also handle edge cases:

  • Saved geometry references a monitor that no longer exists: fall back to attaching the window to the MDI area
  • Per-user session storage using session/<username>/windows QSettings key, so different users get their own workspace layout

Files to modify:

  • MainWindow.cpp - extend save/restore to include detach state and absolute geometry
  • DetachableMdiSubWindow.hpp/cpp - add isDetached() accessor if not present; add restoreDetachedState() method

Depends on: "Restore detail and history windows after login"

Acceptance criteria:

  • Detached windows reappear as floating windows at their saved screen positions
  • Attached windows reappear inside the MDI area at their saved positions
  • Invalid screen geometry gracefully falls back to MDI-attached
  • Different usernames get independent session layouts

Methodology screen review   code

  • Make the name column bigger.
  • Do not show description and URI by default.
  • Use tabs in the detail window.
  • Show all metadata in the details window.

Librarian errors   code

When there is a failure publishing a dataset we just see "failed" in the wizard without any further details. Server log file says:

2026-01-21 22:21:07.676351 [DEBUG] [ores.dq.service.publication_service] Publishing dataset: slovaris.currencies with artefact_type: Solvaris Currencies
2026-01-21 22:21:07.676381 [ERROR] [ores.dq.service.publication_service] Unknown artefact_type: Solvaris Currencies for dataset: slovaris.currencies
2026-01-21 22:21:07.676412 [ERROR] [ores.dq.service.publication_service] Failed to publish slovaris.currencies: Unknown artefact_type: Solvaris Currencies
2026-01-21 22:21:07.676437 [INFO] [ores.dq.service.publication_service] Publishing dataset: slovaris.country_flags (Solvaris Country Flag Images)
2026-01-21 22:21:07.676460 [DEBUG] [ores.dq.service.publication_service] Publishing dataset: slovaris.country_flags with artefact_type: Solvaris Country Flag Images
2026-01-21 22:21:07.676491 [ERROR] [ores.dq.service.publication_service] Unknown artefact_type: Solvaris Country Flag Images for dataset: slovaris.country_flags
2026-01-21 22:21:07.676518 [ERROR] [ores.dq.service.publication_service] Failed to publish slovaris.country_flags: Unknown artefact_type: Solvaris Country Flag Images
2026-01-21 22:21:07.676542 [INFO] [ores.dq.service.publication_service] Publishing dataset: slovaris.countries (Solvaris Countries)
2026-01-21 22:21:07.676566 [DEBUG] [ores.dq.service.publication_service] Publishing dataset: slovaris.countries with artefact_type: Solvaris Countries
2026-01-21 22:21:07.676592 [ERROR] [ores.dq.service.publication_service] Unknown artefact_type: Solvaris Countries for dataset: slovaris.countries
2026-01-21 22:21:07.676618 [ERROR] [ores.dq.service.publication_service] Failed to publish slovaris.countries: Unknown artefact_type: Solvaris Countries

To reproduce, change artefact type in codegen back to "Solvaris Currencies".

General session dialog   code

At present there is no dialog showing sessions for all users; we have to go into accounts to see a specific user's session. Modify this dialog so it can show either all sessions or the sessions for a specific user.

Notes:

  • It should be possible to kick out a user or a selection of users.
  • It should be possible to send a message to a user or to all users.
  • The session icon is just a circle.
  • Add paging support.

Issues with event viewer   code

  • No icon.
  • Can't filter by event type.
  • Always collect events in a ring buffer. Search for the existing story on this.

Add action type to trades   code

Seems like FpML has some kind of trade-activity-like actions.

Improve tag support   code

At present we are not tagging DQ entities very well. For example, crypto currencies should be tagged as both crypto and currencies, etc.

Also tags have duplicates, and versioning does not seem to be working:

ores_frosty_leaf=> select * from dq_tags_artefact_tbl;
              dataset_id              |                tag_id                | version |      name      |              description
--------------------------------------+--------------------------------------+---------+----------------+---------------------------------------
 93d187a9-fa26-4569-ab26-18154b58c5c7 | 65bd2824-bd43-4090-9f1a-a97dfef529ca |       0 | flag           | Country and region flag images
 c8912e75-9238-4f97-b8da-065a11b8bcc8 | 75ee83a7-2c54-448c-b073-8d68107d136e |       0 | cryptocurrency | Cryptocurrency icon images
 30c0e0b8-c486-4bc1-a6f4-19db8fa691c9 | 2d3eace5-733c-47b3-b328-f99a358fe2a8 |       0 | currency       | Currency reference data
 d8093e17-8954-4928-a705-4fc03e400eee | 71e0c456-94fb-4e44-83f1-33ae1139e333 |       0 | currency       | Non-ISO currency reference data
 d3a4e751-ae30-497c-96b1-f727201d536b | c57ee434-bbc2-48c1-9a59-c9799e701288 |       0 | cryptocurrency | Cryptocurrency reference data
 44ff5afd-a1b7-42d2-84a7-432092616c40 | d6453cf6-f23b-4f97-b600-51fcad21c8aa |       0 | geolocation    | IP address geolocation reference data

We need a generic tags table and then a junction between, say, datasets and tags. Delete all of the existing half-baked tags implementations. Also have a look at the story in the backlog about tags and labels.

Authentication failed dialog does not have details   code

At present we show the C++ exception:

Authentication failed: Failed to connect to server: Connection refused [system:111 at /home/marco/Development/OreStudio/OreStudio.local1/build/output/linux-clang-debug/vcpkginstalled/x64-linux/include/boost/asio/detail/reactivesocketconnectop.hpp:97:37 in function 'static void boost::asio::detail::reactivesocketconnectop<boost::asio::detail::rangeconnectop<boost::asio::ip::tcp, boost::asio::anyioexecutor, boost::asio::ip::basicresolverresults<boost::asio::ip::tcp>, boost::asio::detail::defaultconnectcondition, boost::asio::detail::awaitablehandler<boost::asio::anyioexecutor, boost::system::errorcode, boost::asio::ip::basicendpoint<boost::asio::ip::tcp>>>, boost::asio::anyioexecutor>::docomplete(void *, operation *, const boost::system::errorcode &, std::sizet) [Handler = boost::asio::detail::rangeconnectop<boost::asio::ip::tcp, boost::asio::anyioexecutor, boost::asio::ip::basicresolverresults<boost::asio::ip::tcp>, boost::asio::detail::defaultconnectcondition, boost::asio::detail::awaitablehandler<boost::asio::anyioexecutor, boost::system::errorcode, boost::asio::ip::basicendpoint<boost::asio::ip::tcp>>>, IoExecutor = boost::asio::anyioexecutor]']

Add details button.

Ensure DQ dataset checks use code   code

We are still checking for Name:

-- Get the flags dataset ID (for linking images)
select id into v_flags_dataset_id
from ores.dq_datasets_tbl
where name = 'Country Flag Images'
  and subject_area_name = 'Country Flags'
  and domain_name = 'Reference Data'
  and valid_to = ores.utility_infinity_timestamp_fn();

if v_flags_dataset_id is null then
    raise exception 'Dataset not found: Country Flag Images';
end if;

Create subsets of datasets   code

In some cases we may just want to publish a subset of a dataset. For example, Majors, G11, etc. Or maybe these are just separate datasets?

In fact that is what they are. Break apart the larger sets - in particular currencies, countries, cryptos.

Management of roles   code

At present we have system level roles. This is not ideal, you may want to delete roles, add them etc. Do some analysis on the best way to implement these. We could have curated datasets for roles as well. Admin is the exception.

Notes:

  • should be possible to see which accounts have what roles.

Publish history dialog is non-standard   code

  • always on top.
  • no paging.

Add purge button to all entities   code

We should be able to completely trash all data. We probably need a special permission for this but admin should be able to do it. Ideally all entity dialogs should have a purge action.

We should also have a "purge all" button which purges all data from all tables - ignores roles etc. This could be available on the data librarian.

Improve icon for methodology and dimensions   code

At present we have icons which are not very sensible. For methodology we could use something that reminds one of a laboratory.

Add coloured icons   code

At present we are using black and white icons. These are a bit hard to see. We should try the coloured ones and see if it improves readability.

Message handlers are not scalable   code

Claude just mentioned this in passing:

Read(projects/ores.dq/src/messaging/dq_message_handler.cpp)
  ⎿ Error: File content (25665 tokens) exceeds maximum allowed tokens (25000). Please use offset and limit parameters to read specific portions of the file, or use the GrepTool to search for specific content.
● Read(projects/ores.dq/src/messaging/dq_message_handler.cpp)
  ⎿ Read 200 lines

This will not work when we have hundreds of types in a component. We need to split these files by message type.
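
One way to split the monolith is a dispatch map keyed by message type, with each per-type handler registered from its own translation unit; handler_registry and the message-type values here are hypothetical:

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <string>

// Hypothetical sketch: instead of one giant handler file, each message type
// registers its own handler in a dispatch map, so handlers can live in
// separate, small translation units.
using handler = std::function<std::string(const std::string& payload)>;

class handler_registry {
    std::map<std::uint16_t, handler> handlers_;
public:
    void register_handler(std::uint16_t message_type, handler h) {
        handlers_[message_type] = std::move(h);
    }
    std::string dispatch(std::uint16_t message_type,
                         const std::string& payload) const {
        const auto it = handlers_.find(message_type);
        if (it == handlers_.end())
            return "error: unknown message type";
        return it->second(payload);
    }
};
```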

Generate C++ code for FPML   code

We need to fix any limitations we may have in xsdcpp.

Analysis on database name service   code

Is this used? If so, the service should not be connecting to the admin database.

External data issues   code

Problems observed:

  • missing downloaded_at for a lot of data.
  • spurious manifest.txt, we should only have manifest.json.
  • duplication of data in catalog and main manifest. The manifest is the catalog. Remove duplication.
  • for github downloads, add git commit.
  • not clear who "owner" is. It has to map to an account or group in the system.
  • datasets have a catalog, but they should be forced to use the catalog of the manifest:
"catalog": "Open Source Risk Engine",
  • need a domain for data such as XML Schemas.
  • we should ensure the methodology is one of the defined methodologies in the file.
  • since datasets refer to data in subdirectories, we should add the directory to the manifest. Not needed in DB.

Listener error in comms service   code

Investigate this:

2026-01-22 20:40:20.194383 [DEBUG] [ores.comms.net.connection] Successfully wrote frame
2026-01-22 20:40:20.194413 [INFO] [ores.comms.net.server_session] Sent notification for event type 'ores.refdata.currency_changed' with 1 entity IDs to 127.0.0.1:49354
2026-01-22 20:40:21.698972 [ERROR] [ores.eventing.service.postgres_listener_service] Connection error while consuming input.
2026-01-22 20:40:21.699059 [INFO] [ores.eventing.service.postgres_listener_service] Listener thread stopped.

Add database support for tests   code

We could create tables for logging:

  • test suite, test case. contain the names of test suites and test cases.
  • test suite run, test case run. specific runs. useful to get details of timing. Is logging enabled, etc. Tenant ID. Debug or release.
  • time series for test duration, test suite duration.
  • we could add a simple UI to ores.qt to see what the tests are doing over time.
  • test count charts.

Notes:

  • if we keep destroying the environment to test scripts we will lose valuable historical test data. Maybe we should have a "test database" which we destroy infrequently; in truth we only care about the test tables. So we would use the test database for telemetry etc. but the environment database to exercise the tests. Or perhaps we just need to capture some data into a time series DB, for example ores_test_local1.
  • however, we want the service to be able to connect to the test database, so that we can display test related information in the UI.
  • test database should link back to tenant ID in case we want to inspect data.
  • test failures become much easier to investigate: we can just browse the UI for the failed test and see its log.
  • also upload information from Catch2 XML including exceptions etc.
  • group tests by component.
  • ideally we want all environments to write to the same test database. We want to compare data across environments. But we need to know which run is which.
  • log the git commit version and if it's dirty or not.
  • we probably need a separate story for this but we should look into adding code coverage as well.
  • Claude can then plug into all of this information. For a given test run, we could generate a summary report and then have Claude analyse it.

Instrument components with telemetry context   code

Now that logging has been integrated with telemetry, the next step is to instrument key components with the TLOG_SEV macro to enable trace correlation.

Tasks:

  • Instrument server_session with root span on connection, child spans per request.
  • Instrument client_session with spans for outgoing requests.
  • Pass telemetry_context through message handlers in ores.iam, ores.refdata, etc.
  • Add spans for database operations in ores.database.

Create span collection and export infrastructure   code

The telemetry component has span types defined but no infrastructure to collect and export completed spans.

Tasks:

  • Create span_collector interface for accumulating completed spans.
  • Implement span_exporter interface (similar to log_exporter).
  • Create file_span_exporter that writes spans as JSON Lines.
  • Integrate span export with lifecycle_manager or create telemetry_provider.
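
The file_span_exporter could follow the JSON Lines convention: one JSON object per line, appended to a file. A minimal Python sketch of the idea (the real implementation is C++, and the span fields shown here are assumptions):

```python
import json


class FileSpanExporter:
    """Writes completed spans as JSON Lines: one JSON object per line."""

    def __init__(self, path: str) -> None:
        self.path = path

    def export(self, spans: list[dict]) -> None:
        # Append each span as a single compact JSON line so the file can
        # be tailed and processed incrementally.
        with open(self.path, "a", encoding="utf-8") as f:
            for span in spans:
                f.write(json.dumps(span, sort_keys=True) + "\n")


# Usage (field names are hypothetical):
# exporter = FileSpanExporter("spans.jsonl")
# exporter.export([{"trace_id": "abc", "span_id": "1", "name": "db.query"}])
```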

Add a sample set of gravatars for profiles   code

We've downloaded the Human Profile Photos Dataset. The images are not labelled (img123.jpg etc.) and there are too many of them. We need to label them so we can use them to generate synthetic profiles to test the system and to ensure diversity. We probably need the following classes: race (a coarse approximation is fine), gender, age (again a coarse approximation: kid, teenager, adult, senior or some such classification). Then we need to create a subset of the dataset which is a representative sample.

Gemini script that uses ollama:

import ollama
import os
import pandas as pd
import json

# Configuration
IMAGE_FOLDER = './human-profile-photos' # Path to your dataset
OUTPUT_FILE = 'labeled_dataset.csv'
MODEL = 'llama3.2-vision'

# Prompt designed for structured output
PROMPT = """
Analyze the person in this image and provide a JSON response with exactly these keys:
- "race": (e.g., White, Black, Asian, Hispanic, etc.)
- "gender": (Male, Female)
- "age_group": (Kid, Teenager, Adult, Senior)

Return ONLY the JSON object.
"""

def label_images():
    results = []
    image_files = [f for f in os.listdir(IMAGE_FOLDER) if f.lower().endswith(('.jpg', '.jpeg', '.png'))]

    print(f"Starting labeling for {len(image_files)} images...")

    for filename in image_files:
        path = os.path.join(IMAGE_FOLDER, filename)

        try:
            response = ollama.chat(
                model=MODEL,
                format='json', # Forces the model to output valid JSON
                messages=[{
                    'role': 'user',
                    'content': PROMPT,
                    'images': [path]
                }]
            )

            # Parse the response
            data = json.loads(response['message']['content'])
            data['filename'] = filename
            results.append(data)
            print(f"Labeled: {filename}")

        except Exception as e:
            print(f"Error processing {filename}: {e}")

    # Save to CSV
    df = pd.DataFrame(results)
    df.to_csv(OUTPUT_FILE, index=False)
    print(f"Done! Results saved to {OUTPUT_FILE}")

if __name__ == "__main__":
    label_images()

Add staging support   code

We need to add support for staging for all entities, in preparation for the authorisation queue. We probably should just call this "authq" rather than staging.

Notes:

  • server side writes to staging table instead of production table. Write contains signature.
  • user opens the authq for an entity and sees the entry. Authorising signs the row. If there are enough signatures, the row is promoted into production with the last signature. This happens via a stored proc which checks where we are in the state machine. Once finished, we mark the row in staging as completed and copy it into production.
  • end users open the entity dialog. This shows all live rows (e.g. those in production) plus recent deletes, plus "pre-live" rows which are rows waiting for authorisation.
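
The promotion rule in the second note could be sketched as a pure function (names are hypothetical; the real check would live in the stored procedure that walks the state machine):

```python
def can_promote(signatures: list[dict], required_count: int) -> bool:
    """A staged row is promoted once it carries enough signatures from
    distinct signers; the same user signing twice does not count twice."""
    distinct_signers = {s["signed_by"] for s in signatures}
    return len(distinct_signers) >= required_count


# Example: a two-signature policy.
sigs = [{"signed_by": "alice"}, {"signed_by": "bob"}]
# can_promote(sigs, 2) is True; [alice, alice] would not satisfy it.
```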

Add support for row signing   code

It would be good to have users sign the changes they make.

Gemini:

# Specification: Zero-Knowledge Row-Level Data Signing

## 1. Objective

Implement a system within PostgreSQL and a Client-side application to ensure
row-level data integrity using digital signatures. The system must support
**schema evolution** (changing which columns are signed) and **zero-knowledge
key management** (the server never sees the user's plain-text private key).

## 2. Core Components

### A. The Signature Registry (The "Recipe")

To handle schema changes, we use a versioned registry.

- **Table:** `signature_registry`
- **Purpose:** Defines which columns constitute the "canonical payload" for a
  specific version.
- **Mechanism:** A function/query that takes a table row, filters it by the
  versioned column list, and produces a **Deterministic Canonical JSON**
  (alphabetically sorted keys, consistent date formats).
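
A deterministic canonical JSON is straightforward to produce in Python with sorted keys and fixed separators; date normalisation (assumed here to be done by the caller, e.g. to ISO 8601 UTC) must happen before serialisation:

```python
import json


def canonical_json(row: dict) -> str:
    """Serialise a row deterministically: alphabetically sorted keys, no
    whitespace. Callers must pre-normalise dates to a fixed string format."""
    return json.dumps(row, sort_keys=True, separators=(",", ":"))


# The same logical row always yields byte-identical output, regardless of
# the order in which the keys were supplied:
a = canonical_json({"name": "EUR", "iso_code": 978})
b = canonical_json({"iso_code": 978, "name": "EUR"})
# a == b → '{"iso_code":978,"name":"EUR"}'
```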

### B. Zero-Knowledge Key Management

- **Storage:** A table (e.g., `user_keys`) stores the user's **Public Key** and
  an **Encrypted Private Key** (the "Blob").
- **Process:** The client encrypts the Private Key using a key derived from the
  user's password before upload.
- **Temporal Logic:** This table is **Temporal** (using System-Period
  Versioning). This ensures that every historical Public Key is preserved,
  allowing us to verify old signatures even after a user rotates their keys.

### C. The Signing Workflow

- 1. **Client** retrieves the "Active" Registry Version and their Encrypted
  Private Key.
- 2. **Client** decrypts the Private Key locally.
- 3. **Client** constructs the Canonical JSON from the row data based on the
  Registry Recipe.
- 4. **Client** signs the JSON and sends the record to the DB along with the
  `signature` and `registry_version_id`.

## 3. Data Integrity Schema

The target data tables must include:

- `signature` (BYTEA/TEXT): The cryptographic result.
- `sig_version_id` (FK): Links to `signature_registry`.
- `key_version_id` (FK): Links to the specific version in the **Temporal Key
  Table** to prevent clock-drift issues during verification.

## Point 4: Unified Identity & System Signing

While users sign with asymmetric keys (RSA/ECDSA), system processes—like
automated batch updates or background tasks—use a high-speed **HMAC (Hash-based
Message Authentication Code)** approach. This allows "System Accounts" to
participate in the same signature infrastructure without the overhead of
public/private key pairs.

- **The Machine Secret:** The system uses a 256-bit secret key (stored as an
  environment variable or a database configuration parameter).
- **The "Secret" Signature:** When a system account updates a row, it generates
  a signature using `HMAC_SHA256(canonical_json, system_secret)`.
- **Verification:** The verification query treats the "Signature" column as
  polymorphic. If the row belongs to a user, it uses the **Public Key**; if it
  belongs to a system account, it re-calculates the **HMAC** using the internal
  secret to verify integrity.
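
The HMAC path for system accounts maps directly onto Python's standard library; the secret source below is an assumption:

```python
import hashlib
import hmac


def system_signature(canonical_json: str, secret: bytes) -> str:
    """HMAC-SHA256 signature for system-account writes."""
    return hmac.new(secret, canonical_json.encode("utf-8"),
                    hashlib.sha256).hexdigest()


def verify_system_signature(canonical_json: str, secret: bytes,
                            signature: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(system_signature(canonical_json, secret),
                               signature)


# secret = os.environ["ORES_SYSTEM_SECRET"].encode()  # hypothetical variable
```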

## Point 5: The Checkpointed Merkle Chain (Timeline Lock)

To prevent "Ghost Deletions" (where an attacker deletes an entire row) or
"History Rewriting," the system implements a Merkle-style chain. This creates a
mathematical dependency between rows, making it impossible to change one record
without breaking every subsequent record.

- **Row Chaining:** Every row includes a `row_hash` (generated by the fast
  **XXH3** 64-bit algorithm). This hash is a combination of:
  `Hash(Current_Data + Current_Signature + Previous_Row_Hash)`.
- **The Genesis Salt:** The very first row in the database uses a well-known
  value (`0xFFFFFFFFFFFFFFFF`) as its "Previous Hash" to start the chain.
- **Periodic Checkpointing:** Every 1,000 records (or every hour), a background
  service calculates a **Master Checkpoint Hash** (an HMAC of all `row_hash`
  values in that block).
- **The Audit Log:** This Master Hash is stored in a separate, append-only
  `audit_checkpoints` table. This acts as a "Timeline Lock"—once a checkpoint is
  written, the previous 1,000 rows are effectively "sealed" and cannot be
  modified or deleted without the system detecting a break in the chain.
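
The chaining and checkpointing rules can be sketched as follows. SHA-256 stands in for XXH3 here because XXH3 is not in the Python standard library (the third-party `xxhash` package would provide it); the genesis value follows the story:

```python
import hashlib
import hmac

GENESIS = "ffffffffffffffff"  # well-known "previous hash" for the first row


def row_hash(data: str, signature: str, previous_hash: str) -> str:
    """Hash(Current_Data + Current_Signature + Previous_Row_Hash)."""
    payload = (data + signature + previous_hash).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


def checkpoint(row_hashes: list[str], secret: bytes) -> str:
    """Master Checkpoint Hash: an HMAC over all row hashes in the block."""
    return hmac.new(secret, "".join(row_hashes).encode("utf-8"),
                    hashlib.sha256).hexdigest()


# Chaining two rows: tampering with row 1 changes h1, which breaks h2 and
# every subsequent row, and the checkpoint no longer verifies.
h1 = row_hash('{"iso_code":978}', "sig1", GENESIS)
h2 = row_hash('{"iso_code":826}', "sig2", h1)
```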

## 6. Implementation Requirements for Claude

- 1. **Database Layer:**
  - Create the `signature_registry` and temporal `user_keys` tables.
  - Provide a PostgreSQL function to generate a sorted Canonical JSON string
    from a record given a `version_id`.
- 2. **Client Layer (JS/Node/Python):**
  - Logic to derive an encryption key from a password (PBKDF2/Argon2).
  - Logic to sign the Canonical JSON using the decrypted Private Key.
- 3. **Verification Layer:**
  - A query that joins the Data Table with the Temporal Key History and the
    Registry to re-generate the hash and verify it against the stored signature.
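
The client-side key-derivation step could use PBKDF2 from the standard library (Argon2 needs a third-party package); the iteration count and key length below are illustrative:

```python
import hashlib


def derive_encryption_key(password: str, salt: bytes,
                          iterations: int = 600_000) -> bytes:
    """Derive a 32-byte key for encrypting the private-key blob.
    The salt is stored alongside the blob; it is not secret."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                               salt, iterations, dklen=32)


# salt = os.urandom(16)  # generated once per user, stored with the blob
```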

### The Final "Story" Summary for Claude:

> "I am building a PostgreSQL-based row-level integrity system. Users sign rows
> using an **E2EE Private Key** stored as an encrypted blob. System accounts use
> an **HMAC secret** to sign their changes. To prevent row deletion or tampering
> with history, I am using an **XXH3-based Merkle Chain** where each row points
> to the hash of the previous row. Finally, a background service creates **HMAC
> Checkpoints** of these hashes to provide a verifiable audit trail of the
> database timeline. Please implement the SQL triggers and the verification
> logic for this architecture."

Notes:

  • the system should be designed in such a way that any table which requires signing can also require multiple signatures. For example, the query includes the signature field itself. Then the next signature has a "parent ID" (look for proper technical term). The main table just joins against the last signature but we can follow the trail back. The state machine determines how many signatures are required to promote from staging to main table.

Support tenant specification in shell account commands   code

Currently, shell commands like accounts info <username> only search within the current session's tenant. This creates a UX gap for SuperAdmins who need to manage accounts across multiple tenants.

Affected commands:

  • accounts info
  • accounts roles
  • accounts permissions
  • accounts assign-role
  • accounts revoke-role

Acceptance criteria:

  • Commands should accept an optional tenant identifier (hostname or tenant ID)
  • Format could be accounts info user1@acme.localhost or accounts info user1 --tenant acme.localhost
  • SuperAdmins should be able to view/manage accounts in any tenant
  • Regular users should only be able to view accounts in their own tenant
  • Clear error message when user lacks cross-tenant permissions

Stop and start service does not trigger ores.qt reconnection   code

We seem to just remain in a weird disconnected state. Clicking disconnect seems to freeze the UI.

Consider using UUID in the database layer   code

At present we are using std::string and then mapping to boost UUID.

Generate protocol docs via codegen   code

At present the protocol documentation is badly out of date: we are not generating it consistently, and we keep making manual changes to the document. We need to split it into two documents: an overview document which we maintain manually, and the list of versions, messages etc. which is automatically generated. We should use mustache.

Improve error message boxes   code

Rename show details to just details. Icon should be red?

Use exponential backoff for database problems   code

At present we have a constant retry:

2026-02-08 14:56:43.670204 [INFO] [ores.comms.service.app.application] Database still unavailable, retrying in 5 seconds…
2026-02-08 14:56:48.685224 [INFO] [ores.comms.service.app.application] Database still unavailable, retrying in 5 seconds…
2026-02-08 14:56:53.698886 [INFO] [ores.comms.service.app.application] Database still unavailable, retrying in 5 seconds…
2026-02-08 14:56:58.712379 [INFO] [ores.comms.service.app.application] Database still unavailable, retrying in 5 seconds…
2026-02-08 14:57:03.726010 [INFO] [ores.comms.service.app.application] Database still unavailable, retrying in 5 seconds…

Notes:

  • we should allow clients to connect and return an error stating DB is unavailable.
  • stop service does not show any log lines about shutting down.
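
A capped exponential backoff with jitter could replace the constant five-second delay; the base, cap and attempt count below are illustrative:

```python
import random


def backoff_delays(base: float = 1.0, cap: float = 60.0, attempts: int = 8):
    """Yield retry delays: base * 2^n, capped at `cap`, with full jitter
    (a uniform draw between 0 and the capped delay) to avoid thundering
    herds when several services reconnect at once."""
    for attempt in range(attempts):
        delay = min(cap, base * (2 ** attempt))
        yield random.uniform(0, delay)


# Deterministic upper bounds per attempt: 1, 2, 4, 8, 16, 32, 60, 60 seconds.
```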

Add composite primary keys   code

We probably already have a story for this.

projects/ores.iam/include/ores.iam/repository/accountpartyentity.hpp:

sqlgen::PrimaryKey<std::string> accountid;
std::string tenantid;
std::string partyid;

@gemini-code-assist (high): The comment on line 32 indicates a composite primary key. For a junction table, both accountid and partyid should be part of the composite primary key. partyid should also be wrapped in sqlgen::PrimaryKey to correctly define the composite key.

Suggested change: std::string partyid; → sqlgen::PrimaryKey<std::string> partyid;

@mcraveiro: Won't fix. This follows the established junction entity pattern in the codebase (see datasetbundlememberentity.hpp): the left column uses sqlgen::PrimaryKey<std::string> and the right column uses plain std::string. The sqlgen library handles composite keys this way — the actual SQL DDL defines both columns as the composite primary key via the migration script, not via the C++ entity annotation.

Fix "informations" plural naming in contact information entities   code

The plural of "information" is "information" (uncountable noun). Several codegen models and generated files incorrectly use "informations":

  • party_contact_information_domain_entity.json: entity_plural is party_contact_informations (should be party_contact_information).
  • counterparty_contact_information_domain_entity.json: same issue.
  • Repository section: entity_plural_short and entity_plural_words also affected.

This naming propagates to C++ class names, function names, SQL table references, and protocol message names. Fixing requires:

  1. Update the codegen JSON models.
  2. Regenerate affected C++ files.
  3. Update SQL table names and population scripts if they use the plural form.
  4. Verify no downstream references break.

Acceptance criteria:

  • No "informations" appears in model files, generated code, or SQL scripts.
  • Build and tests pass after renaming.

Fix security bug with shell   code

As reported by Gemini:

projects/ores.qt/include/ores.qt/ClientManager.hpp, lines 302-305:

/**
 * @brief Get the stored password used for the current session.
 */
std::string storedPassword() const { return storedpassword_; }

@gemini-code-assist (security-critical): Exposing the stored password via a public getter storedPassword() introduces a significant security risk. Storing raw passwords in memory after authentication is already a sensitive practice, but providing a public accessor increases the attack surface within the application. Any component that can access the ClientManager can now retrieve the user's password in plain text.

A more secure approach should be considered:

  • Token-based authentication: the main session could request a short-lived, single-purpose token from the server, which the shell window can then use to authenticate its own session. This avoids passing the raw password around.
  • Session management within ClientManager: instead of the ShellMdiWindow creating its own session, ClientManager could be responsible for creating and providing a fully authenticated clientsession object to the shell window, without ever exposing the credentials used to create it.

Create an AAD build of ORE   code

Requires further investigation.

Links:

  • GH: QuantLibAAD: "As a demonstrator of integration of the XAD automatic differentiation tool with real-world code, the latest release of QuantLib can calculate risks (sensitivities) with the help of XAD. The performance achieved on sample applications is many-fold superior to what has been reported previously with other tools. This demonstrates production quality use of the XAD library in a code-base of several hundred thousand lines."
  • GH: xad: "XAD is a high-performance C++ automatic differentiation library designed for large-scale, performance-critical systems. It provides forward and adjoint (reverse) mode automatic differentiation via operator overloading, with a strong focus on: Low runtime overhead; Minimal memory footprint; Straightforward integration into existing C++ codebases. For Monte Carlo and other repetitive workloads, XAD also offers optional JIT backend support, enabling record-once / replay-many execution for additional performance boost."
  • GH: forge: "Forge compiles mathematical expressions to optimized x86-64 machine code with automatic gradient computation. It follows a record-once, compile-once, evaluate-many paradigm designed for workloads where the same computation is repeated with varying inputs."
  • GH: xad-forge: "Forge JIT backends for XAD automatic differentiation. This library implements JIT backends for XAD using the Forge C API as the code generation engine. When XAD records a computation graph, xad-forge compiles it to native x86-64 machine code for fast re-evaluation."
  • [Quantlib-users] QuantLibAAD: AAD with JIT Replay (Record-Once / Replay-Many): email discussing XAD and montecarlo.

Add ores.compute or ores.grid   code

Analysis with Gemini:

This refined agile story provides a deep dive into the specific fields and relational constraints discussed. It maintains the Org-mode structure and strictly follows the BOINC nomenclature.

  • Epic: Financial Compute Grid Implementation Details
    • Feature: Data Model & Entity Definition

      To provide a robust, strictly-typed orchestration layer, we must implement the following entities within the PostgreSQL instance.

      • 1. The Host (Node) Entity

        Represents the physical or virtual compute resource.

        • Table: hosts
        • Key Fields:
          • id: UUID (Primary Key).
          • external_id: TEXT (User-defined name/hostname).
          • location_id: INTEGER (FK to site/region table).
          • cpu_count: INTEGER (Total logical cores).
          • ram_mb: BIGINT (Total system memory).
          • gpu_type: TEXT (e.g., 'A100', 'None').
          • last_rpc_time: TIMESTAMPTZ (Last heartbeat from the Node).
          • credit_total: NUMERIC (Total work units successfully processed).
      • 2. The Application & Versioning (App Executable)

        Defines the "What" – the engine being wrapped.

        • Table: apps
        • Key Fields:
          • id: SERIAL (PK).
          • name: TEXT (e.g., 'ORESTUDIO').
        • Table: app_versions
        • Key Fields:
          • id: SERIAL (PK).
          • app_id: INTEGER (FK).
          • wrapper_version: TEXT (Version of our custom wrapper).
          • engine_version: TEXT (Version of the third-party binary).
          • package_uri: TEXT (Location of the zipped Wrapper + App bundle).
          • platform: TEXT (e.g., 'linuxx8664').
      • 3. The Workunit (Job Template)

        The abstract problem definition. Does not contain results.

        • Table: workunits
        • Key Fields:
          • id: SERIAL (PK).
          • batch_id: INTEGER (FK).
          • app_version_id: INTEGER (FK).
          • input_uri: TEXT (Pointer to zipped financial data/parameters).
          • config_uri: TEXT (Pointer to ORE/Llama XML/JSON config).
          • priority: INTEGER (Higher = sooner).
          • target_redundancy: INTEGER (Default: 1. Set > 1 for volunteer/untrusted nodes).
          • canonical_result_id: INTEGER (Nullable; updated by Validator).
      • 4. The Result (Execution Instance)

        The bridge between the DB and PGMQ.

        • Table: results
        • Key Fields:
          • id: SERIAL (PK).
          • workunit_id: INTEGER (FK).
          • host_id: INTEGER (FK, Nullable until dispatched).
          • pgmq_msg_id: BIGINT (The lease ID from PGMQ).
          • server_state: INTEGER (1: Inactive, 2: Unsent, 4: In Progress, 5: Done).
          • outcome: INTEGER (Status code: Success, Compute Error, Timeout).
          • output_uri: TEXT (Where the Wrapper uploaded the result zip).
          • received_at: TIMESTAMPTZ.
      • 5. The Batch & Assimilator State

        Handles the finance-specific "Batch" requirement and dependencies.

        • Table: batches
        • Key Fields:
          • id: SERIAL (PK).
          • external_ref: TEXT (Link to Finance UI/Project ID).
          • status: TEXT (Open, Processing, Assimilating, Closed).
        • Table: batch_dependencies
        • Key Fields:
          • parent_batch_id: INTEGER (FK).
          • child_batch_id: INTEGER (FK).
        • Table: assimilated_data (TimescaleDB Hypertable)
        • Key Fields:
          • time: TIMESTAMPTZ (Logical time of financial observation).
          • batch_id: INTEGER (FK).
          • metric_key: TEXT (e.g., 'portfoliovar').
          • metric_value: NUMERIC.
      • Workflow Summary Diagram
        • [ ] Schema must enforce that a Result cannot be marked 'Success' without a valid output_uri.
        • [ ] The batch_state must be dynamically computable via a view or updated via trigger to show % completion.
        • [ ] The app_versions table must support "side-by-side" versions for A/B testing risk engines.

Digital signatures   code

As suggested by gemini:

  1. Digital Signatures (Non-Repudiation)

For high-stakes systems, you may want to store a cryptographic hash of the record.

When a Business Unit or Counterparty is updated, generate a hash of the new state.

Sign it with the user's certificate or a system-level key.

If an attacker (or a rogue DBA) tries to modify the temporal history directly in the database, the hash check will fail.

We need to be able to update the schema over time. One easy way to achieve this is by saving a JSON blob of the row at the time of signing (ideally generated by a Postgres function), signing that blob and recording it. Then in the future we can make sure that the fields we recorded did not change at that version. This should be quite easy to implement. We could also take the git approach and create some kind of tree where all signed documents rely on previously signed documents, making it quite hard to tamper with the system.

Ideally when we create an account we need to associate keys with it.

Add languages to ores.qt   code

To ensure we are not hard-coding English, we should test with one or more languages we can understand. Add Portuguese, Spanish and/or French and check the application looks vaguely correct.

Add DB schema version table   code

At present we have no way of knowing what version of the schema we are running. We should probably look into sqitch.

Links:

  • GH: golang-migrate: "Database migrations written in Go. Use as CLI or import as library."
  • GH: tern: "Tern is a standalone migration tool for PostgreSQL. It includes traditional migrations as well as a separate optional workflow for managing database code such as functions and views."
  • GH: sqitch: "Sqitch is a database change management application."

ORE Sample Data   code

We added examples and XSDs from ORE. We should consider some improvements to this dataset:

  • remove unnecessary files (notebooks, pngs, pdfs, etc).

Add a "is alive" message   code

We need to brainstorm this. At present we can only tell whether a server is there by connecting to it. It would be nice to give the user a visual indicator that the server is not up as soon as they type the host, though this may not be a good idea.

Notes:

  • could tell client if registration / sign-up is supported.

Remove bootstrap guards from validation functions   code

The codegen template sql_schema_table_create.mustache generates a bootstrap guard in every validation function:

if not exists (select 1 from X limit 1) then
    return p_value;
end if;

This pattern silently skips validation when the lookup table is empty, which was originally intended to avoid errors during initial population. However, correct population ordering in foundation_populate.sql makes these guards unnecessary. Worse, they mask real errors: if a lookup table is accidentally empty, inserts silently succeed with invalid data instead of failing loudly.

There are 20 instances across the codebase (16 in refdata, 1 in dq, 3 in iam).

Tasks:

  • Update sql_schema_table_create.mustache to stop generating the bootstrap guard in the validation function template.
  • Regenerate all affected SQL files.
  • Remove bootstrap guards from any hand-written validation functions.
  • Verify foundation_populate.sql ordering ensures all lookup tables are seeded before any dependent table triggers fire.
  • Run full schema recreation and pgTAP tests to confirm nothing breaks.

Change reason and categories need permissions   code

We need some very specific permissions as these are reg-sensitive.

Change reason not requested on delete   code

At present you can delete entities without providing a change reason.

Remember dialog sizes and positions   code

At present we need to resize dialogs frequently. We should write this to QSettings.

Geo-location tests fail for some IP Addresses   code

The error is probably happening because the address range is not supposed to appear in geolocation data (10.0.0.1 is a private RFC 1918 address).

<Catch2TestRun name="ores.geo.tests" rng-seed="3117545047" xml-format-version="3" catch2-version="3.12.0">
  <TestCase name="geolocation_result_default_construction" tags="[service][#geolocation_service_tests]" filename="/home/marco/Development/OreStudio/OreStudio.local1/projects/ores.geo/tests/geolocation_service_tests.cpp" line="38">
    <OverallResult success="true" skips="0" durationInSeconds="1.00031"/>
  </TestCase>
  <TestCase name="lookup_with_boost_asio_address" tags="[service][#geolocation_service_tests]" filename="/home/marco/Development/OreStudio/OreStudio.local1/projects/ores.geo/tests/geolocation_service_tests.cpp" line="90">
    <Expression success="false" type="CHECK" filename="/home/marco/Development/OreStudio/OreStudio.local1/projects/ores.geo/tests/geolocation_service_tests.cpp" line="102">
      <Original>
        result.error() == geolocation_error::address_not_found
      </Original>
      <Expanded>
        2 == 1
      </Expanded>
    </Expression>
    <OverallResult success="false" skips="0" durationInSeconds="1.00013"/>
  </TestCase>

We don't see anything in the logs:

2026-01-09 19:16:19.370327 [INFO] [ores.testing.test_database_manager] Database context created successfully
2026-01-09 19:16:19.370360 [DEBUG] [ores.geo.service.geolocation_service] Geolocation lookup for 10.0.0.1. SQL: SELECT country_code FROM ores.geoip_lookup('10.0.0.1'::inet)
2026-01-09 19:16:19.386331 [DEBUG] [ores.geo.service.geolocation_service] Geolocation lookup for 10.0.0.1. Total rows: 0
2026-01-09 19:16:19.386475 [INFO] [ores.geo.tests] Lookup result for boost::asio::ip::address
2026-01-09 19:16:19.386695 [DEBUG] [catch2] Section ended: lookup_with_boost_asio_address (assertions: 1)
2026-01-09 19:16:19.386741 [INFO] [catch2] Test case ended: lookup_with_boost_asio_address - PASSED
2026-01-09 19:16:19.386767 [INFO] [catch2]   Assertions: 1 passed, 0 failed, 1 total

We should write the assertions to the logs.

Add support for staging   code

  • on import, are we importing one currency at a time? We should import the entire file.

Add a flag for human readable time   code

We should be able to use regular timestamps or human readable time.
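
The flag could simply switch between raw and human readable rendering; helper and format names below are hypothetical:

```python
from datetime import datetime, timezone


def format_timestamp(epoch_seconds: float, human: bool) -> str:
    """Render a timestamp either as raw epoch seconds or as a human
    readable ISO-8601-style string in UTC."""
    if not human:
        return str(epoch_seconds)
    dt = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    return dt.strftime("%Y-%m-%d %H:%M:%S UTC")


# format_timestamp(0, human=True) → "1970-01-01 00:00:00 UTC"
```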

Retrieve NAT'ed IP address from local IP   code

It would be nice to be able to resolve to the NAT'ed IP address. Gemini:

#include <iostream>
#include <sstream>
#include <string>
#include <boost/asio.hpp>

using boost::asio::ip::tcp;

std::string get_public_ip() {
    try {
        boost::asio::io_context io_context;

        // 1. Resolve the address for api.ipify.org
        tcp::resolver resolver(io_context);
        tcp::resolver::results_type endpoints =
            resolver.resolve("api.ipify.org", "http");

        // 2. Connect to the server
        tcp::socket socket(io_context);
        boost::asio::connect(socket, endpoints);

        // 3. Formulate the HTTP GET request
        std::string request =
            "GET / HTTP/1.1\r\n"
            "Host: api.ipify.org\r\n"
            "Connection: close\r\n\r\n";

        // 4. Send the request
        boost::asio::write(socket, boost::asio::buffer(request));

        // 5. Read and check the status line
        boost::asio::streambuf response;
        boost::asio::read_until(socket, response, "\r\n");

        std::istream response_stream(&response);
        std::string http_version;
        unsigned int status_code;
        response_stream >> http_version >> status_code;

        if (status_code != 200) {
            return "Error: HTTP Status " + std::to_string(status_code);
        }

        // 6. Read and discard the HTTP headers (terminated by a blank line)
        boost::asio::read_until(socket, response, "\r\n\r\n");
        std::string header;
        while (std::getline(response_stream, header) && header != "\r") {
        }

        // 7. Read the rest of the body until EOF
        boost::system::error_code ec;
        while (boost::asio::read(socket, response,
                                 boost::asio::transfer_at_least(1), ec)) {
            // Keep reading until EOF.
        }

        // api.ipify.org returns only the IP as plain text in the body
        std::stringstream ss;
        ss << &response;
        return ss.str();

    } catch (std::exception& e) {
        return std::string("Exception: ") + e.what();
    }
}

int main() {
    std::cout << "Fetching public IP..." << std::endl;
    std::cout << "Your Public IP is: " << get_public_ip() << std::endl;
    return 0;
}

Users should be able to add picture to profile   code

It would be useful to have avatars. We can then display those in other places.

Assorted UI polish work for detached mode   code

Break these down into their own stories:

  • icons for CRUD are not enabled when in detached mode.
  • no status bar in detached mode.
  • should we just have toolbars at the detached window level?
  • application should exit when main window is closed.
  • is it possible to dock windows like visual studio?
  • add a detach current window that just detaches that window.
  • disabled menu options are not properly greyed out. Done.

Add modified since for all entities   code

We added support for this only for images. We need to update all other types (currencies, etc.) to support point-in-time gets.

Add widget to manage assets   code

We can't upload flags etc.

Merged stories:

Add an image browser

At present we can't add or remove images.

  • update existing image browser to show tags and allow switching them on and off.

Missing session properties   code

  • disconnect type: orphaned, client disconnect.
  • add version, commit, etc.

Locking an account should log user out   code

At present if you lock an account the user will remain logged in.

Review of Wt   code

Problems:

  • no icons in website for the tab.
  • no flags in currencies in wt.
  • no way of knowing about reload (eventing) in wt. Should we make the reload button change colour?
  • currency edit window in wt is too large, can't see bottom of screen.
  • cannot edit account in wt.
  • iso code field too small, numeric code field too small
  • adding new account crashes wt. saving new currency crashes wt.
[2026-Jan-07 12:16:08.080] 1050920 - [access] "wthttp: 127.0.0.1   POST /?wtd=kh0IzYWIDev5vqgQ HTTP/1.1 200 271"
[2026-Jan-07 12:16:08.080] 1050920 - [info] "WebRequest: took 0.969 ms"
2026-01-07 12:16:15.578713 [TRACE] [ores.iam.repository.account_mapper] Mapping db entity: {"id":"019b9856-215c-7fd4-b139-11821e39d80a","version":1,"username":"newuser3","password_hash":"$scrypt$ln=14,r=8,p=1$wcZS3a7UEPEz8RBhIuOWIA==$vFYxagrQLQ1FXzMhP1u1D4xMJF2G4HMMsmoOxpRsdwYjSChBIsGiuA92cD8dPODlSYoG9uiX6SohLcUaZNUUUg==","password_salt":"","totp_secret":"","email":"newuser3@example.com","modified_by":"bootstrap","valid_from":"2026-01-07 12:02:20","valid_to":"9999-12-31 23:59:59"}
2026-01-07 12:16:15.578787 [TRACE] [ores.iam.repository.account_mapper] Mapped db entity. Result: {"version":1,"id":"019b9856-215c-7fd4-b139-11821e39d80a","recorded_by":"bootstrap","username":"newuser3","password_hash":"$scrypt$ln=14,r=8,p=1$wcZS3a7UEPEz8RBhIuOWIA==$vFYxagrQLQ1FXzMhP1u1D4xMJF2G4HMMsmoOxpRsdwYjSChBIsGiuA92cD8dPODlSYoG9uiX6SohLcUaZNUUUg==","password_salt":"","totp_secret":"","email":"newuser3@example.com","recorded_at":"2026-01-07 12:02:20.000000000Z"}
2026-01-07 12:16:15.578866 [DEBUG] [ores.iam.repository.account_mapper] Mapped db entities.
2026-01-07 12:16:15.578895 [DEBUG] [ores.iam.repository.login_info_repository] Reading all login_info.
2026-01-07 12:16:15.579395 [DEBUG] [ores.iam.repository.login_info_repository] Read all login_info. Total: 1
2026-01-07 12:16:15.579436 [DEBUG] [ores.iam.repository.login_info_mapper] Mapping db entities. Total: 1
2026-01-07 12:16:15.579465 [TRACE] [ores.iam.repository.login_info_mapper] Mapping db entity: {"account_id":"019b9856-215c-7fd4-b139-11821e39d80a","last_ip":"127.0.0.1","last_attempt_ip":"127.0.0.1","failed_logins":0,"locked":0,"last_login":"2026-01-07 12:14:01","online":1,"password_reset_required":0}
2026-01-07 12:16:15.579519 [TRACE] [ores.iam.repository.login_info_mapper] Mapped db entity. Result: {"last_login":"2026-01-07 12:14:01.000000000Z","account_id":"019b9856-215c-7fd4-b139-11821e39d80a","failed_logins":0,"locked":false,"online":true,"password_reset_required":false,"last_ip":"127.0.0.1","last_attempt_ip":"127.0.0.1"}
2026-01-07 12:16:15.579585 [DEBUG] [ores.iam.repository.login_info_mapper] Mapped db entities.

Add a dashboard for users   code

As per screenshots:

user_dashboard_core_ui.png

dashboard_ideas.png

dashboard_ideas_II.png

Notes:

  • add command to shell to list sessions.

Add sign-up approval workflow   code

Should also handle invite codes, etc.

We did most of the work except invite codes.

Listen for events in details dialog   code

At present we are only listening for events in the main dialogs (accounts, currencies). It would be nice to be able to listen for events in the details dialogs too. However, a details dialog needs to listen for events only for that specific entity (e.g. currency pair, account, etc.). Not sure we are set up for this.

Add a system clock   code

At present we are using "now". We should have an ores function for this so that we can time travel if required.
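
The idea could be sketched as a small clock abstraction (all names here are illustrative, not an existing ores API): code asks the clock for "now" instead of calling std::chrono directly, and tests or replays can freeze it at an arbitrary instant.

```cpp
#include <chrono>
#include <functional>

// Hypothetical ores-style clock: callers use ores_clock::now() everywhere,
// so a test can pin time to a fixed instant ("time travel").
class ores_clock {
public:
    using time_point = std::chrono::system_clock::time_point;

    // Default source is the real system clock.
    ores_clock() : source_([] { return std::chrono::system_clock::now(); }) {}

    time_point now() const { return source_(); }

    // Time travel: pin the clock to a fixed instant.
    void freeze(time_point tp) { source_ = [tp] { return tp; }; }

    // Back to the real clock.
    void thaw() { source_ = [] { return std::chrono::system_clock::now(); }; }

private:
    std::function<time_point()> source_;
};
```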

Implement entity history in shell   code

We need to add a command in shell to show entity history.

Notes:

  • for the history diff we could use a simple unified diff format.

Consider adding an "entity waterfall"   code

It would be nice to be able to see what entities have been added, deleted, modified, etc. This would work as a running commentary.

Notification of deletes   code

At present it is easy to see new rows or modified rows in an entity dialog. It is not possible to see deleted rows. One way to do this is to preserve state from before reload by key. Any entities which are not present after reload can be shown as red. History then allows users to re-instate deleted entities.
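
The "preserve state by key" approach could look like the following sketch (function and key representation are illustrative): snapshot the row keys before reload, then any key missing afterwards is a deleted row to paint red.

```cpp
#include <string>
#include <unordered_set>
#include <vector>

// Keys present before the reload but absent afterwards are the deleted rows.
std::vector<std::string> deleted_keys(
    const std::vector<std::string>& before_reload,
    const std::vector<std::string>& after_reload) {
    const std::unordered_set<std::string> after(
        after_reload.begin(), after_reload.end());
    std::vector<std::string> deleted;
    for (const auto& key : before_reload)
        if (after.find(key) == after.end())
            deleted.push_back(key);
    return deleted;
}
```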

Add heat map of user sessions   code

Things to measure:

  • duration of sessions (once we have session table).
  • bytes sent/received per session (possibly 3-D plot?). Also good for anomaly detection.
  • mine github for ideas.

Add currencies update command to shell   code

At present we can only add new currencies. We also need to be able to update. Also, adding currencies requires supplying all parameters.

Subscribe on reconnect fails   code

Logs:

2025-12-13 01:14:35.108648 [DEBUG] [ores.comms.messaging.frame] Successfully deserialized frame subscribe_request (0x10)
2025-12-13 01:14:35.108680 [DEBUG] [ores.comms.net.connection] Successfully deserialized frame, type: subscribe_request (0x10) total size: 60
2025-12-13 01:14:35.108708 [DEBUG] [ores.comms.net.server_session] Received message type subscribe_request (0x10)
2025-12-13 01:14:35.108750 [DEBUG] [ores.comms.messaging.message_dispatcher] Dispatching message type subscribe_request (0x10)
2025-12-13 01:14:35.108811 [WARN] [ores.comms.service.auth_session_service] Authorization failed for subscribe_request (0x10) from 127.0.0.1:45498: not authenticated
2025-12-13 01:14:35.108842 [WARN] [ores.comms.messaging.message_dispatcher] Authorization denied for subscribe_request (0x10) from 127.0.0.1:45498
2025-12-13 01:14:35.108875 [ERROR] [ores.comms.net.server_session] Message dispatch failed: 10

Data in login info looks spurious   code

We see stuff like this:

oresdb=> select * from login_info;
              account_id              |     last_ip     | last_attempt_ip | failed_logins | locked |       last_login       | online
--------------------------------------+-----------------+-----------------+---------------+--------+------------------------+--------
 019a4439-be9e-798e-bf2f-927ca236f84c | 0.0.0.0         | 0.0.0.0         |             0 |      0 | 1969-12-31 23:00:00+01 |      0
 019a3ba6-bd11-709b-b93d-fea9403d3d39 | 127.0.0.1       | 127.0.0.1       |             0 |      0 | 2025-10-31 19:03:48+00 |      1
 019a4431-98f8-79ae-9fc8-f6a6e77a0490 | 192.168.1.100   | 192.168.1.100   |             0 |      0 | 2025-11-02 10:51:32+00 |      1
 019a4431-9a7d-7b3d-a1cb-b9e02d44c804 | 0.0.0.0         | 192.168.1.100   |             1 |      0 | 1969-12-31 23:00:00+01 |      0

Also, we need a login timestamp and a logout timestamp so we can measure session duration.

Currencies displays when not connected   code

At present we can display currencies even before we connect. This is probably ok but we should at least state we are not connected. Alternatively it should be disabled.

Add a delete all button which deletes all currencies   code

It is useful especially in test environments to be able to delete all entities before a re-import.

Notes:

  • purge button.

Implement four-eyes authorisation framework   code

Add a generic authorisation queue that supports four-eyes (maker-checker) workflows. Sensitive operations in production tenants (counterparty onboarding, party modifications) require a second authoriser to approve. The framework should be entity-agnostic: any operation that produces a pending change can be routed through the queue.

  • Acceptance Criteria
    • Pending changes stored with maker identity and timestamp.
    • Checker can approve or reject pending changes.
    • Approved changes applied atomically.
    • Rejected changes logged with reason.
    • Framework is reusable across entity types.

Add KYC workflow for counterparty onboarding   code

In production tenants, counterparty creation requires a KYC (Know Your Customer) process with supporting documentation. Each operational party manages their own counterparties independently. The workflow should integrate with the four-eyes authorisation framework.

  • Acceptance Criteria
    • Counterparty creation in production tenants goes through KYC workflow.
    • Supporting documentation can be attached.
    • Four-eyes approval required before counterparty is active.
    • Each party's counterparties are isolated from other parties.

Lock down data librarian for production tenants   code

The data librarian currently allows unrestricted dataset publication. In production tenants, restrict operations to safe, auditable actions. Bulk imports, GLEIF-based party creation, and other evaluation-only features should be gated by the tenant type.

  • Acceptance Criteria
    • Librarian checks tenant type before allowing operations.
    • Bulk counterparty import disabled in production tenants.
    • GLEIF-based party creation disabled in production tenants.
    • Audit trail for all librarian operations.

Add self-registration and approval for production tenants   code

Allow tenant administrators to self-register. A super admin reviews and approves the registration via the authorisation queue framework. On approval, the tenant is created with a system party and the registrant becomes the tenant admin.

  • Acceptance Criteria
    • Self-registration form captures tenant details.
    • Registration enters authorisation queue for super admin approval.
    • Approved registration creates tenant, system party, and admin account.
    • Rejected registration logged with reason.

Modes of operation   code

We need to discuss this with Lau. Intuitively it seems that different users have different "views" on the functions of the system. If you are in operations you probably want a specific set of screens, and icons on the toolbar to go with them. Trading will have a different set, as will the system administrator, and so on. There should be a way to create "profiles" which change the toolbar. These are distinct from user profiles, which remember which screens to open and their positions.

Make bootstrap operation atomic using SQL transactions   code

The bootstrap operation is currently not atomic. If an exception is thrown after the admin account is created but before the bootstrap mode feature flag is disabled, the system is left in an inconsistent state where:

  • An admin account exists in the database.
  • Bootstrap mode is still enabled (allowing creation of additional admin accounts).
  • Current Flow

    The bootstrap process in bootstrap_service.cpp performs these operations:

    1. Create account via account_service.
    2. Create logininfo via login_info_service.
    3. Assign admin role via authorization_service.
    4. Disable bootstrap mode via system_flags_service.

    Each operation auto-commits independently, creating windows for partial failures.

  • Solution

    Use sqlgen's Transaction class to wrap all bootstrap operations in a single database transaction. The class provides RAII semantics, automatically rolling back in the destructor if commit() was not called.

    // sqlgen's Transaction class pattern:
    ~Transaction() {
        if (!transaction_ended_) {
            rollback();  // Auto-rollback on exception
        }
    }
    
  • Challenges
    • Current architecture creates separate database::context connections per service.
    • Need to pass a shared connection/transaction across services.
    • May require refactoring services to accept an optional transaction parameter.
  • Mitigation

    The existing self-healing mechanism in bootstrap_mode_service::initialize_bootstrap_state() detects when bootstrap mode was not properly disabled after account creation and auto-corrects the state on server startup. This provides defence in depth but is not a substitute for proper transactional integrity.

  • Acceptance Criteria
    • Bootstrap operation completes fully or rolls back entirely.
    • No partial state if any step fails.
    • Self-healing mechanism remains as defence in depth.
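
A minimal RAII sketch of the pattern, in the spirit of sqlgen's Transaction class (the names below are illustrative, not sqlgen's actual API): the destructor rolls back unless commit() was called, so an exception anywhere in the four bootstrap steps undoes them all.

```cpp
#include <functional>
#include <stdexcept>

// Illustrative scoped transaction: rollback runs unless commit() was called.
class scoped_transaction {
public:
    explicit scoped_transaction(std::function<void()> rollback)
        : rollback_(std::move(rollback)) {}
    ~scoped_transaction() {
        if (!committed_) rollback_();  // auto-rollback on exception/early exit
    }
    void commit() { committed_ = true; /* real code: issue COMMIT here */ }

private:
    std::function<void()> rollback_;
    bool committed_ = false;
};

// Usage shape for the bootstrap flow:
//   scoped_transaction tx(rollback_fn);
//   create_account(); create_login_info();
//   assign_admin_role(); disable_bootstrap();
//   tx.commit();
```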

Sign-up dialog should be disabled if feature is disabled   code

At present you can click on sign up and fill in all details but then you get a server error:

2025-12-24 10:46:48.212905 [DEBUG] [ores.iam.messaging.accounts_message_handler] Request: signup_request{username=newuser4, email=newuser4@example.com}
2025-12-24 10:46:48.213049 [INFO] [ores.iam.service.signup_service] Signup attempt for username: newuser4, email: newuser4@example.com
2025-12-24 10:46:48.213144 [WARN] [ores.iam.service.signup_service] Signup rejected: signups are disabled
2025-12-24 10:46:48.213186 [WARN] [ores.iam.messaging.accounts_message_handler] Signup failed for username: newuser4, reason: User registration is currently disabled
2025-12-24 10:46:48.213898 [DEBUG] [ores.comms.messaging.compression] Compressed 68 bytes to 60 bytes using zlib (0x)

Makes more sense for the sign-up button to also be disabled in the client. Trouble is, the client does not know whether sign-ups are enabled before it connects. This means the handshake has to provide some kind of flag saying sign-up is enabled or disabled.
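
One possible shape for this, sketched below (the field name and bit layout are assumptions, not the real protocol): the handshake response carries a capability bitmask, and the client greys out the sign-up button when the bit is clear.

```cpp
#include <cstdint>

// Hypothetical capability bitmask advertised by the server in the handshake
// response, so the client can disable sign-up before authenticating.
constexpr std::uint32_t capability_sign_up = 1u << 0;
// future capabilities: 1u << 1, 1u << 2, ...

constexpr bool sign_up_enabled(std::uint32_t server_capabilities) {
    return (server_capabilities & capability_sign_up) != 0;
}
```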

User interface options   code

This story keeps track of configuration points in the UI which should be added at some point:

  • display time in human readable form - e.g. last month, 3 months ago, etc.
  • auto-reload. If a dialog receives new data, reload it automatically. Defaults to off.

Notes:

  • we need a UI to manage the UI options.
  • should be stored in the server and cached locally with a sqlite db.
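
The "human readable time" option above could be bucketed roughly as follows (thresholds and wording are placeholders, not a spec):

```cpp
#include <chrono>
#include <string>

// Rough relative-time bucketing for display purposes.
std::string humanize(std::chrono::seconds age) {
    using namespace std::chrono;
    const auto day = hours(24);
    if (age < minutes(1)) return "just now";
    if (age < hours(1)) return std::to_string(age / minutes(1)) + " minute(s) ago";
    if (age < day) return std::to_string(age / hours(1)) + " hour(s) ago";
    if (age < day * 30) return std::to_string(age / day) + " day(s) ago";
    if (age < day * 365) return std::to_string(age / (day * 30)) + " month(s) ago";
    return std::to_string(age / (day * 365)) + " year(s) ago";
}
```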

New feature flags   code

  • allow sign-ups. If true, users can create their own accounts. Done.
  • auto-authorise sign-ups. If true, the account is automatically created. If false, admin user has to authorise it.

Notes:

  • we need a UI to manage the UI options.

Perform tests to check database connectivity   code

We added some basic database monitoring but did not perform a lot of testing.

We should also generalise this so that if the database goes down after startup we still perform some kind of retry logic.

See also this point from Gemini:

The new broadcastall method appears to be unused in this pull request. The database status broadcast is implemented directly in ores::comms::net::server::broadcast_database_status without using the subscriptionmanager. If this method is not intended for future use, it could be removed to avoid dead code.

Add sound assets   code

We need to be able to associate sounds with certain events such as disconnect, connect, etc. Users need to be able to choose their own sounds.

Events:

  • new items (reload). Ideally entity specific option.
  • connect, disconnect.

Notes:

  • one possibility is to use llama-tts to generate voice:
./llama-tts --tts-oute-default -p "You have new currencies." && aplay output.wav

We could generate sounds up front for main events. This could be the speech sound theme.

Links:

  • GH: peon-ping: "AI coding agents don't notify you when they finish or need permission. You tab away, lose focus, and waste 15 minutes getting back into flow. peon-ping fixes this with voice lines from Warcraft, StarCraft, Portal, Zelda, and more — works with Claude Code, Codex, Cursor, OpenCode, Kilo CLI, Kiro, Windsurf, Google Antigravity, and any MCP client."
  • VibeVoice: A Frontier Open-Source Text-to-Speech Model: "VibeVoice-Realtime is a lightweight real‑time text-to-speech model supporting streaming text input and robust long-form speech generation. It can be used to build realtime TTS services, narrate live data streams, and let different LLMs start speaking from their very first tokens (plug in your preferred model) long before a full answer is generated. It produces initial audible speech in ~300 ms (hardware dependent)."
  • GH: KittenTTS: "KittenTTS is a compact, fast, and friendly text-to-speech model. It delivers natural voice output at a tiny footprint, designed to run on mobile devices, edge devices, and lightweight servers. It fits under 25 MB and focuses on clear, lifelike speech without sacrificing performance. This project aims to give developers a reliable TTS option that’s easy to deploy, easy to run, and easy to extend."
  • GH: KittenTTS: "Kitten TTS is an open-source realistic text-to-speech model with just 15 million parameters, designed for lightweight deployment and high-quality voice synthesis."

CLI clean-ups   code

  • we still seem to support --entity currencies. This should now be invalid. Done.
  • we are still exporting as JSON. We should instead allow CSV and XML exports only.
  • add recipes for all commands.
  • should be able to list all admin accounts.
  • list command should support table output.
  • is admin should be a bool: --is-admin arg (=0)
  • add account duplicates logic. We should have a single service for this.
  • split application into entities.

Add search to currencies   code

It should be possible to filter the open currencies by a string, matching any field. The user needs to know when the list has been filtered. Ideally we should have buttons at the top per field and filter using those. Filtering should go back to the database rather than just filter what is available in the UI.

Add a generic tagging system   code

It should be possible to associate any entity with one or more tags and then do a search by tags on that entity type. For example, tag a set of currencies as emerging markets and then search by that.

Merged stories:

Currencies should have tags

Examples:

  • metals, emerging markets
  • continent as a tag
  • crypto

At present we are overloading the currency type field.

  • Analysis Labels versus tags
    The underlying structure is fundamentally the same—a **key-value pair
    attached to a resource**—but the distinction is necessary because of the
    **context, governance, and enforcement mechanisms** applied to each one.
    
    The distinction between GCP Labels and Tags boils down to **Annotation vs.
    Policy**.
    
    ---
    
    ## ⚖️ Why the Distinction is Necessary
    
    | Feature              | GCP **Labels** (Annotation)                                                                          | GCP **Tags** (Policy/Governance)                                                                                                                                               |
    |:---------------------|:-----------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
    | **Primary Goal**     | **Organization, filtering, cost allocation, and reporting.**                                         | **Policy enforcement, access control, and hierarchical governance.**                                                                                                           |
    | **Structure**        | **Flexible/Arbitrary.** Key-Value pair set directly on the resource.                                 | **Structured/Centralized.** Key and Value are defined *centrally* as separate resources (at the Organization/Project level) and then *bound* to the resource.                  |
    | **Hierarchy**        | **Flat.** A resource does not inherit labels from its parent folder.                                 | **Hierarchical.** Tags are inherited by child resources (like a VM inheriting a tag from its containing project).                                                              |
    | **Access Control**   | **Low Control.** Any user with permissions to modify the resource can typically modify its labels.   | **High Control.** Dedicated **IAM Roles** are required to create, manage, and even attach Tag Keys and Values. This prevents users from "tagging" their way out of a policy.   |
    | **Enforcement**      | **None.** Labels are metadata and cannot be used directly in IAM conditions to grant or deny access. | **High.** Tags are the *only* mechanism designed for **conditional IAM policies** and Organization Policies (e.g., "deny deletion unless resource has tag `environment:dev`"). |
    | **Use Case Example** | Filtering the console: "Show me all resources labeled `owner:john_doe`."                             | Security: "Only allow network traffic from resources tagged `security_zone:trusted`."                                                                                          |
    
    ### Analogy: The Whiteboard vs. The Building Code
    
    1.  **Labels are like a sticky note on a whiteboard:**
        * They are **flexible**, quickly written, and easy to change.
        * They are used for **informal organization** (cost center, app name, etc.).
        * Changing the note has **no impact on the resource's function** or security.
    
    2.  **Tags are like a building code requirement:**
        * They are **highly structured** and defined by a central authority (the city/organization).
        * They are used to **enforce policy** (safety, compliance).
        * If a resource (a building) doesn't have the correct tag (e.g., a **fire-rating tag**), an automated system (the policy engine) will **deny an action** (e.g., deny occupation or modification).
    
    ---
    
    ## How This Relates to Your SVG System
    
    Your current **three-table model** is structurally a **GCP Label** system—it's
    excellent for annotation and filtering (e.g., "Give me all icons with the tag
    'currency'").
    
    If you wanted to implement a **GCP Tag** concept, you would need to:
    
    1.  **Formalize the Tags:** Create certain tags as *official resources* (like
        `Status` with official values `Draft`, `Approved`, `Production`).
    2.  **Add Permissions:** Implement **application logic** that checks a user's
        role against the tag. For example: "A user needs the `Approver` role to
        apply the `Status:Production` tag to an icon."
    
    The need to distinguish them comes down to providing one system that is
    **flexible for organization (Labels)** and another that is **rigid for security
    and compliance (Tags)**.
    

    Examples:

    • majors, minors, EM, latam, etc. are tags we could have on currencies. On currency pairs, we could also have USD crosses, which is inferred by having USD as one of the pair. However these are not labels because we do not want everyone to start using them. When defining a report, you could associate currencies and currency pairs to the report via the tags.
    • labels are things we could associate with images for example. Flags could have both "countries" and "currencies" labels so that when we want to select a flag as the image for a currency, we could ask for all images labelled "currencies". The labelling can be done by pretty much anyone.
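
An in-memory sketch of the join-table half of a three-table tagging model (struct and function names are illustrative; the real schema will differ), showing the search-by-tag use case:

```cpp
#include <string>
#include <vector>

// One row of the hypothetical entity/tag join table, keyed by entity type
// and id so that any entity kind can be tagged.
struct entity_tag {
    std::string entity_type;  // e.g. "currency"
    std::string entity_id;    // e.g. ISO code
    std::string tag;          // e.g. "emerging markets"
};

// Search-by-tag: all ids of a given entity type carrying a given tag.
std::vector<std::string> entities_with_tag(
    const std::vector<entity_tag>& links,
    const std::string& entity_type, const std::string& tag) {
    std::vector<std::string> out;
    for (const auto& l : links)
        if (l.entity_type == entity_type && l.tag == tag)
            out.push_back(l.entity_id);
    return out;
}
```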

Add "uptime" screen   code

See the claude page for ideas:

anthropic_uptime_page.jpeg

Mine dexter for ideas   code

Dexter is an autonomous financial research agent that thinks, plans, and learns as it works. It performs analysis using task planning, self-reflection, and real-time market data. Think Claude Code, but built specifically for financial research.

Links:

Automated SQL generation   code

At present we are manually creating the SQL. We should be able to do it from sqlgen.

Missing shell functionality   code

The job of the client is to exercise the entirety of the comms protocol, and to provide a way to perform CRUD operations via command line. Note:

  • client never accesses the repositories directly; it should do exactly the same as the Qt client would do.
  • client's only interface is the REPL.
  • client will eventually be used by AI agents.

Client needs to have messages at the entity level:

  • currencies:
    • import currencies from ORE Format. Given a path in the filesystem, it performs the import using the importer and appropriate comms messages.
    • export currencies to ORE format. Uses comms to obtain the currencies, then the exporter to convert, then dumps them into the terminal as ORE XML.
    • list existing currencies as a table or as JSON. We need support for latest, "at time point" and "all" (meaning every single version). It should be possible to supply some filtering by the entity's ID (e.g. --iso-code).
    • delete one or more currencies. User supplies a number of entity IDs (e.g. --iso-code ABC --iso-code CDE --iso-code FGH and so on).
    • add a currency supplying arguments (e.g. --iso-code and so forth).
    • add currencies from JSON.
  • accounts:
    • list existing accounts as a table or as JSON. We need support for latest, "at time point" and "all" (meaning every single version). It should be possible to supply filtering by the entity's ID.
    • delete one or more accounts. User supplies a list of entity IDs.
    • add an account supplying arguments.
    • add accounts from JSON.
  • feature flags:
    • list existing feature flags as a table or as JSON. We need support for latest, "at time point" and "all" (meaning every single version). It should be possible to supply some filtering by the entity's ID.
    • delete one or more feature flags. User supplies a list of entity IDs.
    • add feature flags supplying arguments.
    • add feature flags from JSON.

Notes:

  • update recipes with the new client commands.

Invalid password should not throw   code

At present in the unlock test we have:

BOOST_LOG_SEV(lg, info) << "Locking account by failing 5 login attempts";
auto ip = internet::ipv4();
for (int i = 0; i < 5; ++i) {
    try {
        sut.login(account.username, "wrong_password", ip);
    } catch (...) {}
}

This is very suspicious; a failed login should just return false or the modern C++ equivalent (std::expected?).

Faker with seeds   code

As suggested by phi4:

Faker Usage:

Randomness: Ensure that the use of faker data is appropriate for testing. Consider seeding the random generator for reproducibility in tests.

Notes:

#include "faker-cxx/generator.h"
void setSeed(std::mt19937_64::result_type seed)
Catch::rngSeed()
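
The reproducibility argument can be demonstrated with the engine referenced in the setSeed signature above (std::mt19937_64); faker-cxx's exact hook may vary by version, but the principle is the same: the same seed yields the same sequence.

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Draw n values from a seeded Mersenne Twister; identical seeds give
// identical sequences, which is what makes faker-based tests reproducible.
std::vector<std::uint64_t> draw(std::uint64_t seed, std::size_t n) {
    std::mt19937_64 engine(seed);
    std::vector<std::uint64_t> values;
    for (std::size_t i = 0; i < n; ++i)
        values.push_back(engine());
    return values;
}
```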

Improve handling of error responses   code

As per Gemini code review:

Certainly. Point #2 from the review of `CurrencyHistoryDialog.cpp` addressed
the potential complexity of error checking by suggesting that relying on the
specific response message type is **fragile**.

The goal is to move from:

1.  Client sends **Request A**.
2.  Server returns **Response A** (Success) OR **Error Response** (Failure) OR
    **Response B** (Unexpected success type).
3.  Client checks: *Is the message type exactly **Response A**?*

to a more robust pattern where the client checks for a generic failure response
first.

-----

## 🐞 Fragile Error Check (Current Code)

The current code in `loadHistory` checks for success by expecting *only* the
specific success message type:

```cpp
// Current Fragile Logic
if (result->header().type != comms::protocol::message_type::get_currency_history_response) {
    onHistoryLoadError(QString("Server does not support currency history (received message type %1)")
        .arg(static_cast<int>(result->header().type)));
    return;
}
```

This logic has two main problems:

1.  **Hiding Server Errors:** If the server returns a generic protocol error
    (`message_type::error_response`) because, for example, the client's session
    timed out, the client logs a misleading message: "Server does not support
    currency history." It should be reporting the actual error message sent by
    the server.
2.  **Lack of Standardization:** Every client method needs to implement its own
    logic to handle unexpected types.

-----

## 🛠️ Suggested Improvement: Standardized Error Handling

The improvement is to check for a generic **`error_response`** message type
first, and report its payload/message, before attempting to deserialize the
successful response.

Assuming your system has a standard `error_response` message:

```cpp
void CurrencyHistoryDialog::handleHistoryResponse(const HistoryResult& result) {
    if (!result) {
        onHistoryLoadError(QString::fromStdString(result.error()));
        return;
    }

    // 1. Check for a generic server-side error response
    if (result->header().type == comms::protocol::message_type::error_response) {
        // Assume error_response contains a readable message payload
        auto error_response = risk::messaging::error_response::deserialize(result->payload());
        if (error_response) {
            onHistoryLoadError(QString::fromStdString(error_response->message));
        } else {
            onHistoryLoadError("Server returned a malformed error response.");
        }
        return;
    }

    // 2. Check for the specific SUCCESS response type
    if (result->header().type == comms::protocol::message_type::get_currency_history_response) {
        auto response = risk::messaging::get_currency_history_response::deserialize(result->payload());

        if (!response || !response->success) {
            // Handle success=false within the expected response type
            onHistoryLoadError(QString::fromStdString(response ? response->message : "Invalid or failed history response."));
            return;
        }

        history_ = std::move(response->history);
        onHistoryLoaded();
        return;
    }

    // 3. Handle truly unexpected message type
    onHistoryLoadError(QString("Received unexpected message type %1 from server.")
        .arg(static_cast<int>(result->header().type)));
}
```

By standardizing the **`error_response`** type, the client can always extract
and display the relevant server-side failure reason, leading to much clearer
logging and user feedback.

Do not disable cert verification   code

At present we've hacked cert verify to false. We should not do this. Gemini:

You cannot use Let's Encrypt for development purposes because it requires a public domain name that Let's Encrypt can verify, and your local development server is not publicly accessible. The recommended approach is to create a local certificate authority (CA) using a tool like mkcert to sign your certificates, which allows you to bypass browser warnings for local domains like localhost.

Option 1: Use mkcert for local development

  • Install mkcert: Follow the instructions to install the mkcert tool on your system.
  • Install the local CA: Run the command to install the local CA root certificate into your system's trust store. This is a one-time setup.
  • Generate a local certificate: Use mkcert to generate certificates for your local development domains (e.g., localhost, my-app.local). The generated certificates will be signed by your trusted local CA and will not cause browser warnings.

Option 2: Use Let's Encrypt with a real public domain

  • Purchase a domain: Buy a public domain name (e.g. mydomain.dev).
  • Use an ACME client: Use an ACME client like Certbot to automate the certificate process. You can run certbot on a server that is accessible to the public internet.
  • Complete the validation: The client will need to verify your ownership of the domain through a DNS or HTTP challenge, which requires the domain to be publicly reachable.
  • Deploy the certificate: Let's Encrypt will issue a certificate that you can then deploy to your development server.

Why Let's Encrypt doesn't work for local development

  • Let's Encrypt's primary purpose is to secure public-facing websites by automatically verifying domain ownership.
  • They use ACME protocol challenges (DNS or HTTP) that require the public internet to access your server at the specified domain.
  • Since your local server is not on the public internet, it cannot respond to these challenges, and Let's Encrypt cannot verify your ownership of the domain.

Add [[nodiscard]] to repository operations returning data   code

At present we can create an account and ignore the result, etc. We should be forced to look at the result.

CLI Importing needs to read from database   code

After we do the import into the database, we need to read the currencies again to get the valid from/to.

Improve error message when server is not running   code

At present we get:

Failed to connect to server: Failed to connect to server

If we try again after the error, "authenticating…" shows up in red.

Consider adding log command line options to qt   code

At present we have hard-coded logging options. However, maybe users should be able to change the logging settings from the UI rather than having to restart the app and supply command line options.

Troubleshoot skills in claude   infra

Add entity related charts   code

Have a think on things that could benefit from a graphical display at the entity level. Some ideas:

  • entity history: number of additions, edits, deletions over time. Bar chart. Makes it easier to pick up weird system problems. This is overall across all instances.
  • single entity history. In the history tab for that entity, graph showing the changes to the entity over time.

In addition, we need to be able to support charts in ASCII so that we can see them in the REPL. This will be used by the AI agents.

Links:

Ensure applications work under OSX using github images   code

We need to install the DMG and run the app.

We think the package does not have all of the dependencies, but this did not work:

if(APPLE)
    install(CODE "
        # make sure the bundle is already on disk
        set(BU_CHMOD_BUNDLE_ITEMS TRUE)
        include(BundleUtilities)

        # full path to the real executable inside the bundle
        set(app_exe \"\${CMAKE_INSTALL_PREFIX}/OreStudio.app/Contents/MacOS/OreStudio\")

        # directory where 3rd-party libraries will be copied
        set(libs_dir \"\${CMAKE_INSTALL_PREFIX}/OreStudio.app/Contents/Frameworks\")

        # discover all prerequisites and copy/fix them
        fixup_bundle(\"\${app_exe}\" \"\" \"\${libs_dir}\")
    " COMPONENT Runtime)
endif()

We should ask someone with an OSX machine to test this.

Links:

Make UI/UX look more professional   code

Links:

Analysis from Gemini:

To make your Qt application look like an advanced trading system—even for a
simple CRUD operation screen—requires moving beyond basic desktop application
styles toward the dense, high-contrast, data-rich aesthetic of platforms like
Bloomberg or professional broker terminals.

The current look (a dark table with simple data rows) is a great foundation.
Here are targeted suggestions for your Qt UI to elevate it to a professional,
advanced trading system aesthetic:

## I. Color and Contrast (The Dark Theme)

Your current dark gray background is good, but trading systems use specific
contrast to highlight data significance.

1.  **Define a Palette:** Adopt a limited, functional color palette:

      * **Primary Background:** A very dark, near-black charcoal (`#1A1A1A` or
        `#1E1E1E`). This is the foundation.
      * **Foreground/Text:** Clean white or light gray (`#F0F0F0`).
      * **Accent Color (Neutral):** A subdued corporate blue/cyan (`#007ACC` or
        a subtle green/gold from your branding) for selected rows, borders, and
        input focus.
      * **Data Status Colors:**
          * **Positive (Gains):** Bright, high-contrast green (`#00C853`).
          * **Negative (Losses):** High-contrast red (`#FF3333`).
          * *(While CRUD isn't about gains/losses, you can use these colors for
            status, like 'Active' vs. 'Inactive' currencies).*

2.  **Subtle Depth:** Avoid flat black. Use a slightly lighter shade of gray
    (`#2A2A2A`) for embedded panels, sidebars, and control areas to create
    visual segmentation, giving the impression of modularity.

## II. Typography and Data Presentation

Trading UIs prioritize density and scannability.

1.  **Monospace Font:** For the currency codes and numerical columns, switch to
    a clear, legible **monospace font** (like Consolas, Fira Code, or a custom
    font in Qt). Monospace fonts ensure that all numbers align perfectly in
    columns, which is essential for rapid data comparison.
2.  **Font Sizing:** Use a small, consistent font size (e.g., 10pt or 11pt) to
    fit more data on screen. Use bolding sparingly, primarily for the Currency
    Code (e.g., USD, EUR).
3.  **Visual Alignment:**
      * **Text Columns** (Currency Name): Left-aligned.
      * **Code Columns** (ISO Code, Symbol): Center-aligned.
      * **Numerical Columns** (Rounding, Precision): Right-aligned. **This is
        critical** for financial UIs, as it allows users to compare magnitude
        instantly.

## III. Advanced Table View Enhancements (QTableView/QTableWidget)

Since the core of this screen is a table, focus on making the table look
high-tech.

1.  **Header Styling:**

      * Make column headers slightly stand out with a subtle dark gradient or a
        distinct, slightly brighter background color (e.g., `#282828`).
      * Ensure the header font is crisp (perhaps slightly bolder than the row
        data).
      * Add tiny, clear **sort indicators** to show the current sort direction.

2.  **Row Selection:**

      * The selected row should use your accent color (e.g., a thin blue left
        border or a light blue background fill) with white or light gray text
        for high visibility.
      * Introduce **subtle, faint horizontal rules** (1px in a color like
        `#333333`) to separate rows, which aids readability in dense tables.

3.  **Interactive Elements (Hover):** Implement a very subtle change on row
    hover (e.g., the background darkens by 5%) to indicate interactivity, even
    if clicking doesn't change the view.

## IV. UI Layout and Modularity

Advanced UIs are rarely monolithic; they are built from modular panels.

1.  **Toolbar (Top):** Create a clean, dedicated toolbar area at the top for
    your CRUD operations (`Add`, `Edit`, `Delete`) and your system icons
    (Connection, Reference Data).

      * Use the Fluent UI System Icons you researched (`globe-32-regular`,
        `money-32-regular`).
      * Buttons should be flat, high-contrast text or icons only. On hover, they
        should reveal a subtle gray background or a thin accent border.

2.  **Side Panel (Right or Left):** Instead of a simple dialog for *editing*,
    use a dedicated side panel that slides out or appears next to the table when
    a row is selected.

      * This panel would house the detail view for the selected currency. This
        makes the UI feel like a single workspace rather than navigating modal
        windows.
      * Give this panel a slightly different background shade (`#2A2A2A`).

3.  **Status Bar (Bottom):** Add a sleek, minimal status bar at the bottom. This
    is where you would place your connection status icon (using your proposed
    `globe-32-regular` or `plug-connected-32-regular` icon). It reinforces the
    "system is live" feel.

## V. Qt-Specific Implementation via Stylesheets

In Qt, you achieve this professional look primarily through **QSS (Qt Style
Sheets)**. You will be targeting specific widgets (like `QTableView`,
`QPushButton`, `QLineEdit`) with CSS-like rules.

```css
/* Example QSS Snippets for the Trading Look */

QTableView {
    /* Base style for the data area */
    background-color: #1A1A1A; /* Primary Background */
    gridline-color: #333333; /* Faint row separators */
    color: #F0F0F0;
    border: 1px solid #007ACC; /* Subtle border using accent color */
    selection-background-color: #007ACC; /* Accent Color for selection */
    selection-color: white;
}

QHeaderView::section {
    /* Style for Column Headers */
    background-color: #282828; /* Slightly lighter shade for headers */
    color: #FFFFFF;
    border: none;
    padding: 6px;
    font-size: 11pt;
    font-weight: bold;
}

QLineEdit {
    /* Style for input fields (e.g., in the side panel or search) */
    background-color: #1A1A1A;
    border: 1px solid #444444;
    color: #F0F0F0;
    padding: 5px;
}

QPushButton {
    /* Flat button style for toolbar */
    background-color: transparent;
    border: none;
    color: #F0F0F0;
    padding: 8px 12px;
}

QPushButton:hover {
    /* Hover effect */
    background-color: #2A2A2A;
    border: 1px solid #007ACC;
}
```

Current issues with package install in debian   code

At present we have several issues with packaging:

  • debs are built on ubuntu so they cannot install on latest debian testing as the t64 migration has been completed. Error:
root@lovelace:~# apt install /home/marco/Downloads/orestudio_0.0.4_amd64.deb
You might want to run 'apt --fix-broken install' to correct these.
Unsatisfied dependencies:
 orestudio : Depends: libqt6gui6t64 (>= 6.1.2) but it is not installable
             Depends: libqt6widgets6t64 (>= 6.1.2) but it is not installable
Error: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).
root@lovelace:~# apt install libqt6widgets6t64 libqt6gui6t64
Package libqt6gui6t64 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
  libqt6gui6:i386  libqt6gui6


Package libqt6widgets6t64 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
  libqt6widgets6:i386  libqt6widgets6


Error: Package 'libqt6widgets6t64' has no installation candidate
Error: Package 'libqt6gui6t64' has no installation candidate
  • when starting the package from a directory, it tries to create the log relative to that directory:
[marco@lovelace ~]$ /opt/OreStudio/0.0.4/bin/ores.qt
terminate called after throwing an instance of 'boost::filesystem::filesystem_error'
  what():  boost::filesystem::create_directories: Permission denied [system:13]: "/home/marco/../log", "/home/marco/../log"
Aborted

When investigating a crash we noticed the app is not exiting cleanly. Fix all of these crashes before we go any further.

Add skill to improve UI design   code

Create basic manual   code

We should start populating the manual from the start. Ensure there is a Claude skill that updates the manual as we add new entities. Create a PDF build for it. We can use templates from the thesis. Link it to the site so that you can browse it.

Do not return sensitive fields   code

As per code review:

Major Security Flaw (list_accounts_response): The serialization format explicitly includes highly sensitive fields (password_hash, password_salt, totp_secret) for every domain::account returned. This is a severe security risk. These fields must not be exposed over any messaging protocol, even internal ones, unless it is a dedicated, highly protected replication channel.

Proposed Change: Create a protocol-safe subset structure, e.g., domain::account_summary, which contains only non-sensitive data (id, username, email, is_admin, version, modified_by). Update list_accounts_response to use std::vector<domain::account_summary>.

Change list_accounts_response::accounts to use a non-sensitive type like std::vector<domain::account_summary>.

Local versus remote connectivity   code

In some cases it makes sense to use the UI directly against the database - for example, users may just want to create a simple setup to play around, and the complexity of having to run a server is not helpful. In addition, they may not even require a full-blown PostgreSQL instance - SQLite is enough. It should be possible to set up a connection to a database in this manner. Ideally the client should support both at the same time.

Allow in place editing of currencies   code

For expediency, users should be able to edit multiple currencies directly in the grid and save them in one go. Shift+Enter to edit a field seems interesting.

Users need to be made aware when there are changes in CRUD objects, and there needs to be a reload button to reload data from the database in that case. This could be done by making the reload button a different colour when data is available.

Make the reload button blue when there are changes.

Notes:

  • history windows should also have the notification.

Light system theme results in mix of dark and light   code

We need to ensure we can have both light and dark themes and they work consistently.

Setup Qt properly on Windows   code

Seems like our package is still missing Qt DLLs. Use the windows tool to determine what is missing. Consider installing Qt in the build machine via the built packages.

Add SC script on windows to install ores.service   code

We should package a trivial batch file that sets up the service on windows. We should also consider adding a simple script to setup the service on Linux. Ideally it should be done via the packaging step.
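
A minimal sketch of such a batch file using the built-in `sc` tool (the service name, binary path, and description are assumptions about the packaging layout; note `sc` requires a space after each `option=`):

```bat
rem Sketch of a Windows service install script; paths and names are
rem assumptions about the packaging layout, not the actual layout.
sc create ores.service binPath= "C:\Program Files\OreStudio\bin\ores.service.exe" start= auto
sc description ores.service "OreStudio server"
sc start ores.service
```

On Linux the equivalent would be a packaged systemd unit enabled from the post-install step.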

Package SQL scripts   code

We need to initialise the database on installation. The package should contain all of the necessary SQL as well as a script to do it.

Or maybe we should do everything from within sqlgen and generate the scripts for the cases where we need manual work in database.

Allow supplying the postgres connection URI   code

At present we are supplying all of the components for the connection. It may be easier to allow the entire URI:

  • connection-uri: The connection URI to connect to PostgreSQL. Example: postgresql://postgres:postgres@localhost/postgres.

Consider adding MCP support   code

From Kaggle course:

Model Context Protocol (MCP) is an open standard that lets agents use community-built integrations. Instead of writing your own integrations and API clients, just connect to an existing MCP server.

MCP enables agents to:

  • Access live, external data from databases, APIs, and services without custom integration code
  • Leverage community-built tools with standardized interfaces
  • Scale capabilities by connecting to multiple specialized servers

2.1: How MCP Works

MCP connects your agent (the client) to external MCP servers that provide tools:

  • MCP Server: Provides specific tools (like image generation, database access)
  • MCP Client: Your agent that uses those tools

All servers work the same way through a standardized interface. Architecture:

┌──────────────────┐
│    Your Agent    │
│   (MCP Client)   │
└────────┬─────────┘
         │
         │ Standard MCP Protocol
         │
    ┌────┴────┬────────┬────────┐
    │         │        │        │
    ▼         ▼        ▼        ▼
┌────────┐ ┌─────┐ ┌──────┐ ┌─────┐
│ GitHub │ │Slack│ │ Maps │ │  …  │
│ Server │ │ MCP │ │ MCP  │ │     │
└────────┘ └─────┘ └──────┘ └─────┘

Links:

Log file location for Qt application is non-standard on Linux   code

At present we need to manually create the log directory for the gui to fix this error:

[marco@lovelace bin]$ ./ores.qt
terminate called after throwing an instance of 'boost::filesystem::filesystem_error'
  what():  boost::filesystem::create_directories: Permission denied [system:13]: "/opt/OreStudio/0.0.3/bin/../log", "/opt/OreStudio/0.0.3/bin/../log"
Aborted

We should output the log file to a standard location instead: /var/log for system services, or the XDG state directory (~/.local/state) for per-user runs.

Create shared object interfaces   code

At present we are building shared objects / DLLs for the ores components, but we did not bother defining proper interfaces, exporting symbols etc. This causes problems on windows:

LINK : fatal error LNK1104: cannot open file 'projects\ores.utility\ores.utility.lib'

This is happening because we are not exporting explicitly any symbols. To fix this we did a hack:

if(WIN32 AND MSVC)
    # Export all symbols on windows for now. Bit of a hack.
    set(CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS ON)
endif()

The right solution for this is to annotate all the public types of each SO correctly, exporting symbols for all platforms:

Deep seek analysis:

Yes, Boost provides a cross-platform wrapper for exporting symbols using the
`BOOST_SYMBOL_EXPORT` macro from the **Boost.DLL** library. This macro abstracts
away the compiler-specific keywords required for different platforms.

### 🗂️ Boost's Cross-Platform Symbol Exporting

To export a symbol, you use the `BOOST_SYMBOL_EXPORT` macro in your code. Under
the hood, it expands to the correct compiler-specific attribute:

- On **Windows** with MSVC, it becomes `__declspec(dllexport)`
- On **macOS** and **Linux** with GCC/Clang, it becomes `__attribute__((visibility("default")))`

Here is a basic example of how to use it to export a global variable:

```cpp
#include <boost/config.hpp> // For BOOST_SYMBOL_EXPORT

class my_plugin_api {
    // Your interface definition
};

namespace my_namespace {
    class my_plugin_sum : public my_plugin_api {
        // Implementation
    };

    // Export the 'plugin' variable
    extern "C" BOOST_SYMBOL_EXPORT my_plugin_sum plugin;
    my_plugin_sum plugin;
}
```
*Note: The `extern "C"` is used here to prevent C++ name mangling, making the symbol name
predictable for tools that use C linkage. This is often crucial for a library's public API.*

For exporting factory functions, Boost offers the `BOOST_DLL_ALIAS` macro, which
is often more convenient:

```cpp
#include <boost/dll/alias.hpp> // For BOOST_DLL_ALIAS

namespace my_namespace {
    class my_plugin_aggregator : public my_plugin_api {
        // Implementation
    };

    // Factory function
    boost::shared_ptr<my_plugin_api> create() {
        return boost::shared_ptr<my_plugin_aggregator>(new my_plugin_aggregator());
    }

    // Export the factory function with the alias "create_plugin"
    BOOST_DLL_ALIAS(my_namespace::create, create_plugin)
}
```

### 💡 A Complementary Approach: Controlling Visibility

While Boost's macro solves the declaration problem, for finer control and to
minimize your shared library's public API, combine it with compiler flags that
hide all symbols by default.

- **On Linux and other ELF platforms**, use the `-fvisibility=hidden` flag. You
  can then use a **linker version script** to explicitly list the symbols you
  want to export.
- **On macOS**, use the `-fvisibility=hidden` flag and an **exported symbols
  list** with `-exported_symbols_list` during linking.
- **On Windows**, symbol visibility is typically controlled explicitly via
  `__declspec(dllexport)` or a module definition (.def) file, which
  `BOOST_SYMBOL_EXPORT` already handles.

Setting default visibility to hidden helps create a cleaner, more efficient
library by reducing its footprint, improving load times, and avoiding potential
symbol conflicts.

### 🔧 Summary

For a complete cross-platform solution:

1. **Use Boost.DLL macros**: Incorporate `BOOST_SYMBOL_EXPORT` or
   `BOOST_DLL_ALIAS` in your code to handle platform-specific export keywords.
2. **Hide symbols by default**: Compile your shared library with
   `-fvisibility=hidden` on Linux and macOS. This works in conjunction with the
   Boost macros.
3. **Use version scripts (optional)**: For maximum control on ELF platforms
   (Linux) or via an exported symbols list on macOS, use these linker features
   to define a precise public API.

I hope this helps you build your cross-platform shared library! If you have more
questions about using the Boost.DLL library for loading these symbols at
runtime, feel free to ask.

Links:

Update timestamps to use timezone   code

Add trace context propagation to messaging protocol   code

To enable distributed tracing across client and server, trace context needs to be propagated in the messaging protocol.

Tasks:

  • Extend frame header to include optional trace_id and span_id fields.
  • Update frame serialization/deserialization.
  • Client includes trace context in outgoing request frames.
  • Server extracts trace context and creates child spans linked to client's trace.

Starting UI from file manager does not work   infra

At present we can't start the Qt UI because the file manager thinks it's a video. Maybe we need a .desktop file.

Example desktop file:

[Desktop Entry]
Comment=
Terminal=true
Name=fixvideo
Exec=/home/user/fixvideo.sh %f
Type=Application
Icon=/usr/share/icons/gnome/48x48/apps/gnome-settings-theme.png
Encoding=UTF-8
Hidden=false
NoDisplay=false
Categories=AudioVideo;Player;Recorder;
MimeType=video/dv;v

Source: Is there a standard mode for .ini files?

Tasks:

  • create a desktop file for the application.
  • add an icon.
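
A hedged sketch of what the application's own desktop file might look like (the Exec path matches the install location seen above; the icon name and categories are assumptions):

```ini
[Desktop Entry]
Type=Application
Name=OreStudio
Comment=GUI for ORE
Exec=/opt/OreStudio/0.0.4/bin/ores.qt
Icon=orestudio
Terminal=false
Categories=Office;Finance;
```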

Use string views for static strings   infra

We are creating std::strings where we don't need them, use string views instead.

This is not trivial; when we tried it, a lot of things broke.

Recipes do not show variables in org-babel   infra

At present when we look at a recipe in the site, we cannot tell what the environment variables are:

./ores.console import ${log_args} --currency-configuration ${currency_config_dir}/currencies.xml

It would be nice if log_args etc showed up in the recipe.

Links:

Install OSX package on OSX machine   infra

We need to install and run the OSX package and make sure it works. Check that both the console and the GUI start.

Work through all types required for Example 1   code

We want to be able to visualise all the data types needed to run the most basic example of ORE. For each of these types, create a story.

The files are as follows. First, there are the files in the Input directory:

Specifically:

  • currencies.xml
  • netting.xml
  • ore.xml
  • ore_swaption.xml
  • plot.gp
  • portfolio.xml
  • portfolio_swap.xml
  • portfolio_swap_20151023.xml
  • portfolio_swaption.xml
  • portfolio_swaption_20151023.xml
  • simulation.xml

In addition, we need all of the common inputs under:

These are:

  • calendaradjustment.xml
  • conventions.xml
  • currencies.xml
  • curveconfig.xml
  • fixings_20160205.txt
  • market_20160205.txt
  • market_20160205_flat.txt
  • pricingengine.xml
  • todaysmarket.xml

Finally, we need support for the outputs. We can grab these from the expected outputs:

These are:

  • colva_nettingset_CPTY_A.csv
  • curves.csv
  • exposure_nettingset_CPTY_A.csv
  • exposure_trade_Swap_20y.csv
  • flows.csv
  • log_progress.json
  • netcube.csv
  • npv.csv
  • swaption_npv.csv
  • xva.csv

Event Viewer advanced features   code

Advanced features for the Event Viewer dialog that are not part of the initial implementation.

  • Filtering and Search
    • Filter by event type (dropdown with checkboxes).
    • Filter by source (local/remote/all).
    • Text search across all fields.
    • Time range filter.
  • Real-time vs Paused Mode
    • Pause/Resume button to stop event capture temporarily.
    • Max events spin box to limit history (e.g., last 1000 events).
  • Event Statistics Panel

    Collapsible panel showing:

    • Events per second (rolling average).
    • Event type breakdown (pie chart).
    • Most active event types.
  • Export Functionality
    • Export to JSON file.
    • Export to CSV.
    • Copy selected events to clipboard.
  • Event Replay

    Store captured events and allow replay to test widget behavior:

    void replayEvent(const EventRecord& record) {
        // Re-publish the event to the bus
        eventBus_->publish(record.reconstruct());
    }
    
  • Subscription Visualisation

    A secondary tab showing a graph/table of:

    • All known event types.
    • Which widgets are subscribed to each.
    • Subscription counts over time.
  • Breakpoints / Alerts

    Set conditions to pause or highlight specific events:

    • "Break when currencyChanged with isoCode='USD'".
    • "Highlight all events with more than 10ms processing time".

Far

Stories that we want to capture, but won't work on for a while.

Investigate time series forecasting with LLMs   code

  • TimeCopilot: "TimeCopilot is an open-source forecasting agent that combines the power of large language models with state-of-the-art time series foundation models (Amazon Chronos, Salesforce Moirai, Google TimesFM, Nixtla TimeGPT, etc.). It automates and explains complex forecasting workflows, making time series analysis more accessible while maintaining professional-grade accuracy."
  • A decoder-only foundation model for time-series forecasting: "TimesFM is a forecasting model, pre-trained on a large time-series corpus of 100 billion real world time-points, that displays impressive zero-shot performance on a variety of public benchmarks from different domains and granularities."
  • GH: timesfm: "TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model developed by Google Research for time-series forecasting."
  • HF: reverso: "Efficient time-series foundation models for zero-shot forecasting."
  • reverso: "By combining long convolutions with linear RNN layers, Reverso matches the performance of transformer-based models that are over 100x larger."

Consider integrating whatsapp-mcp   code

A Model Context Protocol (MCP) server that bridges WhatsApp and AI assistants like Claude. It exposes your WhatsApp messages through standardized MCP tools, prompts, and resources - allowing AI to read, search, and send messages on your behalf.

The Vision: Let AI handle your WhatsApp conversations intelligently, with full context and natural language understanding.

Links:

Replace parent entity combo box with server-side searchable widget   code

The parent entity combo boxes in EntityDetailDialog (used by both PartyDetailOperations and CounterpartyDetailOperations) use a single request with offset=0, limit=1000 which silently truncates results if there are more than 1000 entities. The same pattern exists in LookupFetcher for country and business centre lookups.

The fix is to replace the plain QComboBox with a searchable/type-ahead widget using QCompleter that searches server-side as the user types (like the ClientPartyModel/ClientCounterpartyModel pagination used in the MdiWindow list views).

Affected code sites:

  • CounterpartyDetailOperations.cpp:117 - load_all_entities()
  • PartyDetailOperations.cpp:118 - load_all_entities()
  • EntityDetailDialog.cpp:346 - loadAllEntities()
  • LookupFetcher.cpp:80,145 - country and business centre fetches

Note: this is low priority - won't impact us until we have >1000 entities.

Add claude continuous learning skill   code

Every time you use an AI coding agent, it starts from zero. You spend an hour debugging some obscure error, the agent figures it out, session ends. Next time you hit the same issue? Another hour.

This skill fixes that. When Claude Code discovers something non-obvious (a debugging technique, a workaround, some project-specific pattern), it saves that knowledge as a new skill. Next time a similar problem comes up, the skill gets loaded automatically.

Links:

Review skills in claude code templates   code

Ready-to-use configurations for Anthropic's Claude Code. A comprehensive collection of AI agents, custom commands, settings, hooks, external integrations (MCPs), and project templates to enhance your development workflow.

Links:

Add support for protocol debugging in Qt   code

We should add a view of all messages sent and received like the comms champ tools. It should be possible to record a session to file (maybe all the requests?) and then replay that session. You should be able to do it from the UI.

Links:

  • GH: cctoolsqt: "This project contains tool application(s), which can be used to develop, monitor and debug custom binary communication protocols, that were developed using the COMMS Library. All the applications are plug-in based, i.e. plug-ins are used to define I/O socket, data filters, and the custom protocol itself. The tools use Qt framework for GUI interfaces as well as loading and managing plug-ins."
  • How to Use CommsChampion Tools: screenshots of the UI to explore the protocol.

C++ Complexity Measurements   code

Since we are no longer taking a close look at the generated code and since Gemini code review does not catch all issues, we should try to find some automated tools to measure complexity and point to places where generated code is not ideal.

Links:

  • GH: metrixplusplus: "Metrix++ is an extendable tool for code metrics collection and analysis."

Create python script to split xsdcpp output   code

We are generating one very large file with all the types. This is not ideal. Also, this will probably not work for FpML as the schema is even larger.

Investigate claude-code-hook-templates   code

A comprehensive collection of production-ready hooks covering all 8 Claude Code lifecycle events. Includes safety guards, autonomous TDD workflows, PR approval gates, cost controls, and more.

Links:

Investigate Agent-Native Architecture Audit   code

Conduct a comprehensive review of the codebase against agent-native architecture principles, launching parallel sub-agents for each principle and producing a scored report.

Core Principles to Audit:

  • Action Parity - "Whatever the user can do, the agent can do"
  • Tools as Primitives - "Tools provide capability, not behavior"
  • Context Injection - "System prompt includes dynamic context about app state"
  • Shared Workspace - "Agent and user work in the same data space"
  • CRUD Completeness - "Every entity has full CRUD (Create, Read, Update, Delete)"
  • UI Integration - "Agent actions immediately reflected in UI"
  • Capability Discovery - "Users can discover what the agent can do"
  • Prompt-Native Features - "Features are prompts defining outcomes, not code"

Links:

Add support for FIX   code

Links:

  • GH: hffix: "The High Frequency FIX Parser library is an open source implementation of tag=value FIX (classic FIX) intended for use by developers of high frequency, low latency financial software. The purpose of the library is to do fast, efficient encoding and decoding of FIX in place, at the location of the I/O buffer. The library does not use intermediate message objects, and it does no memory allocation on the free store (the “heap”)."

Consider adding "global search" via typesense   code

Typesense:

Typesense is a fast, typo-tolerant search engine for building delightful search experiences.

We could pump specially crafted documents into typesense with IDs and types and then let users search across any type of entity.

Links:

  • GH typesense
  • GH pgtextsearch: Modern ranked text search for Postgres.
    • Simple syntax: ORDER BY content <@> 'search terms'
    • BM25 ranking with configurable parameters (k1, b)
    • Works with Postgres text search configurations (english, french, german, etc.)
    • Supports partitioned tables
    • Goal: state-of-the-art performance and scalability

Add support for PDF generation   code

A lot of legal documents require PDF support. This story keeps track of useful PDF libraries.

Links:

  • GH libharu: "Haru is a free, cross platform, open-sourced software library for generating PDF. It supports the following features."

Add more account commands   code

See the azeroth account commands for inspiration.

Links:

Read up on ECS   analysis

Links:

Consider adding otel support   code

Links:

Add OTLP exporter for OpenTelemetry collector   code

For production observability, export telemetry data to an OpenTelemetry collector using the OTLP protocol.

Tasks:

  • Add OTLP protocol buffer definitions or use existing C++ OTLP library.
  • Implement otlp_log_exporter for log records.
  • Implement otlp_span_exporter for traces.
  • Add configuration for collector endpoint.
  • Consider gRPC vs HTTP transport.
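The exporter tasks above could hang off a small interface so the gRPC vs HTTP decision stays behind it. A minimal sketch, with all names our own invention (real OTLP payloads are protobuf messages; this only shows the seam between pipeline and transport):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical log record; a real exporter would carry OTLP protobufs.
struct log_record {
    std::string severity;
    std::string body;
};

// Transport-agnostic exporter interface.
class otlp_log_exporter {
public:
    virtual ~otlp_log_exporter() = default;
    // Returns true if the batch was accepted by the collector endpoint.
    virtual bool export_batch(const std::vector<log_record>& batch) = 0;
};

// Stub transport; a real one would POST OTLP/HTTP (or use gRPC) to the
// configured collector endpoint.
class http_log_exporter final : public otlp_log_exporter {
public:
    explicit http_log_exporter(std::string endpoint)
        : endpoint_(std::move(endpoint)) {}
    bool export_batch(const std::vector<log_record>& batch) override {
        exported_ += batch.size();  // stand-in for the actual network send
        return true;
    }
    std::size_t exported() const { return exported_; }
private:
    std::string endpoint_;
    std::size_t exported_ = 0;
};
```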

Links:

Add chat support   code

Analysis with Gemini:

Here is the updated Agile Story, refined for a **library-first** architecture. This version ensures the core logic remains independent of any specific UI, allowing you to link it to your Qt trading terminal, a CLI tool, or even an automated LLM service.

---

## **User Story: Modular Binary Chat Library & Event System**

### **1. High-Level Summary**

**As a** system architect,

**I want** to develop a standalone C++ "Chat & Event Engine" library

**So that** real-time communication, system notifications, and LLM interactions
can be shared across our Qt GUI, headless CLI tools, and backend services using
our existing binary protocol.

### **2. Architectural Foundation (The "Engine")**

- **Library Type:** A "headless" C++ shared/static library (`libtradingchat`)
  with **no UI dependencies**.
- **Transport:** **Boost.Asio** for asynchronous TCP/TLS networking.
- **Message Broker:** PostgreSQL **`LISTEN/NOTIFY`** for cross-instance message
  distribution.
- **Concurrency:** The library manages its own `boost::asio::io_context` in a
  background thread to ensure networking never blocks the consumer's UI thread.

### **3. Core Identity & Payload Design**

- **Universal Identity:** Support for three `UserTypes`: `Human`, `System`, and `LLM`.
- **Flexible Payloads:**
  - `CHAT_MSG`: Standard binary-serialized text.
  - `SYSTEM_ALERT`: High-priority event notifications.
  - `RICH_DATA`: Raw binary blobs for charts, order book snapshots, etc.

- **The "Observer" Interface:** The library will use a **Callback API** (via
  `std::function` or an abstract `IChatObserver` class) to notify consumers of
  events like `onMessage`, `onStatusChange`, and `onHistoryLoaded`.

### **4. Technical Tasks for Implementation**

#### **A. Database & Backend (Postgres)**

- Create a schema that stores messages in a `BYTEA` column to preserve your
  bespoke binary format.
- Implement a trigger that notifies a channel based on `RoomID`.
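A minimal sketch of such a trigger, assuming a `chat_messages` table with `id` and `room_id` columns (names hypothetical); receivers get the row id on a per-room channel and fetch the binary payload themselves:

```sql
-- Notify listeners on a per-room channel; the notification payload is the
-- new row's id, keeping the bespoke binary message in the BYTEA column.
CREATE FUNCTION notify_chat_room() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('chat_room_' || NEW.room_id::text, NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER chat_message_notify
AFTER INSERT ON chat_messages
FOR EACH ROW EXECUTE FUNCTION notify_chat_room();
```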

#### **B. Core Library (Plain C++17/20)**

- **Protocol Parser:** Logic to wrap/unwrap your bespoke binary headers.
- **The Manager:** A `ChatClient` class that handles the connection lifecycle.
- **Notification Listener:** A dedicated Postgres connection using `libpqxx` to
  monitor the `LISTEN` stream.
- **History Fetcher:** Logic to request and deserialize "scroll-back" messages
  from Postgres.

#### **C. Consumers (The Frontends)**

- **Qt Bridge:** A `QObject` wrapper that consumes the library's callbacks and
  emits Qt Signals (to be handled by QML or Widgets).
- **CLI Tool:** A lightweight consumer that prints incoming binary data to
  `stdout` for debugging and automated monitoring.

### **5. Acceptance Criteria**

- [ ] **Modular Testing:** The core library can be compiled and tested using a
  CLI-only test runner (no Qt required).
- [ ] **Binary Integrity:** Bespoke binary packets remain bit-perfect from the
  sender, through the Postgres `BYTEA` storage, to the receiver.
- [ ] **Thread Safety:** Messages received on the Boost.Asio thread are safely dispatched to the consumer via callbacks.
- [ ] **LLM Integration:** An LLM service can consume this library to "listen"
  and "reply" as a Type 2 user.

Links:

Setup code quality actions   infra

We added a test password to the repo on purpose, to see whether it would be detected by the GitHub actions:

std::string connection_string("postgresql://ores:ores@localhost:5433/oresdb");

It wasn't. We need to figure out which actions need to be set up for this, and add any other actions we may be missing.

The build seems to be failing:

-- SCCache NOT found.
 CMake Error at /usr/local/share/cmake-3.30/Modules/CMakeDetermineSystem.cmake:152 (message):
   Could not find toolchain file:
   /home/runner/work/OreStudio/OreStudio/vcpkg/scripts/buildsystems/vcpkg.cmake
 Call Stack (most recent call first):
 CMakeLists.txt:61 (project)


 CMake Error: CMake was unable to find a build program corresponding to "Unix Makefiles".  CMAKE_MAKE_PROGRAM is not set.  You probably need to select a different build tool.
 CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
 -- Configuring incomplete, errors occurred!
 ~/work/OreStudio/OreStudio ~/work/OreStudio/OreStudio
 ~/work/OreStudio/OreStudio
 cpp/autobuilder: No supported build command succeeded.
 cpp/autobuilder: autobuild summary.
 Error: We were unable to automatically build your code. Please replace the call to the autobuild action with your custom build steps. Encountered a fatal error while running "/opt/hostedtoolcache/CodeQL/2.18.0/x64/codeql/cpp/tools/autobuild.sh". Exit code was 1 and last log line was: cpp/autobuilder: autobuild summary. See the logs for more details.

This may be due to a missing submodule for vcpkg.

Consider using getML to integrate ML   code

Links:

  • GH: getml: "getML is a tool for automating feature engineering on relational data and time series. It includes a specifically customized database Engine for this very purpose."
  • user guide quick start

Configure postgres with async IO   code

Consider using sqls for LSP   code

We are presently testing postgrestools. If that does not work well, we should consider sqls.

Links:

Consider adding the update copyrights action from quantlib   infra

We should remove copyrights from each file and instead have them only at the top level, to make maintenance easier.

See copyrights.yml in QuantLib repo.

Consider adding clang-tidy build   infra

As per QuantLib build: tidy.yml.

Consider adding test times build   infra

As per QuantLib build: test-times.yml.

Consider adding sanitizer build   infra

As per QuantLib build: sanitizer.yml.

Investigate git UIs for history display   infra

Links:

  • Gittyup: "Gittyup is a graphical Git client designed to help you understand and manage your source code history."

Investigate GoldenCheetah for ideas on graph displays   code

Seems particularly useful for report overviews and headline positions.

Links:

  • GH GoldenCheetah: "GoldenCheetah is a desktop application for cyclists and triathletes and coaches."

Investigate the toolbar icon style in serial studio   code

Investigate the graphs used in dirstat   code

Seems like a good way of showing deal contribution to position.

Links:

Add discord support to app   infra

It would be nice if we could post messages from the app into discord.

Links:

Consider adding support for A/B testing   code

At present feature flags are global. However, we may want to enable a feature for a subset of the population. Analysis:

To support **gradual rollout (ramping)** of features—e.g., enabling a feature
for 10% of users, then 50%, then 100%—you need to extend your feature flag
system beyond a simple boolean toggle. The current temporal table design
(`feature_flags` with validity periods) is great for time-based control, but
**not sufficient for user-based targeting**.

Here’s a practical, scalable approach that integrates well with your C++23/Qt/PostgreSQL stack:

---

### 🎯 Goal
Enable a feature **conditionally** based on:
- User identity (e.g., user ID, tenant ID, or client ID)
- A **rollout percentage** (e.g., 10%)
- Optional: user attributes (role, region, etc.)

---

### ✅ Step 1: Enhance the Feature Flag Schema

Add **rollout strategy metadata** to your `feature_flags` table:

```sql
ALTER TABLE oresdb.feature_flags
ADD COLUMN rollout_percentage INTEGER CHECK (rollout_percentage BETWEEN 0 AND 100) DEFAULT 100,
ADD COLUMN targeting_criteria JSONB; -- optional, for advanced rules
```

Now each flag version can specify:
- `rollout_percentage = 10` → enable for ~10% of users
- `targeting_criteria = '{"role": "beta_tester"}'` → (future extensibility)

> Keep your temporal primary key and exclusion constraint — they still apply.

---

### ✅ Step 2: Determine a Stable User Identifier

Your client must provide a **consistent, hashable ID** per user or session, such as:
- User ID (if authenticated)
- Device ID / client UUID (if anonymous)
- Tenant ID (for multi-tenant apps)

This ID must be **stable** across sessions so the same user doesn’t see the feature flicker on/off.

> 💡 In your Qt client, generate or retrieve this ID at startup and pass it to the feature flag service.

---

### ✅ Step 3: Implement Consistent Hash-Based Sampling in C++

Use a **deterministic hash** to map user IDs into a 0–99 bucket:

```cpp
#include <functional>
#include <string>
#include <string_view>

bool should_enable_for_user(std::string_view feature_name,
                            std::string_view user_id,
                            int rollout_percentage) {
    if (rollout_percentage <= 0) return false;
    if (rollout_percentage >= 100) return true;

    // Combine feature name + user ID to avoid correlation across features
    std::string key = std::string(feature_name) + ":" + std::string(user_id);

    // Use a stable hash (e.g., std::hash, but note: not guaranteed consistent across runs)
    // For production, prefer a consistent hash like FNV-1a or CityHash
    auto hash = std::hash<std::string>{}(key);
    int bucket = static_cast<int>(hash % 100);
    return bucket < rollout_percentage;
}
```

> ⚠️ **Warning**: `std::hash` is **not guaranteed to be stable across program runs or platforms**.
> For reliable rollout, use a **stable hashing algorithm** like **FNV-1a**, **MurmurHash3**, or **xxHash**.

Example with a simple FNV-1a (C++23-friendly):

```cpp
#include <cstdint>
#include <string>
#include <string_view>

constexpr uint32_t fnv1a_32(std::string_view s) {
    uint32_t hash = 2166136261u; // FNV offset basis
    for (char c : s) {
        hash ^= static_cast<uint8_t>(c);
        hash *= 16777619u; // FNV prime
    }
    return hash;
}

bool should_enable_for_user(std::string_view feature_name,
                            std::string_view user_id,
                            int rollout_percentage) {
    if (rollout_percentage <= 0) return false;
    if (rollout_percentage >= 100) return true;

    std::string key{feature_name};
    key += ':';
    key += user_id;

    uint32_t hash = fnv1a_32(key);
    return (hash % 100) < static_cast<uint32_t>(rollout_percentage);
}
```

This ensures:
- Same user → same decision
- Uniform distribution across users
- Independent per-feature rollout

---

### ✅ Step 4: Update Your Feature Flag Service

Modify your C++ `FeatureFlagService`:

```cpp
class FeatureFlagService {
public:
    // Returns true if the feature is enabled AND user qualifies
    bool is_enabled_for_user(Feature f, std::string_view user_id) const;

private:
    struct FlagConfig {
        bool enabled;
        int rollout_percentage; // 0–100
        // add targeting_criteria later if needed
    };
    std::unordered_map<Feature, FlagConfig> cache_;
};
```

Implementation:

```cpp
bool FeatureFlagService::is_enabled_for_user(Feature f, std::string_view user_id) const {
    auto it = cache_.find(f);
    if (it == cache_.end() || !it->second.enabled) {
        return false;
    }
    return should_enable_for_user(to_string(f), user_id, it->second.rollout_percentage);
}
```

---

### ✅ Step 5: Integrate in Qt UI

```cpp
QString userId = getCurrentUserId(); // from auth, settings, etc.

if (featureFlagService.is_enabled_for_user(Feature::ExperimentalUI, userId.toStdString())) {
    ui->experimentalPanel->show();
}
```

---

### ✅ Optional: Advanced Targeting (Future-Proofing)

Use the `targeting_criteria JSONB` column to store rules like:
```json
{
  "min_version": "2.1.0",
  "regions": ["EU", "NA"],
  "roles": ["admin", "beta"]
}
```

Then evaluate these in C++ (or push filtering to DB with JSONB queries if you fetch per-user flags).

But for **simple ramp-up**, percentage-based hashing is sufficient and efficient.

---

### ✅ Operational Benefits

- **No client-side config changes** needed to ramp from 10% → 50% → 100% — just update the DB.
- **Auditability**: Every rollout change is versioned in your temporal table.
- **Safety**: You can instantly roll back by setting `rollout_percentage = 0`.

---

### Summary

| Component           | Change |
|---------------------|--------|
| **Database**        | Add `rollout_percentage INT` to `feature_flags` |
| **C++ Service**     | Add `is_enabled_for_user(feature, user_id)` |
| **Hashing**         | Use stable hash (e.g., FNV-1a) over `feature:user_id` |
| **Qt UI**           | Pass user ID to flag checks |

This gives you **safe, gradual, observable rollouts** while keeping your architecture clean and performant.

Docker setup   code

It would be nice to be able to just do a docker pull and get a ready-to-use environment, with database, services, etc., for a given version.
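A sketch of what this could look like with docker compose; the app image name and tags are placeholders, and the database credentials mirror the dev setup (not suitable for production):

```yaml
# Hypothetical compose sketch: the orestudio image name is a placeholder.
services:
  db:
    image: postgres:17
    environment:
      POSTGRES_USER: ores
      POSTGRES_PASSWORD: ores
      POSTGRES_DB: oresdb
    ports:
      - "5433:5432"   # matches the dev connection string's port
  app:
    image: orestudio/orestudio:latest   # placeholder image
    depends_on:
      - db
```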

LLM Integration   code

LLMs can be useful when learning a new subject, as they can provide additional context for the information displayed on the screen. For example, a user can ask the LLM to explain a graph or a table. It would probably be fairly straightforward to dump some of the information in an LLM-friendly format (e.g. PNG, Markdown, plain text) and then make an API call to a local or remote LLM. We could also create a set of useful canned prompts (explain this report, explain this chart).

On a more blue-skies approach, one could conceive of asking the LLM for suggestions on how to act on the basis of the analysis. This could result in suggested actions the user could implement, or even in actions taken directly on the LLM's advice. This is conceptually straightforward: the LLM could, for example, generate a well-defined JSON payload describing the proposed action, and the system would look for predefined markers in the LLM output:

----- ACTION START
<JSON>
----- ACTION END

The JSON payload would describe the action:

{
    "action": "some_action_type",
    "key1": "value1",
    ...
}

A trivial lookup table could de-serialise the JSON and execute the action. All that is required is for the LLM to "learn" to generate JSON compliant with the desired format, which should be quite straightforward (perhaps with the help of fine-tuning). Agents probably provide most of this infrastructure already. The key thing is to ensure all core functionality becomes UI-agnostic, such that one could bolt an NLP UI around it.
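The lookup-table idea can be sketched with a plain map from action name to handler. Here the parsed JSON is stood in for by a string map, since the JSON library choice is open (nlohmann::json would be a natural fit); all names are hypothetical:

```cpp
#include <functional>
#include <map>
#include <string>
#include <unordered_map>
#include <utility>

// Stand-in for a parsed JSON action payload.
using action_payload = std::map<std::string, std::string>;
using action_handler = std::function<void(const action_payload&)>;

class action_dispatcher {
public:
    void register_action(std::string name, action_handler h) {
        handlers_[std::move(name)] = std::move(h);
    }
    // Looks up the "action" key and runs the matching handler; returns
    // false for unknown or malformed actions instead of executing anything.
    bool dispatch(const action_payload& payload) {
        const auto it = payload.find("action");
        if (it == payload.end()) return false;
        const auto h = handlers_.find(it->second);
        if (h == handlers_.end()) return false;
        h->second(payload);
        return true;
    }
private:
    std::unordered_map<std::string, action_handler> handlers_;
};
```

Rejecting unknown actions by default keeps the LLM on a short leash: only explicitly registered action types can ever be executed.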

Links:

Support multiple ORE "toolchains"   code

Much like with an IDE, where one can have multiple toolchains configured, we need to support multiple versions of ORE. Unlike with IDEs, it may be desirable to run computations with more than one version of ORE for comparison purposes, which means we need a way to associate outputs with their ORE version. This does not necessarily fit the existing example code, which has a single "output directory". However, we just need a way to associate N toolchains with a given workspace, or possibly component; when present, the output directory starts to reflect the toolchain configuration. For example, with CMake we use presets:

  • linux-clang-debug
  • linux-clang-release
  • linux-gcc-debug
  • linux-gcc-release

For ORE, the only dimension along which variability is possible is the version. We can then have pricing engine configurations that are either the same or possibly different:

  • for a workspace;
  • for a component;
  • for a toolchain version.

In addition, we want to support multiple versions:

  • Nightly / Latest.
  • Release vX.
  • AAD version.
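By analogy with the CMake presets above, one way to make outputs reflect the toolchain is to fold the ORE version into the output path. A minimal sketch, with invented names:

```cpp
#include <string>

// Hypothetical toolchain descriptor: for ORE the only axis of variability
// is the version (e.g. "nightly", "v1.8", "v1.8-aad").
struct ore_toolchain {
    std::string version;
};

// Mirror CMake presets: outputs land under a per-toolchain directory so
// runs against different ORE versions can coexist and be compared.
std::string output_directory(const std::string& workspace_root,
                             const ore_toolchain& tc) {
    return workspace_root + "/output/ore-" + tc.version;
}
```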

Add skill to review product backlog   infra

We could get Claude to review the stories, add more detail, etc. Needs more thinking.

Add skill to update vcpkg   infra

As per instructions in readme.

Investigate virtual portfolio arbitrary hierarchy construction   refdata

The current portfolio explorer and portfolio data model may not correctly support arbitrary virtual portfolio hierarchies. Virtual portfolios are reporting overlays that can aggregate books from any real portfolio, but parent_portfolio_id on the portfolio table creates a strict tree, which may prevent users from constructing the hierarchy they want (e.g. selecting the correct parent in the UI when many portfolios exist). Investigate and fix:

  • Determine whether parent_portfolio_id is sufficient for virtual portfolio membership or whether a separate junction table is needed.
  • Fix any UI issues in PortfolioExplorerMdiWindow that prevent users from correctly selecting a parent portfolio when building the hierarchy.
  • Ensure virtual portfolio nodes display correctly in the portfolio tree.
  • Consider whether virtual portfolios should be allowed to cross legal entity boundaries.
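If the junction-table route wins, a minimal sketch (table and column names hypothetical, including the portfolio primary key) would decouple virtual membership from the strict parent tree:

```sql
-- Virtual membership as a many-to-many overlay: a portfolio can appear
-- under any number of virtual portfolios without touching
-- parent_portfolio_id, which keeps modelling the real hierarchy.
CREATE TABLE virtual_portfolio_member (
    virtual_portfolio_id INTEGER NOT NULL REFERENCES portfolio (portfolio_id),
    member_portfolio_id  INTEGER NOT NULL REFERENCES portfolio (portfolio_id),
    PRIMARY KEY (virtual_portfolio_id, member_portfolio_id)
);
```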

Footer

Previous: Agile

Emacs 29.1 (Org mode 9.6.6)