Comments are welcome; we want to get to implementation next week
Notes
Decentralized Lot Management
The current implementation manages items and lots centrally; one admin right covers both.
Data quality was the main reason for choosing this approach.
Angola needs:
not centralized
each user managing stock at a facility can manage lots as they go through the physical inventory and receive processes
search first, then add the lot on the fly if it doesn't exist
Need something before next core release
What's the minimum working version that would be accepted in core?
a configuration option to select either centralized or decentralized lot management
keep the boundaries between reference data and stock management correct
in Angola it would be a UI and rights change (a split); in core we'd likely want something more comprehensive
Lots would need to be created before a stock event can be fully processed, since a lot ID is part of the stock event data
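The search-first, create-if-missing flow gated by a configuration option could look roughly like the sketch below. All names here (LotRepository, resolve_lot, ALLOW_DECENTRALIZED_LOTS) are hypothetical illustrations, not actual OpenLMIS identifiers.

```python
# Sketch of the proposed flow: search for the lot first, create it on
# the fly only if decentralized lot management is enabled. A lot id
# must exist before the stock event can be processed, since it is part
# of the stock event data.

ALLOW_DECENTRALIZED_LOTS = True  # proposed configuration option


class LotRepository:
    """In-memory stand-in for the lot reference data store."""

    def __init__(self):
        self._lots = {}  # (trade_item_id, lot_code) -> lot id

    def find(self, trade_item_id, lot_code):
        return self._lots.get((trade_item_id, lot_code))

    def create(self, trade_item_id, lot_code):
        lot_id = f"lot-{len(self._lots) + 1}"
        self._lots[(trade_item_id, lot_code)] = lot_id
        return lot_id


def resolve_lot(repo, trade_item_id, lot_code):
    """Return an existing lot id, or create the lot if allowed."""
    lot_id = repo.find(trade_item_id, lot_code)
    if lot_id is not None:
        return lot_id
    if not ALLOW_DECENTRALIZED_LOTS:
        raise PermissionError("Lot creation is centralized; ask an admin.")
    return repo.create(trade_item_id, lot_code)


repo = LotRepository()
print(resolve_lot(repo, "amox-500", "LOT-2024-01"))  # created: lot-1
print(resolve_lot(repo, "amox-500", "LOT-2024-01"))  # found:   lot-1
```

With the flag off, the same call would raise instead of creating, which is one way the centralized/decentralized boundary could be kept in one code path.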
Strategy for de-duplication:
a report for management to find duplicates
Stock events and stock line items are written in stone; they can't change. Keep in mind that UPDATE is not allowed on certain tables.
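Since stock events are append-only, the de-duplication strategy is a report that surfaces duplicate lots for management rather than an in-place merge. A minimal sketch of such a report, with invented field names rather than a real schema:

```python
# Sketch of a duplicate-lot report: group lots by trade item and a
# normalized lot code, and return any group with more than one entry
# as a duplicate candidate. Field names are illustrative only.
from collections import defaultdict


def duplicate_lot_report(lots):
    """Return {(trade_item_id, normalized_code): [lot ids]} for dupes."""
    groups = defaultdict(list)
    for lot in lots:
        key = (lot["trade_item_id"], lot["lot_code"].strip().upper())
        groups[key].append(lot["id"])
    return {key: ids for key, ids in groups.items() if len(ids) > 1}


lots = [
    {"id": "a", "trade_item_id": "amox-500", "lot_code": "LOT-01"},
    {"id": "b", "trade_item_id": "amox-500", "lot_code": "lot-01 "},
    {"id": "c", "trade_item_id": "amox-500", "lot_code": "LOT-02"},
]
print(duplicate_lot_report(lots))  # {('amox-500', 'LOT-01'): ['a', 'b']}
```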
Performance tests
They've been failing for a long time.
When a test starts to fail, it could be a problem starting reference data, a red herring, or an actual failure to meet the criteria.
Functional tests don't seem to have the same issues.
There aren't many days when the performance tests pass; see link above.
The results often vary wildly between runs.
What we've done:
warmup phase
What value do we have for these tests?
Perhaps none
The small bumps every 5 or 10 builds add up, so the test suite fails almost all the time.
To make them valuable:
We could keep playing the raise-the-limit game
We could look at trend lines instead of pass/fail indicators
We want to revisit this topic; we'll create a ticket and brainstorm ideas for improving reliability
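One version of the trend-line idea: instead of a fixed pass/fail limit that small per-build bumps eventually push over, fit a slope to the recent run times and flag only a sustained upward trend. The function names and the threshold below are illustrative, not an agreed design.

```python
# Sketch of trend-based detection: least-squares slope over a window of
# recent run times, flagged only when the slope exceeds a threshold.
def slope(values):
    """Least-squares slope of values against run index 0..n-1."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den


def trending_worse(run_times_ms, max_slope=5.0):
    """Flag a regression only if times rise faster than max_slope
    milliseconds per run across the window."""
    return slope(run_times_ms) > max_slope


noisy_but_flat = [820, 790, 845, 800, 830, 805]
steady_regression = [800, 830, 860, 890, 930, 960]
print(trending_worse(noisy_but_flat))     # False: noise, no trend
print(trending_worse(steady_regression))  # True: sustained climb
```

The appeal over a fixed limit is that run-to-run noise averages out of the slope, while a genuine regression shows up even before any single run crosses an absolute threshold.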
Casper
Migration from v2 to v3 over a long period of time, so that users don't face a hard switchover point
Focused on moving Requisitions between systems
One-way data pipeline
Could this go both ways? Yes; the main work is building the transformation steps in NiFi, so it's quite possible.
Another use case could be:
when onboarding to v3, we could use a two-way flow that lets users start using v3 and fall back to v2 if they encounter a bug.
Using data pumps and NiFi
Kafka
Debezium
NiFi
Kafka Connect
We saw data streaming out of v2 and into v3 in near real time.
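The heart of the pipeline is the transformation step applied to each change event on its way from v2 to v3. A minimal sketch of what such a mapping might look like; every field name here is invented for illustration, and the real v2/v3 requisition schemas will differ.

```python
# Sketch of a one-way transformation step, of the kind the NiFi flow
# applies to change events captured by Debezium: map a v2 requisition
# row into a v3-style document. Field names are hypothetical.
def transform_v2_to_v3(v2_row):
    """Map a v2 requisition change record into a v3-shaped dict."""
    return {
        "id": v2_row["requisition_id"],
        "facilityId": v2_row["facility_code"],
        "programId": v2_row["program_code"],
        "status": v2_row["status"].upper(),
        "emergency": bool(v2_row.get("is_emergency", 0)),
    }


v2_event = {
    "requisition_id": "R-1001",
    "facility_code": "F-17",
    "program_code": "essential-meds",
    "status": "submitted",
    "is_emergency": 1,
}
print(transform_v2_to_v3(v2_event)["status"])  # SUBMITTED
```

A reverse (v3 to v2) mapping of the same shape is what a two-way flow would add; as noted above, that is mostly more transformation work rather than new infrastructure.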
Want to move this tech into our reporting stack
How resilient is the tech? Is it mature?
Restarting the stack works; individual pieces can be brought down without the whole thing falling over.