(cloud) running balance update way more streamlined now
it looks like there are a bunch of orphaned customizations for accounts, breaking indexes
upsert-ledger: a matching transaction rule might not assign an account, and other code paths might not assign accounts either. There is an assertion for this that is commented out; determine the consequences of disabling it.

Fix searching
* indexing should happen more regularly, and just look for changes since last time it was run
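A minimal sketch of that incremental approach (all names here are illustrative, not the real indexing code): keep a high-water mark from the last run and only index entities that changed after it.

```clojure
;; Illustrative sketch of incremental indexing via a high-water mark.
;; `changes-since` and `index!` stand in for the real change feed and
;; search-index writer, which live elsewhere in this codebase.
(defonce last-indexed-t (atom 0))

(defn index-changes!
  [changes-since index!]
  (let [changed (changes-since @last-indexed-t)]
    (run! index! changed)
    (when (seq changed)
      ;; advance the mark so the next run only sees newer changes
      (reset! last-indexed-t (apply max (map :t changed))))))
```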
Double check each job still functions in the new system
Reconcile ledger. Does it work? What are the downsides? Can it be made faster now?
Make reports just be based on running-balances
When you add a vendor, it should be searchable immediately
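A sketch of one way to get that behavior (hypothetical names, not the actual save path): index the vendor synchronously as part of the save, instead of waiting for the next indexing run.

```clojure
;; Sketch: make a newly added vendor searchable immediately by
;; writing to the search index in the same call that saves it.
;; `db` and `index` are illustrative atoms standing in for real storage.
(defn save-vendor!
  [db index vendor]
  (swap! db assoc (:id vendor) vendor)
  (swap! index conj (:name vendor))   ; synchronous index write
  vendor)
```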
Running Balance Cache
* Add tests for upsert-ledger
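For reference, the core of a running-balance cache can be sketched like this (illustrative names, not the actual upsert-ledger implementation): each transaction's cached value is the cumulative per-account sum up to and including it.

```clojure
;; Illustrative running-balance computation: pair each transaction,
;; in date order, with the cumulative balance for its account.
(defn running-balances
  [txns]
  (->> txns
       (reductions (fn [{:keys [balances]} {:keys [account amount] :as txn}]
                     (let [bal (+ (get balances account 0M) amount)]
                       {:balances (assoc balances account bal)
                        :txn      (assoc txn :running-balance bal)}))
                   {:balances {}})
       rest                ; drop the empty seed accumulator
       (map :txn)))
```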
Address memory usage
* JVM settings now and in prod
Release steps:
Stop prod
Make database snapshot
Create new database for prod-cloud (just called prod)
Restore database
Transact new schema
(reset-client+account+location+date)
(force-rebuild-running-balance-cache)
Merge branch into master
Rename prod-cloud to prod everywhere
Release again

Sanity checks later:
* Run query
Future improvements:
Make reports just be based on running-balances
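The point of that item, sketched with hypothetical names: a balance-as-of report becomes a lookup of the last cached running balance at or before the date, instead of a sum over all transactions.

```clojure
;; Sketch: an as-of balance read straight off cached running balances.
;; Assumes txns are sorted by :date and already carry :running-balance.
(defn balance-as-of
  [txns date]
  (or (->> txns
           (take-while #(<= (compare (:date %) date) 0))
           last
           :running-balance)
      0M))
```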
Just use a periodic request or event instead of a job for running balance cache, and perhaps others too
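One shape that replacement could take (the interval and names are assumptions, not the current job code): a plain scheduled executor that refreshes the running-balance cache on a fixed period.

```clojure
;; Sketch: run a cache refresh on a fixed schedule via
;; java.util.concurrent instead of the job machinery.
(import '[java.util.concurrent Executors TimeUnit])

(defn schedule-refresh!
  [refresh-fn interval-seconds]
  ;; returns the scheduler so callers can shut it down later
  (doto (Executors/newSingleThreadScheduledExecutor)
    (.scheduleAtFixedRate refresh-fn 0 interval-seconds TimeUnit/SECONDS)))
```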
get rid of account-groups
move to Solr
upsert-entity: look at how address works on client save. There's a good chance that saving a rel with only a temp id should just resolve it to null.

Release steps:
Set prod web workers to 0
Make database snapshot (run export-job)
(ecs/run-task
 (cond-> {:capacity-provider-strategy [{:base 1 :weight 1 :capacity-provider "FARGATE_SPOT"}]
          :count 1
          :cluster "default"
          :enable-ecs-managed-tags true
          :task-definition "XXX"
          :network-configuration
          {:aws-vpc-configuration
           {:subnets ["subnet-5e675761" "subnet-8519fde2" "subnet-89bab8d4"]
            :security-groups ["sg-004e5855310c453a3" "sg-02d167406b1082698"]
            :assign-public-ip AssignPublicIp/ENABLED}}}
   true (assoc-in [:overrides :container-overrides]
                  [{:name "integreat-app"
                    :environment [{:name "args"
                                   :value (pr-str {:backup "63646188-90cd-4cec-a115-feeb7e33d54d"
                                                   :starting-at "sales-order"})}]}])))

Create new database for prod-cloud (just called prod)
(dc/create-database conn {:db-name "prod"})
Set this in the prod-cloud config file and prod-cloud-background-worker
Restore database
(ecs/run-task
 (cond-> {:capacity-provider-strategy [{:base 1 :weight 1 :capacity-provider "FARGATE_SPOT"}]
          :count 1
          :cluster "default"
          :enable-ecs-managed-tags true
          :task-definition "restore_from_backup_prod_cloud:3"
          :network-configuration
          {:aws-vpc-configuration
           {:subnets ["subnet-5e675761" "subnet-8519fde2" "subnet-89bab8d4"]
            :security-groups ["sg-004e5855310c453a3" "sg-02d167406b1082698"]
            :assign-public-ip AssignPublicIp/ENABLED}}}
   true (assoc-in [:overrides :container-overrides]
                  [{:name "integreat-app"
                    :environment [{:name "args"
                                   :value (pr-str {:backup "63646188-90cd-4cec-a115-feeb7e33d54d"
                                                   :starting-at "sales-order"})}]}])))

Merge branch into master
Rename prod-cloud to prod everywhere
Release again
git push deploy master