Greetings! We are doing some testing with the community edition of InfiniDB. We've found good performance on insert-only loads (nearly 800-1000 rows/second) and good query performance.
However, our data integration architecture involves capturing changes on the source databases and incrementally applying those changes on the target via "upsert" functionality. In other words, we first attempt an update; if no rows were updated, we perform an insert. A fairly simple approach.
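To make the pattern concrete, here is a minimal sketch of the update-then-insert logic, using Python's sqlite3 module as a stand-in for the actual target database (the table name `t` and columns `id`/`value` are placeholders, not our real schema):

```python
import sqlite3

def upsert(conn, key, value):
    # Step 1: attempt the update first.
    cur = conn.execute("UPDATE t SET value = ? WHERE id = ?", (value, key))
    # Step 2: if no rows matched, fall back to an insert.
    if cur.rowcount == 0:
        conn.execute("INSERT INTO t (id, value) VALUES (?, ?)", (key, value))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, value TEXT)")
upsert(conn, 1, "a")   # no matching row yet -> insert path
upsert(conn, 1, "b")   # row exists -> update path
rows = conn.execute("SELECT id, value FROM t").fetchall()
# rows is [(1, 'b')]
```

The cost of this approach is that every changed row pays for a keyed UPDATE lookup, which is where an engine without indexes suffers.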
We've seen upsert performance of about 1-2 rows/second. :ohmy: Essentially, it takes us about as long to process 1000 changed rows as it does to drop and re-insert 1,000,000+ rows. Presumably, this is because InfiniDB does not currently support primary or unique keys, nor any explicitly created indexes, so each UPDATE requires a full scan.
Is this supported in the commercial edition? If not, is such support planned for a future release?
Thanks and best regards,