This repository has been archived by the owner on Aug 12, 2022. It is now read-only.

Ensure CQ_ID is unique Across entire fetch #251

Open
bbernays opened this issue May 13, 2022 · 2 comments
@bbernays
Contributor

In order to decouple database write issues from API/resolving issues, we need the SDK to keep track of all CQ_IDs that are inserted, to ensure that every cq_id (at a top-level table) is unique. When a cq_id is not unique, we can be 100% sure it is a fetching problem rather than a database problem.
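As a minimal sketch of the bookkeeping described above (hypothetical names; this is not the SDK's actual API), a tracker could remember every cq_id seen for top-level tables during a fetch and flag repeats as fetch-side duplicates:

```go
package main

import (
	"fmt"
	"sync"
)

// uniqueTracker is a hypothetical sketch: it records every cq_id inserted
// during a single fetch so duplicates can be attributed to fetching,
// not to database writes.
type uniqueTracker struct {
	mu   sync.Mutex
	seen map[string]struct{}
}

func newUniqueTracker() *uniqueTracker {
	return &uniqueTracker{seen: make(map[string]struct{})}
}

// record returns false if cqID was already seen during this fetch.
func (t *uniqueTracker) record(cqID string) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	if _, ok := t.seen[cqID]; ok {
		return false
	}
	t.seen[cqID] = struct{}{}
	return true
}

func main() {
	t := newUniqueTracker()
	fmt.Println(t.record("id-1")) // first occurrence: unique
	fmt.Println(t.record("id-1")) // repeat: a fetching problem
}
```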

@erezrokah
Contributor

Could we make cq_id always a new UUID instead of a hash on primary keys? What would be the downside of that?
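To make the trade-off concrete, here is a small sketch (stdlib only; function names are illustrative, not the SDK's) contrasting a deterministic PK-hash id with a random one. The hash-based id maps the same resource to the same cq_id across fetches, which is what makes PK duplicates visible; a random id is unique by construction but no longer reproducible:

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// hashBasedID sketches the current approach: derive the id deterministically
// from the primary-key values, so identical PKs produce identical ids
// (and genuine PK duplicates collide, surfacing the fetch bug).
func hashBasedID(pks ...string) string {
	h := sha256.New()
	for _, pk := range pks {
		h.Write([]byte(pk))
	}
	return fmt.Sprintf("%x", h.Sum(nil)[:16])
}

// randomID sketches the alternative: a fresh random 128-bit value per
// resource, unique by construction but not stable across fetches.
func randomID() string {
	b := make([]byte, 16)
	rand.Read(b)
	return fmt.Sprintf("%x", b)
}

func main() {
	fmt.Println(hashBasedID("pk-1") == hashBasedID("pk-1")) // deterministic
	fmt.Println(randomID() == randomID())                   // always fresh
}
```

One plausible downside of the random approach, then, is exactly the visibility this issue asks for: with random ids, two resources fetched with the same primary keys would no longer collide on cq_id, so the duplicate would slip through silently.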

@shimonp21
Contributor

Adding more context after a discussion with @zagronitay and a more careful reading of the code.

There was a concern that PK issues would be hidden in light of #266, but looking at the code, if saveToStorage receives a list with duplicate PKs, we will see a duplicate-PK error:

  • diags := diag.Diagnostics{}.Add(fromError(err, diag.WithType(diag.DATABASE), diag.WithSummary("failed bulk insert on table %q", e.Table.Name)))
    • If we get resources with the same PK, "copyFrom" will fail, "bulk-insert" will also fail due to the duplicate PK, and we will get that diag sent to us. The element-by-element insert will succeed (only one of the items will be in the final table).
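The fallback chain above can be sketched with an in-memory stand-in for the table (illustrative only; the real path lives in execution.go and talks to Postgres): the batched insert is all-or-nothing and raises the duplicate-PK diagnostic, while the per-row fallback silently keeps only one of the duplicates.

```go
package main

import (
	"errors"
	"fmt"
)

// resource stands in for a fetched row; pk is its primary key.
type resource struct {
	pk    string
	value string
}

var errDuplicatePK = errors.New("duplicate primary key")

// bulkInsert mimics the all-or-nothing path: the whole batch fails if any
// two resources share a PK, which is what surfaces the duplicate-PK diag.
func bulkInsert(table map[string]string, batch []resource) error {
	seen := map[string]bool{}
	for _, r := range batch {
		if seen[r.pk] {
			return fmt.Errorf("bulk insert: %w (%s)", errDuplicatePK, r.pk)
		}
		seen[r.pk] = true
	}
	for _, r := range batch {
		table[r.pk] = r.value
	}
	return nil
}

// insertOneByOne mimics the element-by-element fallback: each conflicting
// row replaces the previous one, so the batch "succeeds" but only one of
// the duplicates survives in the final table.
func insertOneByOne(table map[string]string, batch []resource) {
	for _, r := range batch {
		table[r.pk] = r.value
	}
}

func main() {
	batch := []resource{{"sub-1", "a"}, {"sub-1", "b"}}
	table := map[string]string{}
	if err := bulkInsert(table, batch); err != nil {
		fmt.Println(err) // the duplicate-PK error the diag reports
		insertOneByOne(table, batch)
	}
	fmt.Println(len(table)) // 1: only one of the duplicates remains
}
```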

To summarise, my working theory is that the duplicate-PK issue in aws_sns_subscriptions is a real issue (not a bug in execution.go).
