
Meeting Notes 2022 10 03

Elias Rohrer edited this page Oct 3, 2022 · 1 revision

Releases

  • 0.0.111
    • Elias is fixing a compilation error here; waiting on some CI coverage, but it’s close
    • Matt: question: should we do a faster 0.0.112 if someone’s trying to use the “feature” feature? Because it’s not working right now, specifically for the Rust async stuff
    • The actual feature object can be used without the Rust async stuff; it just does a callback. So that works fine for bindings
    • Bindings not done; the C bindings should be done, but Matt is still working on the Java bindings
    • There’s a PR for the Java bindings; it would be nice if someone took a look at it. Mostly done.
    • Swift side: the Swift code itself technically works and runs in Xcode, but compilation of the bindings is not working. Trying to find a combination of macOS, Xcode, etc. that will allow Arik to compile for Catalyst
    • Jurvis: I can pair with you, Arik
  • 0.0.112 (https://github.com/lightningdevkit/rust-lightning/milestone/29)
    • Maybe we should push this out quickly if someone’s waiting on it
    • But if no one’s using it … it was kind of added for Sensei and …
    • John Cantrell: not a huge blocker for me. BDK has been blocking for two releases, but I would use it
    • The Lexe client may also want it, but unless they complain …
  • 0.1 (https://github.com/lightningdevkit/rust-lightning/milestone/1)
    • Matt: my thinking is that once we’re happy with the stability of all the features we have, we call it 0.1. The biggest blocker is the async persist stuff; we kind of need to rewrite a big chunk of it. One PR has landed, with a number left to go. The thinking was always: once we get that stuff done, we just call it 0.1
    • Also because we feel pretty comfortable with how stable the library is overall. We certainly have a bunch of people using it in production, and it seems to work

Roadmap Progress

  • Developer support
    • Conor: nothing crazy. RGS livestream last week, so if anyone asks about it you can point them to it and to the blog post
    • TABConf is next week, so there’s dedicated space on builders’ day for people who want to contribute to LDK, and for users of LDK to get in-person support from Spiral devs etc.
    • Matt and Val are speaking / on panels
    • Adding some visuals from Jeff/Arik from btc++
  • Payment protocols
    • Onion messages blog post tomorrow, hopefully
    • Async payments are going to be a focus soon
    • Things are moving on the offers encoding
    • Mostly a review bottleneck now
    • Val<>Jeff to touch base on next steps for offers/payment protocols and dividing up the work
    • OM pathfinding is coming along, but low priority since v1 always connects directly
    • Supporting custom onion messages has a PR open, which tees us up for offers messaging and async payments
  • Language bindings
    • Discussed above
    • May be inspired by BDK’s approach using UniFFI, but it’s unclear how far we want to go down that path, because we don’t get some languages that way
  • Taproot support
    • Arik: not really much new on this, except that HTLC sigs are now also working
      • Halfway into last week I had to switch to working on Swift
    • Thankfully Taproot is now in a pretty good state
    • Momentum with LND is going well too
    • The BOLTs and specs need to move forward; waiting on some responses from Laolu
  • Anchor outputs
    • Wilmer: the PR is up, still waiting on review; I think now that Matt’s back it should get some eyes soon
    • Ariard: almost good here, IMO
  • LSP
    • John Carvalho: as before, we’re making progress on the marketplace API we’ve been working on
    • Think we’ll have a first version probably after the next meeting, though it will be delayed a few weeks due to a conference
    • A bit of headway from LL being there, so trying to resolve whether this is something that would be supported with Pool and such
    • Still need to talk to people about liquidity ads and how that fits in
    • Have meeting notes if anyone wants to dig through them
    • Cdecker tends to be in attendance; zizek attends too but doesn’t say much; it would be nice to have Lisa there
    • Also been working on VASP regulations, talking to lawyers about how this may relate to LSPs; because our LSP is deeply integrated into our upcoming wallet, it’s kind of a minefield
    • Tricky because the wording is very broad
    • Steve: I’m chatting with Block about that
  • WSP (Wallet Storage Provider)
    • Gursharan: synced with devrandom on this
    • I think we are leaning toward having the number-of-items limit at the implementation level, so each backend can have its own limits
    • E.g. one wallet provider wants a transaction limit of 1000 with a Postgres backend; another might want some other KV database with a transaction limit of 100–500
    • I will sync up more with devrandom about that
    • Matt: that’s going to be really awkward … how much of the goal of the project is the ability to swap out storage providers, and how much is defining some common code that an operator of a wallet vendor can use to store data on behalf of their users? If it’s done at the implementation level and not at the standard/API level, then it’s only useful as “wallet vendor runs a service to back up data for its users”; if you do it at the API level, you can say “I can connect to any WSP server, or even run my own, or …”
    • G: right now the initial scope is first-party WSP, as in the wallet provider (or somebody) is providing that storage. Synced with Steve, and the initial goal is first-party support; then maybe we can see what we need to get to third-party support
    • G: right now the limitation is the number of items in a transaction, e.g. if you want to do a transactional write of 1000 items
    • Devrandom: we have a more normalized schema; each payment is a separate row, and when you commit a commitment transaction you update all the payments affected by that transaction in the same atomic DB transaction
    • Since a commitment transaction can theoretically fulfill 483 or whatever HTLCs and create 483 new ones, a maximum of 1000-ish payments can be affected
    • So, because we have a normalized schema, the number of items we touch in a DB transaction is up to 1000-ish
    • LDK is a bit different, because you have larger objects like the ChannelMonitor and ChannelManager, which hold all the payments and then get updated all at once
    • So that’s why we have the requirement for a larger number of items in a DB transaction
    • But you have the requirement to put more into a single row/cell of a database
    • Which may also be a problem on Amazon DynamoDB
    • I think you might want to switch to inserting historical payment hashes one transaction at a time instead of storing them all in one object in one go, because the ChannelMonitor has unbounded storage in general, plus a bunch of details we probably can’t cover right now
    • Discussing whether it’s possible to have an API that can work with DynamoDB in some use cases (maybe not VLS, or maybe VLS with a specific config), and then another config that’s backed by Postgres and doesn’t have any such limitation
    • Matt’s point is also that we want to support a storage service that works for a variety of use cases, so you don’t have to worry about who you connect to
    • Matt: it’s also awkward that it then becomes no longer a standard, just an API; you can’t really swap out the server side, and it becomes tightly coupled with the client side. If that’s how it has to be, it is what it is, but it seems awkward and it would be very nice if we could avoid that
    • Matt: sounds like Postgres is the lowest common denominator; that may end up being the thing
    • Ariard: I think you’ll care about latency if you want to support routing nodes; with a slow DB you’ll be out of the market
    • Matt: if we’re talking about routing nodes, you probably shouldn’t be talking about remote server storage, so I don’t know if that’s in scope here
    • Devrandom: Postgres has a few milliseconds of latency; not sure it’ll impact performance that much. It’s only high latency when doing hundreds of transactions at once, which is far from the normal case
    • Ariard: anyprevout should solve latency; you’d just have to load-balance channel storage between peers or so. So we may not need to over-optimize right now
    • G: there are issues with SQL/Postgres: one is latency, the other is that scaling is not that predictable. Any SQL backend will generally have to scale vertically, while there are KV stores that support horizontal scaling and guarantee the same performance with 100 users as with 1 million. For SQL that’s simply not true; there are strict limits we will hit if we want to go in the direction of third-party storage providers. So I don’t know if SQL can support that scale for a large wallet or a third-party storage provider with millions of users
    • Devrandom: I think it can scale horizontally, because it can shard by client (each LN node)
    • Matt: you can automate that with normal Postgres sharding stuff, per key
    • G: that is an operational burden and a manual thing
    • Matt: but I assume that if you run a wallet for 100 million users, you can probably stand the operational overhead
    • G: I don’t expect a wallet or storage provider to be running a datacenter and managing their own storage, because it is unsafe if you lose the datacenter (funds loss). So we want cross-datacenter redundancy for the application out of the box
  • LDK Lite

Dependent Projects

  • VLS (https://gitlab.com/lightning-signer/validating-lightning-signer)

    • Kensedgwick: working on the STM32 demo, invoice approval layout. Really small screen; trying to put pertinent info about an invoice on it so users can approve/decline
    • Found and fixed a controller reset bug
    • Experimenting with an activity display on the STM32: what can we do with a small amount of screen area to show what’s going on in a node, to help users debug and see what’s happening?
    • Devrandom: moving along with the Postgres backend; should have something working end-to-end in a few days
  • Sensei (https://github.com/L2-Technology/sensei)

  • Synonym (https://github.com/synonymdev/ldk-node-js)

    • Cinnamon lol
    • J and Cory are gearing up for a mainnet test on Friday
    • We do app testing on Fridays
    • Been testing on regtest for a while; going to try testing on mainnet now
    • App launch in October
    • No progress on combining Synonym’s rnldk with BlueWallet’s rnldk; probably not happening

Spec

  • 2022/09/26 (https://github.com/lightning/bolts/issues/1028)
    • Viktor has a PR related to this, dropping support for the legacy onion payload format
    • Been updating the test vectors. Made a PR to drop the legacy enums; haven’t had time to actually code the removal of everything regarding the legacy onion, but perhaps it can be merged without that part, or in the same PR
    • When constructing an onion packet with our utils, it doesn’t match the test vectors even though our inputs are the same
    • Jeff: please add me as a reviewer; we can discuss offline

Misc

  • Review begs?