LTV Service Design Proposal

The initial service proposal written by Luka

This page describes a proposed implementation of the LTV service based on the requirements outlined in Loan To Value (LTV) Monitor System. It primarily tries to solve the following problems:

  • Scalability - Since the classification of loans (and the execution of required tasks during changes in classification) is a time-sensitive task, this design must ensure that the number of loans in the database has minimal impact on the system's ability to process them.

  • Modularity - The requirements currently outline four "buckets" of loans, but they require the system to be easily extensible to an arbitrary number of bucket types.

System Overview

The main components of this system are a PostgreSQL database and a pool of workers that use this database both as a task queue and as a store of information about loans.

Database

Loans Table

This table has two purposes. The first is to store information that is relevant to the LTV service's bucket classification functionality: collateral values, LTV, borrower details, and so on. The second is to store information about the processing status of the loan. By having the 'last_processed' and 'worker_node_id' columns, the service can use Postgres's atomicity to implement a lease system that ensures the workers can process loans in parallel safely.

  • ID - an identifier for a loan. This could be the token ID of a loan, or an internal ID (if one is needed for easier indexing)

  • Bucket ID - a foreign key to an entry in the 'Buckets' table. This column shows which bucket this loan was put into the last time it was processed

  • last_processed - a timestamp showing when this loan was last processed. This column, along with the bucket ID, is used by the workers to determine whether the loan needs to be processed now.

  • worker_node_id - a nullable column showing which node is currently processing the loan. The nullability is important, because a null value tells all nodes that this loan is currently available to be 'leased' and processed.

  • Loan Data - this is not a single column, but represents a group of columns that could be used to store any information that is relevant to this loan.

Buckets Table

This table stores the list of 'bucket types'. It is not meant to be updated by the application itself, but is rather a configuration table that could allow operators to add buckets or change the parameters of existing buckets on the fly. (This is not recommended in this proposal, but it is a possibility.)

  • ID - an identifier for the bucket

  • Name - a human-readable name for the bucket. Not essential, but helpful for operators

  • processing_period - an interval Postgres type signifying how often loans in this bucket must be re-processed

  • ltv_min - a lower bound for the LTV that a loan must have to be in this bucket

  • ltv_max - an upper bound for the LTV that a loan must have to be in this bucket

The ltv_min and ltv_max columns can be unified into a numrange Postgres type if needed.
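The two tables described above could look as follows. This is only a sketch: the concrete types (bigint IDs, text worker IDs) and the example loan-data columns are assumptions, not part of the proposal.

```sql
CREATE TABLE buckets (
    id                bigint PRIMARY KEY,
    name              text NOT NULL,
    processing_period interval NOT NULL,  -- how often loans in this bucket are re-processed
    ltv_min           numeric NOT NULL,
    ltv_max           numeric NOT NULL
    -- alternatively, a single range column: ltv_range numrange NOT NULL
);

CREATE TABLE loans (
    id             bigint PRIMARY KEY,             -- token ID or internal ID
    bucket_id      bigint NOT NULL REFERENCES buckets (id),
    last_processed timestamptz NOT NULL,
    worker_node_id text,                           -- NULL means 'available to lease'
    -- "Loan Data" columns, for example:
    collateral_amount numeric,
    loan_amount       numeric
);
```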

Worker Application

The worker application runs in an infinite loop in which it tries to acquire a lease on a loan and, if it succeeds, executes pre-defined processing logic on that loan. The pre-defined logic includes calculating the LTV, matching that LTV to a bucket, and executing the logic for the bucket-to-bucket transition if applicable.

Leasing Logic

The leasing logic relies on the fact that Postgres has atomic transactions. By finding 'processable' loans and leasing them within the same query, workers can ensure that no two workers will process the same loan at the same time. The structure of the query is as follows:

UPDATE
    loans l
SET
    worker_node_id = [WORKERS ID]
WHERE
    l.worker_node_id IS NULL
    AND l.last_processed < (
        now() - (
            SELECT
                b.processing_period
            FROM
                buckets b
            WHERE
                b.id = l.bucket_id
        )
    )
RETURNING
    *

Some consideration must be given to transaction isolation levels in Postgres. Note that Postgres does not permit dirty reads at any isolation level, and because the lease is taken in a single atomic UPDATE, the default READ COMMITTED level is already sufficient to prevent two workers from leasing the same loan: a concurrent UPDATE waits on the row lock and re-evaluates the WHERE clause before proceeding. A stricter level such as SERIALIZABLE would only become relevant if the leasing logic grew to span multiple statements.

By relying on this logic, the loan processing logic of the workers can be executed with the guarantee that there is no other node executing the logic on the same loan.
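On the worker side, lease acquisition could look like the sketch below, written against the generic Python DB-API (e.g. psycopg2 for Postgres). The function name `try_lease` and the `%s` parameter style are assumptions for illustration.

```python
# SQL from the leasing query above; %s is the worker's ID.
LEASE_SQL = """
UPDATE loans l
SET worker_node_id = %s
WHERE l.worker_node_id IS NULL
  AND l.last_processed < now() - (
        SELECT b.processing_period FROM buckets b WHERE b.id = l.bucket_id
      )
RETURNING l.id
"""

def try_lease(conn, worker_id):
    """Lease every currently processable loan for this worker in one
    atomic UPDATE and return the list of leased loan IDs."""
    cur = conn.cursor()
    try:
        cur.execute(LEASE_SQL, (worker_id,))
        leased = [row[0] for row in cur.fetchall()]
        conn.commit()  # the single statement is atomic either way
        return leased
    finally:
        cur.close()
```

Because the lease is a single statement, a crash between `execute` and `commit` simply rolls the lease back, leaving the loans available for other workers.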

Loan Processing

After a loan is leased, processing can begin. Once it is processed, the worker issues a query to 'release' the lease and update any loan-related data if applicable. The actual processing can be split into two steps.
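The 'release' step could itself be a single UPDATE; the exact columns written depend on the processing result. A sketch (the $n placeholders are illustrative):

```sql
-- Release the lease after processing: clear the worker ID, record the
-- processing time, and move the loan to its (possibly new) bucket.
UPDATE loans
SET worker_node_id = NULL,
    last_processed = now(),
    bucket_id      = $1     -- new bucket from classification
WHERE id = $2
  AND worker_node_id = $3;  -- only release our own lease
```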

Loan Classification

The first step of the processing is checking if the loan has breached the threshold for the bucket it is currently in. To do this, the worker has to calculate the loan's LTV.

This calculation involves querying several different parts of the protocol/backend. Mainly, the worker has to get the collateral and loan amounts from the smart contract. Afterwards, it has to query the price oracle to convert these amounts to some common asset. Once these values are available, the LTV can be calculated, and based on the parameters of the loan's current bucket it is determined whether the loan needs to be put into another bucket.
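The classification step could be sketched as two small pure functions; the function names, the oracle-price parameters, and the shape of the bucket rows are assumptions for illustration.

```python
from decimal import Decimal

def calculate_ltv(loan_amount, collateral_amount, loan_price, collateral_price):
    """LTV = value of the loan / value of the collateral, with both
    amounts converted to a common asset via oracle prices."""
    loan_value = Decimal(loan_amount) * Decimal(loan_price)
    collateral_value = Decimal(collateral_amount) * Decimal(collateral_price)
    return loan_value / collateral_value

def classify(ltv, buckets):
    """Return the ID of the bucket whose [ltv_min, ltv_max) range
    contains ltv; buckets is a list of (id, ltv_min, ltv_max) rows."""
    for bucket_id, ltv_min, ltv_max in buckets:
        if ltv_min <= ltv < ltv_max:
            return bucket_id
    return None  # no configured bucket matches
```

Keeping these functions pure makes the threshold logic trivially unit-testable, independent of the smart contract and oracle calls.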

Action Execution

The idea of having loans move through different buckets can be thought of as a sort of state machine. The loans have a state, movement between these states gets triggered based on a set of rules, and a predefined set of actions has to happen as part of each type of state transition.

Because of this, we can write easily readable code by having a map of state-to-state transitions (keyed by something like a tuple) and a function that is triggered for each transition. These functions can all share a unified interface (basically just the Loan object and maybe some metadata), and the dispatch of state-to-state transitions can remain separate from the business logic of each transition.
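The transition map could be sketched as below. The bucket names and handler bodies are illustrative assumptions; only the shape (tuple-keyed map, unified handler signature, separate dispatch) is what the proposal describes.

```python
# Illustrative handlers sharing a unified interface: (loan, metadata).
def notify_borrower(loan, metadata=None):
    return f"notified borrower of loan {loan['id']}"

def trigger_liquidation(loan, metadata=None):
    return f"liquidation started for loan {loan['id']}"

# Map of (from_bucket, to_bucket) tuples to transition handlers.
TRANSITIONS = {
    ("healthy", "at_risk"): notify_borrower,
    ("at_risk", "liquidatable"): trigger_liquidation,
}

def handle_transition(loan, from_bucket, to_bucket, metadata=None):
    """Dispatch the action for a bucket-to-bucket transition, if any.
    The dispatch stays separate from each transition's business logic."""
    handler = TRANSITIONS.get((from_bucket, to_bucket))
    if handler is None:
        return None  # no action required for this transition
    return handler(loan, metadata)
```

Adding a new bucket type then only requires new entries in this map, which keeps the system extensible to an arbitrary number of buckets.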

As for the types of actions that these functions will execute
