Dopple DEX: Solana and CosmWasm
A Modular, Minimal Multi-Program DEX
Part 1: CosmWasm vs. Solana for Rust Smart Contract Engineers
> Part 2: Dopple DEX in Solana and CosmWasm <
Part 3: Dopple DEX Continued: Finishing the Implementation
Part 4: Testing Dopple DEX with LiteSVM
To cement the differences and similarities we went over in the last article, let's design and build a decentralized exchange (DEX) that swaps between two fungible tokens, and implement it on both CosmWasm and Solana – using solana_program, pinocchio, and Anchor (I haven't yet added the pinocchio and Anchor versions).
In this chapter, I'll provide the full code, and then we'll walk through the overall architecture and the implementation of one important function (CreatePool/InitializePool).
The rest of the implementation will be covered in the next chapter, and tests in the one after that – but the full implementation and full integration tests are already included in the repository.
Table of Contents
- Accessing the Code
- The Dopple DEX Design
- CosmWasm and Solana DEX: Differences Summarized
a. CosmWasm Contracts/Solana Programs
b. CosmWasm Contract State
c. CosmWasm Messages
d. Solana Instructions
e. Solana Account Info
- CreatePool Implementation on CosmWasm
- InitializePool Implementation on Solana
Accessing the Code
You can access a repo with both versions and full simulated-network integration tests here:
GitHub - rustopian/dopple-dex: Modular AMM DEX implemented in both CosmWasm and Solana.
If you want to verify everything is working, run cd CosmWasm && cargo test
for CosmWasm, then cd ../Solana && cargo build-sbf && cargo test
for Solana. If you encounter any environment errors, make sure you have Rust installed and the necessary targets – see the README files for more details.
Success looks like the output below. This is Solana, but the CosmWasm test looks very similar:
Running unittests src/lib.rs (target/debug/deps/constant_product_plugin-c22617bee29c6be5)
running 9 tests
test processor_tests::tests::test_compute_add_liquidity_existing_pool ... ok
test processor_tests::tests::test_compute_add_liquidity_first_deposit ... ok
test processor_tests::tests::test_compute_add_liquidity_zero_deposit ... ok
test processor_tests::tests::test_compute_remove_liquidity ... ok
test processor_tests::tests::test_compute_add_liquidity_large_numbers ... ok
test processor_tests::tests::test_compute_remove_liquidity_burn_all ... ok
test processor_tests::tests::test_compute_remove_liquidity_burn_zero ... ok
test processor_tests::tests::test_compute_swap ... ok
test processor_tests::tests::test_compute_swap_zero_input ... ok
test result: ok. 9 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Running unittests src/lib.rs (target/debug/deps/dex_pool_program-014eda6f32d07c3e)
running 5 tests
test test_id ... ok
test processor_tests::tests::test_process_swap ... ok
test processor_tests::tests::test_process_initialize_pool ... ok
test processor_tests::tests::test_process_remove_liquidity ... ok
test processor_tests::tests::test_process_add_liquidity ... ok
test result: ok. 5 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Running tests/integration.rs (target/debug/deps/integration-b41c5b847e89d802)
running 10 tests
test test_initialize_pool_already_exists ... ok
test test_initialize_pool_litesvm ... ok
test test_add_liquidity_zero_a ... ok
test test_add_liquidity_simple ... ok
test test_remove_liquidity_partial ... ok
test test_remove_liquidity_zero ... ok
test test_remove_liquidity_simple ... ok
test test_swap_b_to_a ... ok
test test_add_liquidity_refund ... ok
test test_swap_a_to_b ... ok
test test_wsol_pool_full_cycle ... ok
test result: ok. 10 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.32s
The Dopple DEX Design
We'll keep things minimal but useful:
- Users can create liquidity pools with custom logic. We'll add a constant-product (x * y = k) market maker, like Uniswap v2 and most early AMM DEXes. However, the system will be easily extensible to include other liquidity pool types: meaning that a dedicated, independent plugin contract/program will handle the pool logic.
- Users can deposit liquidity into pools.
- Users can swap from one token to another. Fees taken on swaps go back to the liquidity pool, as usual, growing its value over time (all else being equal).
- Users can withdraw liquidity from pools.
This gives us our 4 main actions right away:
- CreatePool
- Deposit
- Swap
- Withdraw
What We'll Skip
To avoid unnecessary detail, we'll ignore these:
- Governance or authority over pool creation. Anyone can create pools, but only one pool can exist for any given asset pair with any given logic plugin.
- Locked liquidity, staked liquidity, etc. All liquidity is subject to withdrawal at any time.
- Single-token withdrawal. When LP tokens are burned, the user receives tokens from both sides, in proportion to the pool's current reserves.
- For CosmWasm, we'll ignore CW20 tokens — most assets of value are Token Factory or IBC assets, which act like native assets. Our Solana DEX, however, will definitely need to support SPL tokens in addition to SOL. We'll still demonstrate plenty of multi-contract interaction on the CosmWasm side, since LP tokens will be their own CW20 contracts.
I do want to demonstrate how to do some things analogously, and that will inform our design choices. For example, it would be more compute-efficient, though less modular, to build the Solana DEX in a monolithic program. But, I would like to demonstrate using "component programs" and doing multi-program tests — so we'll build the part of our DEX that handles AMM math as a separate plugin program which will be invoked using cross-program invocations (CPIs).
Challenges
Both sides present us with a few challenges to keep our DEX usable, modular, and safe:
- Neither platform can work with floating-point numbers, yet AMM DEXes must do division. We want users to get the best deal they can with whole numbers, but we must make sure rounding never drains the liquidity pool, even slightly, when withdrawals are made.
- Pools may in the future be created which follow logic other than a constant product (x*y=k) curve. While we don't need to add any logic for other types of pools, we should keep our pool logic as an independent component so that expanding to new pool types is quick and simple.
- As usual in any blockchain program system, we'll need to appropriately validate accounts and inputs.
The constant-product algorithm allows users to swap by giving some amount of Token A and receiving some amount of Token B – or vice versa. The formula (ignoring fees) follows from the x * y = k invariant: ΔB = B_reserve * ΔA / (A_reserve + ΔA). We will include a small fee, configurable by the contract owner, and set it to a typical 0.3%.
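To make the integer-math point concrete, here's a minimal sketch (not the repo's exact code) of the swap output calculation with the fee folded in. Rounding down on the final division is what keeps the pool from being drained:

// Minimal sketch of constant-product output math with a fee, using only
// integer arithmetic. u128 intermediates avoid overflow; the final integer
// division rounds down, which always favors the pool.
fn constant_product_out(
    reserve_in: u64,
    reserve_out: u64,
    amount_in: u64,
    fee_numerator: u64,   // e.g. 3
    fee_denominator: u64, // e.g. 1000 => 0.3% fee
) -> u64 {
    let amount_in_after_fee =
        amount_in as u128 * (fee_denominator - fee_numerator) as u128;
    let numerator = amount_in_after_fee * reserve_out as u128;
    let denominator =
        reserve_in as u128 * fee_denominator as u128 + amount_in_after_fee;
    (numerator / denominator) as u64
}

With the fee set to zero, this reduces exactly to ΔB = B_reserve * ΔA / (A_reserve + ΔA).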
CosmWasm and Solana DEX Differences Summarized
On CosmWasm, each liquidity pool is a contract that holds balances of A and B – all of the various tokens are actually held in the pool contract's balances, without requiring separate token account addresses. A liquidity pool contract also must have access to records of how much of the pool any given user has. The easiest way to do this, to avoid mathematical problems, is to issue new LP tokens when users deposit liquidity and to burn LP tokens when liquidity is withdrawn.
On Solana, the pool is a program that controls two token accounts: a vault for TokenA, and a vault for TokenB. However, the core principles are exactly the same.
Smart Contracts/Programs
CosmWasm Smart Contracts
- LP Token Contract: This is a cw20-base contract that is deployed on demand by the DEX. We won't actually write this, just like we won't write Solana's Token Program from scratch, but our tests will instantiate it.
- LP Logic Contract (pool-constant-product): the per-pool contract that the factory instantiates. It holds the pool's reserves, mints and burns LP tokens, and implements the constant-product logic.
- DEX Contract (dex-factory): This contract is a keeper of liquidity pools: a factory (which instantiates new pool contracts) and a registry of existing pools. Multi-hop routes are not in scope for us here: in order to Swap from token A to token B, a liquidity pool must exist directly between the two. The client application is free to form a multi-message transaction in order to do multi-hops without needing direct support from the contract.
Solana Programs
- Constant Product Plugin: This isn't an exact parallel – it's a math-only program. This will show better isolation than our CosmWasm pattern, at the expense of a less independent testing interface: we'll have the instructions ComputeAddLiquidity, ComputeRemoveLiquidity, and ComputeSwap. Later pools can substitute a different plugin program ID to easily select from or create any desired pool type.
- DEX Program: Unlike our CosmWasm DEX contract, our Solana DEX program will not keep a registry of pools in state; the various accounts a pool requires are program-derived addresses (PDAs), and they can be passed in by the client and verified. Multi-hop routes are still not in scope, though, similarly to CosmWasm, the client application is free to form a multi-instruction transaction in order to perform a multi-hop swap.
CosmWasm Contract State
Our state requirements are minimal – as they should be with any on-chain system. Always store the minimal amount of information required for your application's immutability, auditability, reliability, or censorship-resistance needs.
Token balances are stored by the bank
module – even if we were allowing CW20 tokens, the balances for them are handled by their own contracts. We can simply query balances when needed, and we never need to store them.
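For instance, reading the contract's own balance of a native denom is a one-liner against the bank module. A sketch (the repo's pool contract also tracks reserves in its own storage items, as we'll see later):

use cosmwasm_std::{Coin, Deps, Env, StdResult, Uint128};

// Sketch: read this contract's current balance of `denom` straight from the
// bank module instead of tracking it in contract state.
fn current_reserve(deps: Deps, env: &Env, denom: &str) -> StdResult<Uint128> {
    let balance: Coin = deps
        .querier
        .query_balance(env.contract.address.clone(), denom)?;
    Ok(balance.amount)
}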
Storing each user's share of the liquidity pool is a different matter. However, to avoid reinventing the wheel and breaking the pattern observed by DEXes and wallets the world over, we should handle this with an LP token, described just below.
The contract will hold some information about each pool in an array of nice PoolInfo
structs. Unlike on Solana, we won't be deriving pool addresses from pool information, so we must actively store this:
// state.rs
#[cw_serde]
pub struct PoolInfo {
pub denom_a: String,
pub denom_b: String,
pub lp_token_addr: Addr,
pub lp_token_code_id: u64,
}
While the code_id used by a contract can be looked up on demand from env, storing this value aids indexing, and it lets us allow multiple pools per asset pair while enforcing exactly one pool per asset pair and logic contract.
Note: The #[cw_serde] macro ensures we derive the necessary items like Debug, PartialEq, Serialize, and Deserialize, as well as a conversion to and from snake and camel case, so that PoolInfo can be sent in as pool_info by JavaScript and CLI clients.
In the wild, each pool could have different fee settings, but here we'll keep fee
configuration global: one percentage fee for all pools.
This gives us a DEX contract configuration struct
as follows:
// state.rs
#[cw_serde]
pub struct Config {
pub default_lp_token_code_id: u64,
pub fee_numerator: u64,
pub fee_denominator: u64,
}
Floating-point math isn't allowed in deterministic virtual machines; one workaround is to express the fee as a numerator over a denominator. For example, fee_numerator = 3 with fee_denominator = 1000 represents a 0.3% fee.
When creating a new pool, our DEX contract will be instantiating a new LP contract, so it must know the code_id
of the stored on-chain code that new LP tokens are instantiated from. The user can pass in a custom code_id
– in the future, we might want a governance or admin-approved list of code IDs. If no custom ID is provided, the contract will use default_lp_token_code_id
.
Finally, our DEX contract needs a PendingPoolInfo state struct. This is only meant to persist while the contract waits for a reply from a newly instantiated LP token contract, so that the final PoolInfo can be correctly stored with the new token contract address. (While addresses can be deterministic on CosmWasm via Instantiate2, this is an excellent opportunity to demonstrate the use of reply.)
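For reference, here's roughly how these storage handles might be declared with cw-storage-plus. The names match the ones used in the handler code later in this article, but the exact definitions in the repo may differ (for instance, whether the pending entry is a dedicated struct or a plain key tuple):

// state.rs (sketch)
use cosmwasm_std::Addr;
use cw_storage_plus::{Item, Map};

pub const CONFIG: Item<Config> = Item::new("config");

// (denom_a, denom_b, pool_logic_code_id) -> pool contract address
pub const POOLS: Map<(String, String, u64), Addr> = Map::new("pools");

// Pool key held only while we wait for the instantiate reply.
pub const PENDING_POOL_INSTANCE: Item<(String, String, u64)> = Item::new("pending_pool_instance");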
CosmWasm Messages
Dex Factory Contract Messages
Our core ExecuteMsg
members for our DEX factory contract are:
/// Execute messages for the Factory contract.
#[cw_serde]
pub enum ExecuteMsg {
/// Create a new liquidity pool instance using a specific pool logic contract.
CreatePool {
denom_a: String,
denom_b: String,
pool_logic_code_id: u64,
},
/// Allows admin to register a new pool logic contract code ID.
RegisterPoolType { pool_logic_code_id: u64 },
/// Update admin.
UpdateAdmin { new_admin: Option<String> },
/// Update default LP token code ID.
UpdateDefaultLpCodeId { new_code_id: u64 },
}
This contract is only used to CreatePool
or to look up pool addresses; all of our other core functions live in the instantiated pool contract.
The created pool contract address can be looked up using a query:
/// Factory Query Messages
#[cw_serde]
#[derive(QueryResponses)]
pub enum QueryMsg {
/// Get the address of a specific pool instance.
#[returns(Addr)]
PoolAddress {
denom_a: String,
denom_b: String,
pool_logic_code_id: u64,
},
/// Get the factory configuration.
#[returns(Config)]
Config {},
}
Recent CosmWasm idiom derives QueryResponses; queries return Binary by default, but we can easily specify the expected response type with attributes. Addr already exists as a CosmWasm type and is perfectly suited as the PoolAddress response; our QueryMsg::Config is just a convenience method that returns the Config stored in state without any calculation beforehand.
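As a quick illustration of the typed query, here's how another contract (or a test helper) might resolve a pool address from the factory – a hedged sketch, with an illustrative import path:

use cosmwasm_std::{Addr, Deps, StdResult};

use crate::msg::QueryMsg; // the factory QueryMsg shown above; path is illustrative

// Look up a pool address from the factory via a smart query.
fn lookup_pool(
    deps: Deps,
    factory_addr: &Addr,
    denom_a: String,
    denom_b: String,
    pool_logic_code_id: u64,
) -> StdResult<Addr> {
    deps.querier.query_wasm_smart(
        factory_addr.to_string(),
        &QueryMsg::PoolAddress {
            denom_a,
            denom_b,
            pool_logic_code_id,
        },
    )
}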
We also have CosmWasm messages related to contract instantiation and migration (updating), plus the message format that the factory uses to create a new pool contract:
use cosmwasm_schema::{cw_serde, QueryResponses};
use cosmwasm_std::Addr;
use crate::state::Config;
/// Instantiate message for the Factory contract.
#[cw_serde]
pub struct InstantiateMsg {
pub default_pool_logic_code_id: u64,
pub admin: String,
}
#[cw_serde]
pub struct MigrateMsg {}
/// Message sent by the factory to instantiate a new pool logic contract.
#[cw_serde]
pub struct PoolContractInstantiateMsg {
pub denom_a: String,
pub denom_b: String,
pub lp_token_code_id: u64,
pub factory_addr: Addr,
}
Note: Queries in CosmWasm can perform rather extensive processing to formulate a response. They’re not restricted to just querying raw state. For example, a `SimulateSwap` query can provide a useful utility to client applications, including other smart contracts.
Pool Contract Messages
The pool contract itself handles our other core actions in its ExecuteMsg
variants:
#[cw_serde]
pub enum ExecuteMsg {
AddLiquidity {}, // Amounts derived from funds
Swap {
offer_denom: String, // Must match sent funds
min_receive: Uint128,
},
Receive(Cw20ReceiveMsg),
}
CosmWasm messages can have funds
attached; this is how tokens are provided to AddLiquidity
and Swap
. We don't need to specify all the assets involved, since each pool contract instance handles only one pair.
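For example, a client contract might build a Swap call like this (a sketch – the denom and amounts are placeholders, and a real client would set min_receive for slippage protection):

use cosmwasm_std::{coins, to_json_binary, CosmosMsg, StdResult, Uint128, WasmMsg};

use crate::msg::ExecuteMsg; // the pool ExecuteMsg shown above; path is illustrative

// Build a Swap message: the offered tokens ride along in `funds`.
fn make_swap_msg(pool_addr: String, offer_amount: u128) -> StdResult<CosmosMsg> {
    Ok(CosmosMsg::Wasm(WasmMsg::Execute {
        contract_addr: pool_addr,
        msg: to_json_binary(&ExecuteMsg::Swap {
            offer_denom: "uatom".to_string(), // must match the denom sent in `funds`
            min_receive: Uint128::zero(),     // placeholder; set this in real usage
        })?,
        funds: coins(offer_amount, "uatom"),
    }))
}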
Adding a Cw20HookMsg
here allows us to specify what action we'll be performing when CW20 tokens (our LP tokens) are received by the contract directly:
// Hook message for receiving LP tokens
#[cw_serde]
pub enum Cw20HookMsg {
WithdrawLiquidity {},
}
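Withdrawing therefore goes through the LP token contract rather than the pool directly: the user calls the CW20 Send message on the LP token, embedding our hook. A sketch (import paths illustrative):

use cosmwasm_std::{to_json_binary, CosmosMsg, StdResult, Uint128, WasmMsg};
use cw20::Cw20ExecuteMsg;

use crate::msg::Cw20HookMsg;

// Send LP tokens to the pool with the WithdrawLiquidity hook attached.
fn make_withdraw_msg(
    lp_token_addr: String,
    pool_addr: String,
    lp_amount: Uint128,
) -> StdResult<CosmosMsg> {
    Ok(CosmosMsg::Wasm(WasmMsg::Execute {
        contract_addr: lp_token_addr,
        msg: to_json_binary(&Cw20ExecuteMsg::Send {
            contract: pool_addr,
            amount: lp_amount,
            msg: to_json_binary(&Cw20HookMsg::WithdrawLiquidity {})?,
        })?,
        funds: vec![],
    }))
}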
Our QueryMsg
can return current information about that pool, easing the work that clients need to get this info. It would be nice to add SimulateSwap
, as well, but we'll put that into the backlog.
#[cw_serde]
#[derive(QueryResponses)]
pub enum QueryMsg {
#[returns(PoolStateResponse)]
PoolState {},
// Would be useful to add SimulateSwap query later
// #[returns(SimulateSwapResponse)],
// SimulateSwap { offer_amount: Uint128, offer_denom: String },
}
#[cw_serde]
pub struct PoolStateResponse {
pub denom_a: String,
pub denom_b: String,
pub reserve_a: Uint128,
pub reserve_b: Uint128,
pub total_lp_shares: Uint128,
pub lp_token_address: Addr,
}
Our InstantiateMsg
is the same format known to the factory contract as PoolContractInstantiateMsg
:
/// Message sent by the factory to instantiate this pool logic contract.
#[cw_serde]
pub struct InstantiateMsg {
pub denom_a: String,
pub denom_b: String,
pub lp_token_code_id: u64, // Code ID for the LP token this pool should use
pub factory_addr: String, // Address of the factory contract
// Potentially add fee info if pool controls fees
}
This message is what the DEX factory contract will use to create pool contracts on demand.
We could add a MigrateMsg
, but leaving it out will prevent anything from ever updating our pool code.
Code Is Law maxis, rejoice! Your liquidity is safe. And if there’s a critical bug, it’s safe forever.
Solana Instructions
Note: The Solana code shown supports only SPL tokens, not native SOL. You can use wrapped SOL; some of the integration tests demonstrate this in action.
The instructions for our Solana DEX program are quite similar to our CosmWasm contract's ExecuteMsg
messages:
// instruction.rs
use borsh::{
BorshSerialize,
BorshDeserialize
};
#[derive(BorshSerialize, BorshDeserialize, Debug)]
pub enum PoolInstruction {
/// InitializePool
/// Accounts (expected):
/// 0. [signer] payer
/// 1. [writable] pool state PDA (derived from sorted mints + plugin addresses)
/// 2. [writable] vault A
/// 3. [writable] vault B
/// 4. [writable] LP mint
/// 5. [read] token mint A
/// 6. [read] token mint B
/// 7. [read] plugin program (executable)
/// 8. [writable] plugin state
/// 9. [read] system_program
/// 10. [read] token_program
/// 11. [read] rent sysvar
InitializePool,
/// AddLiquidity { amount_a, amount_b }
/// Accounts:
/// 0. [signer] user
/// 1. [writable] pool state
/// 2. [writable] vault A
/// 3. [writable] vault B
/// 4. [writable] LP mint
/// 5. [writable] user token A
/// 6. [writable] user token B
/// 7. [writable] user LP
/// 8. [read] token_program
/// 9. [read] plugin program
/// 10.[writable] plugin state
AddLiquidity {
amount_a: u64,
amount_b: u64,
},
/// RemoveLiquidity { amount_lp }
/// Accounts:
/// 0. [signer] user
/// 1. [writable] pool state
/// 2. [writable] vault A
/// 3. [writable] vault B
/// 4. [writable] LP mint
/// 5. [writable] user token A
/// 6. [writable] user token B
/// 7. [writable] user LP
/// 8. [read] token_program
/// 9. [read] plugin program
/// 10.[writable] plugin state
RemoveLiquidity {
amount_lp: u64,
},
/// Swap { amount_in, min_out }
/// Accounts:
/// 0. [signer] user
/// 1. [writable] pool state
/// 2. [writable] vault A
/// 3. [writable] vault B
/// 4. [writable] user src token
/// 5. [writable] user dst token
/// 6. [read] token_program
/// 7. [read] plugin program
/// 8. [writable] plugin state
Swap {
amount_in: u64,
min_out: u64,
},
}
Notice the comment convention seen in Solana programs: the AccountInfo array that's passed in to instruction handlers isn't always easy to keep track of, so for convenience, we list out the accounts. These comments are not validated in any way. They also don't necessarily represent the incoming accounts array packed in the transaction – rather, they represent what the internal processors can expect.
Unlike CosmWasm messages, here we don't include every piece of information as a parameter in our main struct. For example, the source token (denom_a
) and output token (denom_b
) are not included in the instruction parameters.
After all, any accounts that will be read from or written to must be included in the accounts array, so any code that needs them can read them from that array.
Solana AccountInfo
Here's some more information on each account item (a client-side sketch that packs a set of these into an instruction follows the list):
- user (EOA, signer). The user's account Pubkey. This is the main address the user sees in their wallet, and the public key from the keypair they sign the transaction with. If you're coming from another blockchain platform, you may expect this to be writable, but it is not: balances and other stored information are held in specific accounts, not here.
  Note: user is our only signer, even though the program will sometimes "sign" for us. This is common, although there are certainly cases where more signer accounts may be included – such as when a different account is the payer for transaction fees.
- vault A and vault B (ATAs). Unlike in our CosmWasm example, our programs cannot hold balances directly "in their addresses," so vault A and vault B accounts exist for this purpose. They're controlled by the program, and can be "signed for" during transactions by the program, allowing the program to send tokens from these accounts. Since all of our instructions involve depositing to or withdrawing from both vaults, thus changing their balances, these must always be writable.
- pool state PDA. Similarly, Solana programs don't have access to "local state." However, they can create accounts to store information. (This is all vault A and vault B really are, too – places where token balance information is stored.) Again, this must be writable for all of our instructions.
- LP mint. As on CosmWasm, our LP shares are represented by tokens using a standard Token program. This is the mint account – what in CosmWasm would be called the "token contract" – although unlike CosmWasm, balances are not stored here but in the ATA accounts that follow. This LP mint is controlled by the DEX program (the mint_authority) so that shares can be issued to depositing users. It too must be writable.
- user token A and user token B (ATAs) – called user src token and user dst token for clarity in the Swap instruction comment. These must be writable for most instructions, since the user's balances change whether they are providing liquidity (from their account balances), withdrawing liquidity (to their account balances), or swapping (between their account balances). For InitializePool these do not need to be writable – in fact, they do not need to exist at all, since the user is neither sending nor receiving either token. We could implement some "starting liquidity" in InitializePool, in which case these user token accounts would need to be both present and writable.
- user LP: the ATA (token account) holding the user's balance of the LP token.
- token program: always required, since all of the tokens we are working with – LP and source/destination tokens – are owned by the Token Program. We will call this program's transfer instruction in many places.
- plugin program: also always required, since this is where our LP logic is implemented. In our examples, it will always be our Constant Product Plugin, but it could be some other program in order to access a pool with different logic.
- plugin state: this contains pool state information.
- system program: required whenever we make a System Program call, such as creating an account or transferring lamports (SOL).
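As promised, here's a hedged sketch of the client side: packing the account list for Swap in the order the processor expects, much as the integration tests do. Everything here – the program ID, the already-derived PDAs and ATAs, and the import path – is assumed to have been prepared by the caller:

use borsh::BorshSerialize;
use solana_program::instruction::{AccountMeta, Instruction};
use solana_program::pubkey::Pubkey;

use dex_pool_program::instruction::PoolInstruction; // path is illustrative

// Build a Swap instruction with accounts in the order the processor expects.
#[allow(clippy::too_many_arguments)]
fn build_swap_ix(
    program_id: Pubkey,
    user: Pubkey,
    pool_state: Pubkey,
    vault_a: Pubkey,
    vault_b: Pubkey,
    user_src: Pubkey,
    user_dst: Pubkey,
    plugin_program: Pubkey,
    plugin_state: Pubkey,
    amount_in: u64,
    min_out: u64,
) -> Instruction {
    let data = PoolInstruction::Swap { amount_in, min_out }
        .try_to_vec()
        .expect("borsh serialization");
    Instruction {
        program_id,
        accounts: vec![
            AccountMeta::new_readonly(user, true), // 0. user (signer)
            AccountMeta::new(pool_state, false),   // 1. pool state
            AccountMeta::new(vault_a, false),      // 2. vault A
            AccountMeta::new(vault_b, false),      // 3. vault B
            AccountMeta::new(user_src, false),     // 4. user src token
            AccountMeta::new(user_dst, false),     // 5. user dst token
            AccountMeta::new_readonly(spl_token::id(), false), // 6. token program
            AccountMeta::new_readonly(plugin_program, false),  // 7. plugin program
            AccountMeta::new(plugin_state, false), // 8. plugin state
        ],
        data,
    }
}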
Solana Pool Plugin Messages
Our second Solana program, constant_product_plugin
, is architected differently from the CosmWasm LP plugin. It exists only to provide math functions, so that the main DEX program can call it and receive answers about how to proceed:
#[derive(BorshSerialize, BorshDeserialize, Debug)]
pub enum PluginInstruction {
ComputeAddLiquidity {
reserve_a: u64,
reserve_b: u64,
deposit_a: u64,
deposit_b: u64,
total_lp_supply: u64,
},
ComputeRemoveLiquidity {
reserve_a: u64,
reserve_b: u64,
total_lp_supply: u64,
lp_amount_burning: u64,
},
ComputeSwap {
reserve_in: u64,
reserve_out: u64,
amount_in: u64,
},
}
Unlike in CosmWasm, these instructions won't reply to our main program. Instead, we store the computed results for access later:
#[derive(BorshDeserialize, BorshSerialize, Debug, Default)]
pub struct PluginCalcResult {
pub actual_a: u64,
pub actual_b: u64,
pub shares_to_mint: u64,
pub withdraw_a: u64,
pub withdraw_b: u64,
pub amount_out: u64,
}
We could devise a more efficient storage system that doesn't lock up so much space per pool, but it would be more complex and provide only marginal savings, so let's add that to the backlog. This format will work for our needs.
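For illustration, here's roughly how the DEX program might read the answer back after the CPI returns – assuming the plugin state account holds exactly one serialized PluginCalcResult, and with an illustrative import path:

use borsh::BorshDeserialize;
use solana_program::{account_info::AccountInfo, program_error::ProgramError};

use constant_product_plugin::state::PluginCalcResult; // path is illustrative

// Deserialize the plugin's computed result out of its state account.
fn read_plugin_result(
    plugin_state_acc: &AccountInfo,
) -> Result<PluginCalcResult, ProgramError> {
    let data = plugin_state_acc.data.borrow();
    PluginCalcResult::try_from_slice(&data).map_err(|_| ProgramError::InvalidAccountData)
}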
Once we have figured out the contracts/programs, state and accounts, and messages/instructions, our main architectural work is done.
Now, we implement. Then, we test.
CosmWasm "Create Pool" Execute Handler
Let's start with a look at our handling code for creating a new liquidity pool.
This is the only one of our core functions that involves both contracts:
- CreatePool is called on the DEX Factory contract.
- The DEX Factory contract calls instantiate, creating a new pool-specific instance of the Pool Constant Product contract.
- The DEX Factory handles the reply of the successful instantiation, allowing it to store the pool contract's address in its registry for easy lookup.
When an execute message is received by our CosmWasm contract, it hits the execute
entry point – unlike Solana programs, CosmWasm contracts have several entry points.
Our execute()
function then matches the specific message:
#[entry_point]
pub fn execute(
deps: DepsMut,
env: Env,
info: MessageInfo,
msg: ExecuteMsg,
) -> Result<Response, ContractError> {
match msg {
ExecuteMsg::CreatePool {
denom_a,
denom_b,
pool_logic_code_id,
} => execute_create_pool(deps, env, info, denom_a, denom_b, pool_logic_code_id),
// ... other match arms
}
}
This function returns a Result<Response, ContractError>, where ContractError is our own custom enum of errors held in error.rs. The simplest pattern is to have the handlers likewise return this same type. So, we pass execution off to execute_create_pool.
We give this handler access to deps
(storage and API functions), env
(block and contract information), and info
(message sender and funds information), as well as the parameters that were included with the message: denom_a
and denom_b
, which are String
items specifying the new pool's assets, as well as pool_logic_code_id
, which can override our default constant product code so that pools can be created with custom logic.
In order to create a custom pool, then, a deployer would first compile and upload (store) WASM on the blockchain whose instantiate interface matches, at minimum, our PoolContractInstantiateMsg. Then, anyone could call CreatePool and specify the new code ID.
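In Rust terms, that call might be formed like so (a sketch – the denoms and code ID are placeholders, and the import path is illustrative):

use cosmwasm_std::{to_json_binary, CosmosMsg, StdResult, WasmMsg};

use crate::msg::ExecuteMsg; // the factory ExecuteMsg shown above; path is illustrative

// Ask the factory to create a pool backed by a custom pool-logic code ID.
fn make_create_pool_msg(factory_addr: String) -> StdResult<CosmosMsg> {
    Ok(CosmosMsg::Wasm(WasmMsg::Execute {
        contract_addr: factory_addr,
        msg: to_json_binary(&ExecuteMsg::CreatePool {
            denom_a: "uatom".to_string(),
            denom_b: "uosmo".to_string(),
            pool_logic_code_id: 42, // placeholder code ID
        })?,
        funds: vec![], // CreatePool rejects any attached funds
    }))
}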
On to our handler. We'll use a composite pool_key as the key that specifies a location in the contract's map-style state storage. If we build this key from the pool denoms and the code_id, it will allow us to enforce "one pool per code ID and denom pair." We can then validate this and other items:
- No identical denoms: a pool cannot swap from an asset to itself
- Pool doesn't already exist
- Pool isn't pending creation
- Funds weren't included (we could allow initial funds to be sent in, but let's add that to the backlog)
If this validation passes, the PoolContractInstantiateMsg is formed and we add it as a submessage to our returned Response, using .add_submessage(). This results in the virtual machine executing this submessage after our current message is complete.
pub(crate) fn execute_create_pool(
deps: DepsMut,
env: Env,
info: MessageInfo,
denom_a: String,
denom_b: String,
pool_logic_code_id: u64,
) -> Result<Response, ContractError> {
if denom_a == denom_b {
return Err(ContractError::IdenticalDenoms {});
}
let pool_key_denoms = get_ordered_denoms_state(denom_a.clone(), denom_b.clone());
let cfg = CONFIG.load(deps.storage)?;
let pool_key = (
pool_key_denoms.0.clone(),
pool_key_denoms.1.clone(),
pool_logic_code_id,
);
if POOLS.may_load(deps.storage, pool_key.clone())?.is_some() {
return Err(ContractError::PoolAlreadyExists {
denom1: pool_key.0,
denom2: pool_key.1,
});
}
if PENDING_POOL_INSTANCE.may_load(deps.storage)?.is_some() {
return Err(ContractError::PoolCreationPending {});
}
if !info.funds.is_empty() {
return Err(ContractError::FundsSentOnCreatePool {});
}
let instantiate_pool_msg = PoolContractInstantiateMsg {
denom_a: pool_key.0.clone(),
denom_b: pool_key.1.clone(),
lp_token_code_id: cfg.default_lp_token_code_id,
factory_addr: env.contract.address.clone(),
};
let submsg = SubMsg::reply_on_success(
WasmMsg::Instantiate {
admin: Some(env.contract.address.to_string()),
code_id: pool_logic_code_id,
msg: to_json_binary(&instantiate_pool_msg)?,
funds: vec![],
label: format!(
"DEX Pool-{}-{} (Logic {})",
pool_key.0, pool_key.1, pool_logic_code_id
),
},
INSTANTIATE_POOL_REPLY_ID,
);
PENDING_POOL_INSTANCE.save(deps.storage, &pool_key)?;
Ok(Response::new()
.add_submessage(submsg)
.add_attribute("action", "create_pool_instance")
.add_attribute("pool_logic_code_id", pool_logic_code_id.to_string())
.add_attribute("denom_a", pool_key.0)
.add_attribute("denom_b", pool_key.1))
}
The submessage uses SubMsg::reply_on_success, so we'll receive a reply once it's complete. We'll look at the reply handling at the end of this section.
The attribute
items are just for general information: these will be visible all the way through to the block explorer's display of the transaction.
Control now moves away from our DEX Factory contract. Its job was to validate the CreatePool request, create the correct PoolContractInstantiateMsg, and attach it to its Response. Now, the instantiate handler of our Pool Contract will be triggered:
#[entry_point]
pub fn instantiate(
deps: DepsMut,
env: Env,
_info: MessageInfo,
msg: InstantiateMsg,
) -> Result<Response, ContractError> {
crate::execute::execute_instantiate(deps, env, _info, msg)
}
Nothing very special here – just passing on to execute_instantiate. Unlike our execute() handlers, this one doesn't even need to do any matching. Notice how we pass along our InstantiateMsg – that's what our DEX Factory contract formulated earlier. Here's the handler:
pub(crate) fn execute_instantiate(
deps: DepsMut,
env: Env,
_info: MessageInfo,
msg: InstantiateMsg,
) -> Result<Response, ContractError> {
let factory_addr = deps.api.addr_validate(&msg.factory_addr)?;
let (denom_a, denom_b) = {
if msg.denom_a < msg.denom_b {
(msg.denom_a.clone(), msg.denom_b.clone())
} else {
(msg.denom_b.clone(), msg.denom_a.clone())
}
};
RESERVE_A.save(deps.storage, &Uint128::zero())?;
RESERVE_B.save(deps.storage, &Uint128::zero())?;
let sub_msg = create_lp_instantiate_submsg(msg.lp_token_code_id, &env, &denom_a, &denom_b)?;
let cfg = PoolConfig {
factory_addr,
denom_a: denom_a.clone(),
denom_b: denom_b.clone(),
lp_token_addr: Addr::unchecked(""),
};
POOL_CONFIG.save(deps.storage, &cfg)?;
cw2::set_contract_version(deps.storage, CONTRACT_NAME, CONTRACT_VERSION)?;
Ok(Response::new()
.add_submessage(sub_msg)
.add_attribute("action", "instantiate_pool_contract")
.add_attribute("factory", msg.factory_addr)
.add_attribute("denom_a", denom_a)
.add_attribute("denom_b", denom_b)
.add_attribute("lp_token_code_id", msg.lp_token_code_id.to_string()))
}
Another submessage
! This will instantiate the actual LP token contract. First, the denoms
are sorted to avoid errors in some cases later. Then, we save all our pool config to our pool contract's state. But the pool will be the minter of its very own CW20 token, representing LP shares, so it needs to instantiate that:
pub(crate) fn create_lp_instantiate_submsg(
lp_token_code_id: u64,
env: &Env,
denom1: &str,
denom2: &str,
) -> StdResult<SubMsg> {
let token_name = format!("{}-{} LP", denom1, denom2);
let token_symbol = format!(
"LP-{}{}",
denom1.chars().next().unwrap_or('X'),
denom2.chars().next().unwrap_or('Y')
)
.to_uppercase();
let decimals = 6u8;
let lp_instantiate_msg = cw20_base::msg::InstantiateMsg {
name: token_name.clone(),
symbol: token_symbol.clone(),
decimals,
initial_balances: vec![],
mint: Some(MinterResponse {
minter: env.contract.address.to_string(),
cap: None,
}),
marketing: None,
};
let submsg = WasmMsg::Instantiate {
admin: Some(env.contract.address.to_string()),
code_id: lp_token_code_id,
msg: to_json_binary(&lp_instantiate_msg)?,
funds: vec![],
label: format!("DEX LP {}-{}", denom1, denom2),
};
Ok(SubMsg::reply_on_success(submsg, INSTANTIATE_LP_REPLY_ID))
}
We have another SubMsg::reply_on_success! So we'll have to look at not just one reply handler, but two.
Here's how our Pool Contract handles the reply it gets when the LP Token Contract instantiates. There's a little bit of boilerplate here, but in short, it needs to save the new token contract address to its own POOL_CONFIG so that it can be aware of it. Otherwise, it won't be able to govern it later.
use cosmwasm_std::{Addr, DepsMut, Reply, Response, StdError, StdResult};
use cw_utils::parse_instantiate_response_data;
use crate::error::ContractError; // assuming errors live in error.rs, as described earlier
use crate::state::{INSTANTIATE_LP_REPLY_ID, POOL_CONFIG};
pub fn handle_lp_instantiate_reply(deps: DepsMut, msg: Reply) -> Result<Response, ContractError> {
if msg.id != INSTANTIATE_LP_REPLY_ID {
return Err(ContractError::UnknownReplyId { id: msg.id });
}
let result = msg.result.into_result().map_err(StdError::generic_err)?;
#[allow(deprecated)]
let data = result.data.ok_or(ContractError::MissingReplyData {})?;
let res = parse_instantiate_response_data(&data)?;
println!(
"[reply] Received contract_address in reply data: {}",
res.contract_address
);
#[cfg(not(test))]
let lp_token_addr = deps.api.addr_validate(&res.contract_address)?;
#[cfg(test)]
let lp_token_addr = Addr::unchecked(&res.contract_address);
// Update config with the LP token address
POOL_CONFIG.update(deps.storage, |mut cfg| -> StdResult<_> {
// Safety check: ensure lp_token_addr is not already set
// This prevents potential issues if reply is somehow triggered twice
if cfg.lp_token_addr != Addr::unchecked("") {
return Err(StdError::generic_err("LP token address already set"));
}
cfg.lp_token_addr = lp_token_addr.clone();
Ok(cfg)
})?;
Ok(Response::new()
.add_attribute("action", "lp_token_instantiated")
.add_attribute("lp_token_address", lp_token_addr))
}
The msg.id
here is simply a number indicating the identity of this reply. If our contract were more complicated, it could be instantiating multiple kinds of other contracts and generally interacting with still more. Internal constants or enums of u64
values help keep everything organized, so that the reply message's origin can be quickly identified.
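For reference, these IDs are nothing more than u64 constants – something like the following (the actual values in the repo may differ):

// Reply identifiers – values are illustrative.
pub const INSTANTIATE_POOL_REPLY_ID: u64 = 1; // factory waiting on a new pool contract
pub const INSTANTIATE_LP_REPLY_ID: u64 = 2; // pool waiting on its new LP (cw20) token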
As for the content of Reply-type messages, CosmWasm is moving away from data toward msg_responses. However, data is still included, and chains running older versions of CosmWasm (before 2.x) must use data, so we still use it here. The data is parsed out to get, in this case, its contract_address member.
Our reply handler in our DEX Factory contract is similar, and it also only exists in order to save some information in its state:
pub fn handle_lp_instantiate_reply(deps: DepsMut, msg: Reply) -> Result<Response, ContractError> {
if msg.id != INSTANTIATE_POOL_REPLY_ID {
return Err(ContractError::UnknownReplyId { id: msg.id });
}
let result = msg.result.into_result().map_err(StdError::generic_err)?;
#[allow(deprecated)]
let data = result.data.ok_or(ContractError::MissingReplyData {})?;
let res = parse_instantiate_response_data(&data)?;
let pool_contract_addr = deps.api.addr_validate(&res.contract_address)?;
let pool_key = PENDING_POOL_INSTANCE.load(deps.storage)?;
POOLS.save(deps.storage, pool_key.clone(), &pool_contract_addr)?;
PENDING_POOL_INSTANCE.remove(deps.storage);
Ok(Response::new()
.add_attribute("action", "pool_instance_created")
.add_attribute("pool_contract_address", pool_contract_addr.to_string())
.add_attribute("denom_a", pool_key.0)
.add_attribute("denom_b", pool_key.1)
.add_attribute("pool_logic_code_id", pool_key.2.to_string()))
}
This is the final Response in our CreatePool flow.
Now, sending the CreatePool execute message will instantiate two new contracts and give us the liquidity pool address we need for our other actions. If we don't catch it here, we can always query the DEX Factory contract with QueryMsg::PoolAddress and ask for it.
Solana "Create Pool" Instruction Processor
Despite the general architectural similarities, the implementation of InitializePool
is quite different.
Although I'd like to use the same name, I picked different names between Solana and CosmWasm core functions for clarity.
InitializePool
in the Solana DEX fulfills the same purpose asCreatePool
in the CosmWasm DEX.
Solana programs have just one entry point, and here's ours for the dex-pool-program
:
use solana_program::entrypoint;
use solana_program::{account_info::AccountInfo, entrypoint::ProgramResult, pubkey::Pubkey};
use crate::processor::Processor;
entrypoint!(process_instruction);
fn process_instruction(
program_id: &Pubkey,
accounts: &[AccountInfo],
instruction_data: &[u8],
) -> ProgramResult {
Processor::process(program_id, accounts, instruction_data)
}
Of course, much like our CosmWasm execute entry point, this does very little. It doesn't even match on anything – that's deferred to our Processor struct's process function:
pub struct Processor;
impl Processor {
pub fn process(
program_id: &Pubkey,
accounts: &[AccountInfo],
instr_data: &[u8],
) -> ProgramResult {
let instruction = PoolInstruction::try_from_slice(instr_data)
.map_err(|_| PoolError::InvalidInstructionData)?;
match instruction {
PoolInstruction::InitializePool =>
Self::process_initialize_pool(program_id, accounts),
// ... other match arms here
}
}
// ...
}
This parsing is required because, unlike CosmWasm's default entry points with typed messages which derive Deserialize
, the Solana entry point takes binary instruction data and then must handle parsing of it.
We could have used something other than
serde::Deserialize
in CosmWasm, and some people usebincode
, butserde
is standard practice.
Once the instruction
is parsed, it can be matched, and then passed off to its handler function (or "processor"). In this case, that's process_initialize_pool
.
You'll see some similarities here to the CosmWasm implementation. We sort the asset addresses (here these are mint addresses, not denoms). And while we don't create a lookup key from the pool information, we instead derive the pool PDA directly from it – giving us the same result of "only one pool for any given asset pair and pool type."
We also create the PoolState account if it doesn't exist. On Solana, creating an account on-chain is not just a simple calculation of an address, as on CosmWasm – the rent-exemption amount must be paid. You can think of this as an additional category of fee that is covered by the user (locally called the payer_acc).
We could also create other accounts like vault_a_acc
if they didn't exist, but to keep the implementation shorter and clearer, I've required that the client do this ahead of time. However, we will still need to validate that these are the proper PDAs – if the program can't send from these vault accounts, it won't function correctly.
We then write the data we want to store in PoolState
directly into the account, using copy_from_slice
.
Note that this version has no validation! We will add it in the next section of this article.
fn process_initialize_pool(program_id: &Pubkey, accounts: &[AccountInfo]) -> ProgramResult {
msg!("Pool: process_initialize_pool entry");
let acc_iter = &mut accounts.iter();
let payer_acc = next_account_info(acc_iter)?; // 0
let pool_state_acc = next_account_info(acc_iter)?; // 1
let vault_a_acc = next_account_info(acc_iter)?; // 2
let vault_b_acc = next_account_info(acc_iter)?; // 3
let lp_mint_acc = next_account_info(acc_iter)?; // 4
let mint_a_acc = next_account_info(acc_iter)?; // 5
let mint_b_acc = next_account_info(acc_iter)?; // 6
let plugin_prog_acc = next_account_info(acc_iter)?; // 7
let plugin_state_acc = next_account_info(acc_iter)?; // 8
let system_acc = next_account_info(acc_iter)?; // 9
let rent_acc = next_account_info(acc_iter)?; // 10
let rent = Rent::from_account_info(rent_acc)?;
// Sort the mint addresses
let (sort_mint_a, sort_mint_b) = if mint_a_acc.key < mint_b_acc.key {
(mint_a_acc.key, mint_b_acc.key)
} else {
(mint_b_acc.key, mint_a_acc.key)
};
    // Derive the pool PDA bump up front (full validation of the PDA and the
    // other accounts is added in the next section)
    let (_expected_pool_pda, bump) = Pubkey::find_program_address(
        &[
            b"pool",
            sort_mint_a.as_ref(),
            sort_mint_b.as_ref(),
            plugin_prog_acc.key.as_ref(),
            plugin_state_acc.key.as_ref(),
        ],
        program_id,
    );
    // Construct initial PoolState data to get its serialized size
let initial_pool_data = PoolState {
token_mint_a: *mint_a_acc.key,
token_mint_b: *mint_b_acc.key,
vault_a: *vault_a_acc.key,
vault_b: *vault_b_acc.key,
lp_mint: *lp_mint_acc.key,
total_lp_supply: 0,
bump,
plugin_program_id: *plugin_prog_acc.key,
plugin_state_pubkey: *plugin_state_acc.key,
};
let pool_data_bytes = initial_pool_data.try_to_vec()?;
let pool_space = pool_data_bytes.len(); // Use serialized length
let needed_lamports = rent.minimum_balance(pool_space);
msg!(
" Space (serialized): {}, Lamports: {}",
pool_space,
needed_lamports
);
invoke_signed(
&system_instruction::create_account(
payer_acc.key,
pool_state_acc.key,
needed_lamports,
pool_space as u64, // Use serialized size
program_id, // Owner is self
),
// Accounts for the CPI call itself
&[
payer_acc.clone(),
pool_state_acc.clone(), // The account being created
system_acc.clone(),
],
// Seeds for signing as the PDA
&[&[
b"pool",
sort_mint_a.as_ref(),
sort_mint_b.as_ref(),
plugin_prog_acc.key.as_ref(),
plugin_state_acc.key.as_ref(),
&[bump],
]],
)?;
msg!("Pool: invoke_signed successful.");
let mut account_data_borrow = pool_state_acc.data.borrow_mut();
account_data_borrow.copy_from_slice(&pool_data_bytes);
Ok(())
}
Now, this function is much too monolithic for my taste – I'd rather have nice, small, clearly-named helper functions – but passing around AccountInfo items can introduce lifetime requirements that I'd prefer not to burden you with.
That said, this is a less complex affair than our CosmWasm implementation. One consequence, though, is that the client must do more work in order to prepare and pass in the correct accounts – you can see all of this work in the liteSVM tests (in /tests/tests/integration.rs), and we'll explore it directly in the next article.
Despite the client's zeal in passing in ready accounts, though, the program must ensure that the client is not trying to cheat. For example, the client could try to create a pool account with vault
accounts that aren't actually controlled by the pool program but are controlled by the client – an attempt to have control over any deposited liquidity funds.
So, we must ensure that all of the relevant accounts are correct and valid. For this, I will happily use little helper functions.
Here's our required validation, which we'll do in basically reverse order. The payer_acc validation is overkill, but it serves to demonstrate is_signer:
// – Initial Validations –
msg!("Pool Init: Validating accounts...");
// 0. Payer must sign
if !payer_acc.is_signer {
msg!("Payer did not sign");
return Err(PoolError::MissingRequiredSignature.into());
}
// 9. System Program ID
validate_program_id(system_acc, &solana_program::system_program::id())?;
// 10. Rent Sysvar ID
validate_program_id(rent_acc, &solana_program::sysvar::rent::id())?;
let rent = Rent::from_account_info(rent_acc)?;
// 11. Token Program ID
validate_program_id(token_prog_acc, &spl_token::id())?;
// 7. Plugin Program Account (Executable? Owned by Loader?)
validate_executable(plugin_prog_acc)?;
// 8. Plugin State Account (Rent-exempt?)
validate_rent_exemption(plugin_state_acc, &rent)?;
// 5 & 6: Mint A & B must be different
if mint_a_acc.key == mint_b_acc.key {
msg!("Mint A and Mint B cannot be the same");
return Err(PoolError::MintsMustBeDifferent.into());
}
// – PDA Derivation & Validation –
msg!("Pool Init: Deriving pool PDA...");
// Sort the mint addresses
let (sort_mint_a, sort_mint_b) = if mint_a_acc.key < mint_b_acc.key {
(mint_a_acc.key, mint_b_acc.key)
} else {
(mint_b_acc.key, mint_a_acc.key)
};
// Derive the pool PDA
let seeds = &[
b"pool",
sort_mint_a.as_ref(),
sort_mint_b.as_ref(),
plugin_prog_acc.key.as_ref(),
plugin_state_acc.key.as_ref(),
];
let (expected_pool_pda, bump) = Pubkey::find_program_address(seeds, program_id);
if &expected_pool_pda != pool_state_acc.key {
msg!(
"Pool ERROR: Expected pool pda {}, got {}",
expected_pool_pda,
pool_state_acc.key
);
return Err(PoolError::IncorrectPoolPDA.into());
}
// – Mint & Vault Validations (using PDA and Rent) –
msg!("Pool Init: Validating Mints and Vaults...");
// 5. Mint A (Generic Mint Checks)
validate_generic_mint(mint_a_acc, &rent)?;
// 6. Mint B (Generic Mint Checks)
validate_generic_mint(mint_b_acc, &rent)?;
// 4. LP Mint (Specific LP Mint Checks)
validate_lp_mint(lp_mint_acc, &expected_pool_pda, &rent)?;
// 2. Vault A
validate_vault_account(vault_a_acc, &expected_pool_pda, mint_a_acc.key, &rent)?;
// 3. Vault B
validate_vault_account(vault_b_acc, &expected_pool_pda, mint_b_acc.key, &rent)?;
msg!("Pool Init: All account validations passed.");
Using helper functions here helps us keep things readable. I don't need to give you all of them – you can see pda.rs in the codebase – but let's look at one of them.
pub fn validate_vault_account(
vault_info: &AccountInfo,
expected_owner_pda: &Pubkey,
expected_mint: &Pubkey,
rent: &Rent,
) -> Result<(), ProgramError> {
// – Check 1: Is the vault account key the correct derived ATA? –
let expected_vault_ata = get_associated_token_address(expected_owner_pda, expected_mint);
if vault_info.key != &expected_vault_ata {
msg!(
"Vault ATA Error: Expected {}, got {}",
expected_vault_ata,
vault_info.key
);
return Err(PoolError::IncorrectVaultATA.into());
}
// – Check 2: Ownership by Token Program –
if vault_info.owner != &TOKEN_PROGRAM_ID {
msg!(
"Vault Error: Account {} owned by {}, expected {}",
vault_info.key,
vault_info.owner,
TOKEN_PROGRAM_ID
);
return Err(PoolError::InvalidAccountData.into());
}
// Checking rent exemption is overkill, but if you wanted to...
// validate_rent_exemption(vault_info, rent)?;
// Unpack token account data
let token_account_data = TokenAccount::unpack(&vault_info.data.borrow())
.map_err(|_| PoolError::UnpackAccountFailed)?;
// Check if initialized (state check)
if token_account_data.state != AccountState::Initialized {
msg!("Vault Error: Account {} is not initialized", vault_info.key);
return Err(PoolError::InvalidAccountData.into());
}
// Check owner field inside the token account data
if &token_account_data.owner != expected_owner_pda {
msg!("Vault Error: Account {} owner {} does not match expected PDA {}", vault_info.key, token_account_data.owner, expected_owner_pda);
return Err(PoolError::InvalidVaultOwner.into());
}
// Check mint
if &token_account_data.mint != expected_mint {
msg!("Vault Error: Account {} mint {} does not match expected mint {}", vault_info.key, token_account_data.mint, expected_mint);
return Err(PoolError::TokenMintMismatch.into());
}
Ok(())
}
Here's a key practical difference: Solana programs push more work off-chain to the client in order to support their storage-account model and their storage and compute limits. CosmWasm doesn't allow for parallelization, but it can more easily accomplish complex actions on-chain and has a more intuitive storage model, so the client side is easier.
Anyway, for this process to function, addresses like that of our constant_product_plugin program must be passed in to the instruction – meaning that before we even call InitializePool, we must have deployed the plugin program. Unlike CosmWasm, however, this is a one-time action: we deploy the program once, and then it handles logic for ALL pools of that type. On CosmWasm, even though the same code handles all pools, each individual pool has a different instance of that code (and thus a different address).
In other words, on CosmWasm, each pool has its own smart contract address.
On Solana, each pool has its own storage account addresses, but the pool program address itself is the same for ALL pools of the same type.
Now, what about validation in the constant product plugin program?
We have two possible approaches here: validate the accounts again in the plugin program, or only accept calls from our DEX program, making the DEX the single place where validation occurs. That way, the constant_product_plugin knows that incoming accounts have been pre-validated.
To do this, the constant_product_plugin must be made aware of the DEX program's program ID – but in exchange, we eliminate the possibility of the plugin being called some other way.
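One common way to enforce that (a sketch, not necessarily how the repo does it): have the DEX pass its pool state PDA as a signer on the CPI. Only the DEX program can make one of its own PDAs sign via invoke_signed, so a signing account owned by the expected DEX program ID proves where the call came from:

use solana_program::{account_info::AccountInfo, program_error::ProgramError, pubkey::Pubkey};

// Hypothetical guard inside the plugin: accept the call only if the pool state
// PDA (owned by the known DEX program) signed, which only the DEX program can
// arrange via invoke_signed.
pub fn assert_called_by_dex(
    pool_state_acc: &AccountInfo,
    expected_dex_program_id: &Pubkey,
) -> Result<(), ProgramError> {
    if pool_state_acc.owner != expected_dex_program_id {
        return Err(ProgramError::IllegalOwner);
    }
    if !pool_state_acc.is_signer {
        return Err(ProgramError::MissingRequiredSignature);
    }
    Ok(())
}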
In the next article, we'll finish the implementation. Then, we write thorough tests – including multi-contract tests.
Through this example, we began to see how an application with multiple contracts/programs is put together on each platform. CosmWasm favors message passing and storage, whereas Solana/Anchor uses accounts and explicit invocations. Yet the high-level outcome is similar.
In the meantime, you can explore the complete code in the Dopple Dex repo:
GitHub - rustopian/dopple-dex: Modular AMM DEX implemented in both CosmWasm and Solana.