
Summary of the common smart contracts vulnerabilities

Introduction

Cryptocurrencies and blockchain technology gained a lot of attention in the last year due to increasing mainstream adoption and new use cases. Thanks to hundreds of completed ICOs launched on the Ethereum platform, Solidity is one of the most popular languages for smart contract development. With billions of dollars at play and a relatively low level of smart contract security awareness, smart contracts written in Solidity have been successfully exploited by malicious users, and hundreds of millions of dollars worth of crypto funds have been stolen.

The goal of this article is to highlight frequent security vulnerabilities of contracts written in the Solidity language and explain how to identify and mitigate them.

 

Dangerous assumptions

Here is a list of assumptions a smart contract developer should NOT make to stay out of trouble.

 

“No one can send funds to my contract unless I allow them to”

In Solidity, for a contract to be able to receive funds, at least one function has to be marked as ‘payable’. However, there are two special ways someone can “force send” funds to your contract even if none of the functions in your contract is marked as ‘payable’:

  • Selfdestruct – anyone can call selfdestruct(<your_contract_address>) in their own contract, which sends that contract’s balance to your contract (see the sketch below the list)
  • Mining reward – a miner can set the address of your contract as the mining reward address; when the miner successfully mines a block, your contract is credited with the mining reward
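To make the selfdestruct case concrete, here is a minimal sketch (contract and function names are illustrative) of how anyone can force-send ether to an arbitrary address:

pragma solidity ^0.4.22;

contract ForceSend {
    function forceSend(address _target) public payable {
        // selfdestruct transfers this contract's entire balance (including the
        // ether just received with this call) to _target, bypassing any payable
        // checks or missing fallback function on the target contract
        selfdestruct(_target);
    }
}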

Lesson learned: Never assume that the balance of your contract is 0, because anybody can send funds to your contract even if you haven’t explicitly allowed it to receive funds.

How to mitigate: This characteristic of Ethereum smart contracts cannot be avoided, so never rely on strict checks of the contract’s balance (e.g., this.balance == 0) in your contract’s logic.

 

“No one can read the value of my private variable”

A variable in Solidity is either public or private. The value of a public variable is accessible to any contract via a getter function automatically created by the compiler. Marking a variable as private prevents other contracts from accessing and modifying it, but the variable is still stored on the blockchain, so its value is visible to everyone.

Let’s use the following contract to demonstrate how to read the value of a private variable.

pragma solidity ^0.4.22;

contract GuessingGame {
 address public winner;
 int private secretNumber;

 constructor(int _secretNumber) public {
   secretNumber = _secretNumber;
 }

 function guess(int _secretNumber) public {
   require(winner == address(0));
   require(secretNumber == _secretNumber);
   winner = msg.sender;
 }
}

If you know the address of the contract instance, then the value of secretNumber can be easily read via the web3.js library.

// create callback to display returned value
var callback = function(error, result){
   if(!error)
       window.alert(result);
   else
       window.alert(error);
};
// read value from contract’s storage
web3.eth.getStorageAt('<contract address>', 1, callback);

The web3.js getStorageAt function reads the value from the contract’s storage and returns a hexadecimal representation of the variable’s value. It is necessary to convert the returned value from hex to decimal representation to get the integer value of secretNumber.

Note that the value of the second parameter of the getStorageAt function is 1 because we are interested in the value of the second state variable of the GuessingGame contract instance. To retrieve the value of winner (the first state variable) via getStorageAt, we would call it with 0 as the value of the second parameter. Here you can read more about how to read Ethereum contract storage.

Lesson learned: All information stored on the blockchain is publicly visible, including all contract state variables.

How to mitigate: One way to work around this limitation/feature of the blockchain is to store a hashed or encrypted secretNumber instead of the plain integer, as sketched below. An example of how to “hide” information on blockchain can be found here.
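Here is a minimal sketch of this mitigation for the GuessingGame contract above (the hashed variant and its names are illustrative). The hash of the secret is computed off-chain and only the hash is stored on the blockchain. Note that a hash of a small number can still be brute-forced, so in practice a secret salt should be mixed into the hash as well.

pragma solidity ^0.4.22;

contract HashedGuessingGame {
 address public winner;
 bytes32 private secretNumberHash;

 // deploy with keccak256(secretNumber) computed off-chain
 constructor(bytes32 _secretNumberHash) public {
   secretNumberHash = _secretNumberHash;
 }

 function guess(int _secretNumber) public {
   require(winner == address(0));
   // compare hashes instead of plain values
   require(keccak256(_secretNumber) == secretNumberHash);
   winner = msg.sender;
 }
}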

 

“tx.origin is the same as msg.sender”

There are two variables – tx.origin and msg.sender – in the Solidity contract’s global namespace which look very similar, but interchanging them might lead to a severe security vulnerability. tx.origin returns the address which initiated the current transaction. On the other hand, msg.sender returns the address which originated the current message call.

The following contracts demonstrate the difference between a transaction and a message call.

pragma solidity ^0.4.22;

contract A {
 function functionA(address _otherContractAddress) public {
   B contractB = B(_otherContractAddress);
   contractB.functionB();
 }
}

contract B {
 function functionB() public {
   // do something
 }
}

There are 3 different addresses relevant for our example:

  • <address1> – the address which executes functionA on the instance of contract A
  • <address2> – the address of the contract A instance
  • <address3> – the address of the contract B instance

 

When <address1> calls functionA on the instance of contract A with the _otherContractAddress parameter equal to <address3>, then in functionA both tx.origin and msg.sender will be equal to <address1>. When functionA then calls functionB on the instance of contract B, tx.origin will be <address1> and msg.sender will be <address2>.
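To observe this difference on-chain, contract B could record both addresses; a sketch with an illustrative contract name:

pragma solidity ^0.4.22;

contract BWithRecording {
 address public lastTxOrigin;
 address public lastMsgSender;

 function functionB() public {
   lastTxOrigin = tx.origin;   // <address1> – the account that initiated the transaction
   lastMsgSender = msg.sender; // <address2> – contract A, the direct caller
 }
}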

Let’s take a look at how authorization via tx.origin can be exploited on a simplified version of a standard token contract.

pragma solidity ^0.4.22;

contract Token {

 mapping(address => uint) balances;

 function transfer(address _to, uint _value) public {
   // checks to make sure that tx.origin has enough tokens
   // …

   balances[tx.origin] -= _value;
   balances[_to] += _value;
 }
}

 

The following contract can be used to steal the victim’s tokens.

 

pragma solidity ^0.4.22;

interface Token {
   function transfer(address _to, uint _value) external;
}

contract MaliciousContract {

 address attackerAddress;
 Token contractToAttack;

 constructor(address _contractToAttack) public {
     contractToAttack = Token(_contractToAttack);
     attackerAddress = msg.sender;
 }

 // fallback function
 function () public payable {
     contractToAttack.transfer(attackerAddress, 10000);
 }
}

If an attacker tricks the victim into sending some funds to the address of the MaliciousContract instance, then the fallback function of MaliciousContract will call the transfer function of the Token contract instance and transfer the victim’s tokens to the attacker’s address. In the transfer function, tx.origin will be the victim’s address, because the victim initiated this transaction by sending funds to the MaliciousContract instance, while msg.sender will be the malicious contract’s address.

Lesson learned: tx.origin is not always the same as msg.sender.

How to mitigate: Never use tx.origin for authorization; authorize against msg.sender instead, as sketched below.
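A sketch of the fixed token contract, authorizing against msg.sender (overflow/underflow checks are omitted for brevity; see the section on numbers below):

pragma solidity ^0.4.22;

contract FixedToken {

 mapping(address => uint) balances;

 function transfer(address _to, uint _value) public {
   // msg.sender is the direct caller, so a malicious intermediate contract
   // can only spend its own balance, never the victim's
   require(balances[msg.sender] >= _value);
   balances[msg.sender] -= _value;
   balances[_to] += _value;
 }
}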

 

“Sending funds is always successful”

Solidity offers the following functions to send funds:

  • transfer – throws on failure, forwards 2300 gas stipend, not adjustable
  • send – returns false on failure, forwards 2300 gas stipend, not adjustable
  • call – returns false on failure, forwards all available gas, adjustable

The difference between the listed functions is how they behave on failure and how much gas they forward, as the sketch below illustrates.
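A minimal sketch of the three mechanisms side by side (contract and function names are illustrative):

pragma solidity ^0.4.22;

contract SendExamples {
    function viaTransfer(address _to) public payable {
        // throws on failure, forwards a 2300 gas stipend
        _to.transfer(msg.value);
    }

    function viaSend(address _to) public payable {
        // returns false on failure, forwards a 2300 gas stipend
        require(_to.send(msg.value));
    }

    function viaCall(address _to) public payable {
        // returns false on failure, forwards all remaining gas by default
        require(_to.call.value(msg.value)());
    }
}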

Let’s use the following simplified auction contract to illustrate why it can be dangerous to assume that sending funds is always successful.

pragma solidity ^0.4.22;

contract Auction {
   uint public highestBid;
   address public highestBidder;

   function bid() public payable {
       require(msg.value > highestBid);

       if (highestBid > 0) {
           // send money back to current highest bidder
           require(highestBidder.send(highestBid));
       }

       // store new highest bidder
       highestBidder = msg.sender;
       highestBid = msg.value;
   }
}

This contract would work correctly if only EOAs (Externally Owned Accounts) were bidding, but a malicious contract can prevent other people from bidding by intentionally refusing incoming funds. There are a few ways a contract can refuse funds:

  • revert() or throw (via require() or assert()) in the fallback function
  • intentionally running out of gas in the fallback function
  • omitting the payable keyword for the fallback function

If a malicious contract is currently the highest bidder, then no one else is able to place a bid, because require(highestBidder.send(highestBid)) always throws; this way the malicious contract ensures that it wins the auction.

Here is an example of how such a malicious contract might look:

pragma solidity ^0.4.22;

interface Auction {
   function bid() external payable;
}

contract MaliciousContract {

   function bid(address auctionAddress) public payable {
       Auction auction = Auction(auctionAddress);
       auction.bid.value(msg.value)();
   }

   function () public payable {
       revert();
   }
}

Real World Example: Some participants did not get compensated because the creators of the King of the Ether pyramid scheme forgot to check the return value of a send call.

Lesson learned:

  • Never assume that sending funds is always successful.
  • Always check the return value from low-level call functions (call, callcode, delegatecall and send)

Tip: Avoid invoking too much logic (e.g., multiple send calls) in one transaction, because your transaction might run out of gas. 1100 ETH (~8k USD at the time) got stuck in limbo because the GovernMental Ponzi scheme contract was programmed to iterate over a growing array of integers in the jackpot payout procedure. Eventually, the array became too long and the transaction always ran out of gas.

How to mitigate:

  1. Always handle the possibility that an external call can fail by checking the return value when a low-level call method (call, callcode, delegatecall and send) is used.
  2. It is recommended to use the Withdrawal pattern (also known as “favor pull over push for external calls”) and only let the user withdraw funds after the fact instead of sending money right away; a minimal sketch of this pattern follows below the list. Some users might complain that withdrawing funds is an additional interaction with the contract which affects usability, but it is ultimately up to the contract creator to decide whether the additional security is worth the sacrifice in usability. This article offers an interesting analysis of user preferences when it comes to getting funds out of contracts.
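Here is a minimal sketch of the Withdrawal pattern applied to the Auction contract above (the contract name and the pendingReturns variable are illustrative):

pragma solidity ^0.4.22;

contract PullAuction {
    uint public highestBid;
    address public highestBidder;
    mapping(address => uint) public pendingReturns;

    function bid() public payable {
        require(msg.value > highestBid);

        if (highestBid > 0) {
            // record the refund instead of pushing it; a contract that refuses
            // incoming funds can no longer block new bids
            pendingReturns[highestBidder] += highestBid;
        }

        highestBidder = msg.sender;
        highestBid = msg.value;
    }

    function withdraw() public {
        uint amount = pendingReturns[msg.sender];
        require(amount > 0);
        pendingReturns[msg.sender] = 0; // adjust state before the external call
        msg.sender.transfer(amount);
    }
}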

 

“Numbers behave as expected”

There are two types (with four subtypes) for storing numbers in Solidity:

  • Integers – signed and unsigned integers of various sizes
  • Fixed-point numbers – signed and unsigned fixed-point numbers of various sizes

Currently, fixed-point numbers are not fully supported, so we will discuss only integers. Unsigned integers are known for their overflow and underflow behavior, which might surprise many developers.

Overflow happens when an unsigned integer (uint256) variable holds the maximum integer value (2²⁵⁶-1) and is increased by 1: its value wraps around to 0. This behavior is similar to a car odometer rolling over.

uint256 max = 2**256-1; // max has maximum value which can be stored in unsigned integer
max += 1; // max has 0 value

Underflow works in a similar but opposite way; it occurs when an unsigned integer (uint256) variable has the value of 0 and is decreased by 1. Its value then becomes the maximum possible integer value (2²⁵⁶-1).

uint256 min = 0; // min has 0 value
min -= 1; // min has 2**256-1 value

The following simplified token contract demonstrates how dangerous overflow and underflow can be.

pragma solidity ^0.4.22;

contract Token {

 mapping(address => uint) balances;

 function transfer(address _to, uint _value) public {

   require(balances[msg.sender] - _value >= 0);
   balances[msg.sender] -= _value;
   balances[_to] += _value;
 }
}

The require condition in the transfer function might look correct at first glance, but only until you realize that operations between two uints produce a uint value. It means that the balances[msg.sender] - _value >= 0 condition is always satisfied, because a uint minus a uint produces a uint, and a uint is always greater than or equal to 0. A malicious user can spend more funds than he owns because of the contract’s faulty require condition. Furthermore, a malicious user can take possession of a very large amount of tokens by transferring more tokens than he owns, because his balance will underflow to a substantial integer value. E.g., if a malicious user owns 100 tokens and he tries to transfer 101 tokens, then he will end up with 100 - 101 tokens, which equals the maximum uint value (2²⁵⁶-1) tokens.

Real World Example: The developers responsible for POWH Coin didn’t secure uint operations in the withdrawal logic against overflow/underflow, and an unknown hacker was able to withdraw an arbitrary number of PoWH tokens and drained the contract’s whole balance of 2000 ETH (~2.3M USD at the time).

Lesson learned: Watch out for variables with unsigned integer type and keep in mind the possibility of overflow and underflow.

How to mitigate: It is recommended to use OpenZeppelin’s SafeMath library to avoid overflows and underflows; a sketch follows below.
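A sketch of the token contract using SafeMath (the import path is illustrative and depends on how the OpenZeppelin package is installed in your project):

pragma solidity ^0.4.22;

import "openzeppelin-solidity/contracts/math/SafeMath.sol";

contract SafeToken {
    using SafeMath for uint256;

    mapping(address => uint256) balances;

    function transfer(address _to, uint256 _value) public {
        // sub() throws on underflow, add() throws on overflow
        balances[msg.sender] = balances[msg.sender].sub(_value);
        balances[_to] = balances[_to].add(_value);
    }
}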

Note: Overflow and underflow behavior might become a problem of the past if a future version of Solidity is changed to throw an exception instead of allowing unsigned integers to overflow/underflow. There is an ongoing discussion about this topic in the Ethereum community.

 

“Using external libraries is safe”

Solidity offers the low-level delegatecall function, which allows contract A (the calling contract) to execute a function of contract B (the called contract) in the context of contract A. This function is very convenient when there is a need to run/reuse code of an external library, but it poses a significant security risk because it grants the called contract/library full access to the state of the calling contract.

Let’s use the following example to demonstrate how an attacker can misuse delegatecall in a naive contract to steal its balance or take ownership of it via a dangerous library.

pragma solidity ^0.4.22;

contract DangerousLibrary {
   address public owner;

   function safeAndUsefulFunction1() public {
       // do something safe and useful
   }

   function safeAndUsefulFunction2() public {
       // do something safe and useful
   }

   function dangerousFunction1() public {
       selfdestruct(msg.sender);
   }

   function dangerousFunction2() public {
       owner = msg.sender;
   }
}

contract NaiveContract {
   address public owner;
   address libraryAddress;

   constructor(address _libraryAddress) public payable {
       libraryAddress = _libraryAddress;
       owner = msg.sender;
   }

    function callSafeAndUsefulLibraryFunction(bytes _data) public {
        // naively forwards caller-supplied call data to the library, executing
        // it with full access to this contract's storage and balance
        require(libraryAddress.delegatecall(_data));
    }
}

When an attacker calls NaiveContract’s callSafeAndUsefulLibraryFunction function with call data that contains the signature of one of the dangerous functions of the library, the attacker can destroy the instance of NaiveContract and send its balance to himself, or can take ownership of the naive contract.

An example of call data that would invoke a dangerous function is: bytes4(keccak256("dangerousFunction1()"))

Real World Example: This type of attack was used in the Parity hack, where an unknown attacker stole 150,000 ETH (~30M USD at the time).

Lesson learned: Never use delegatecall with arbitrary data in your contract.

How to mitigate: Avoid using delegatecall if you can, and if you decide to use it, think twice about whether the library you are about to call can be trusted. A sketch of a safer alternative follows below.
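If the reusable code can be compiled in, a Solidity library with internal functions avoids delegatecall with attacker-controlled data altogether; a minimal sketch (library and function names are illustrative):

pragma solidity ^0.4.22;

library MathLib {
    function double(uint _x) internal pure returns (uint) {
        return _x * 2;
    }
}

contract SaferContract {
    function useLibrary(uint _x) public pure returns (uint) {
        // internal library functions are compiled into this contract, so no
        // external, attacker-controlled code runs in its context
        return MathLib.double(_x);
    }
}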

“Sending funds with call function is safe”

Solidity offers 3 functions to send funds: send, transfer and call. The most significant difference between the first two and the third one is that call by default forwards all remaining gas. It means that if the receiver of the funds is a contract, then the receiving payable function can use the forwarded gas to execute additional logic/code.

The following contract shows how the use of call can lead to serious security flaws.

pragma solidity ^0.4.22;

contract Bank {

 mapping(address => uint) balances;

 function deposit() public payable {
   balances[msg.sender] += msg.value;
 }

 function withdraw(uint _amount) public {
   require(balances[msg.sender] >= _amount);
   require(msg.sender.call.value(_amount)());
   balances[msg.sender] -= _amount;
 }

 function balanceOf(address _who) public view returns (uint) {
   return balances[_who];
 }

 function getTotalBalance() public view returns (uint){
     return address(this).balance;
 }
}

Bank contract’s withdraw function has two weaknesses:

  • It uses call to send funds
  • It reduces the message sender’s balance only after it sends the requested amount to the message sender

The combination of the described weaknesses makes the withdraw function vulnerable to a reentrancy attack.

An attacker would be able to drain the Bank contract’s balance via a reentrancy attack with the following malicious contract:

pragma solidity ^0.4.22;

interface Bank {
 function deposit() external payable;
 function withdraw(uint _amount) external;
}

contract MaliciousContract {
   Bank bank;
   
   constructor(address _address_to_attack) public {
       bank = Bank(_address_to_attack);
   }

   function deposit() public payable{
       bank.deposit.value(msg.value)();
   }

   function withdraw(uint _amount) public {
       bank.withdraw(_amount);
   }

   function() payable public {
       if (address(bank).balance >= 1 ether){
           bank.withdraw(1 ether);
       }
   }

   function getBalance() public view returns (uint){
       return address(this).balance;
   }  
}

An attacker would first deploy the MaliciousContract contract with the address of the Bank contract instance as the constructor’s _address_to_attack parameter. Then he would deposit 1 ETH to the Bank contract instance via the deposit function of MaliciousContract, which would increase the MaliciousContract instance’s balance in the Bank contract to 1 ETH. Now MaliciousContract is ready to execute the reentrancy attack.

MaliciousContract would initiate the reentrancy attack by withdrawing 1 ETH from Bank via its withdraw function. When the Bank contract sends the requested 1 ETH to MaliciousContract via the low-level call function in withdraw, MaliciousContract’s receiving (fallback) function is forwarded all remaining gas. It can invoke the Bank contract’s withdraw function again, which sends one additional ETH to MaliciousContract, because MaliciousContract’s balance in the Bank contract hasn’t been decreased yet. This recursive invocation of Bank’s withdraw function from MaliciousContract’s payable fallback function stops once Bank’s balance is empty or the transaction runs out of gas.

Real World Hack: This type of attack was used during the TheDAO hack, when an unknown attacker stole 3.5M ETH (~50M USD at the time). This event led to the Ethereum hard fork which produced two separate coins: Ethereum and Ethereum Classic.

Lesson learned: Sending funds via the call function is dangerous, because it forwards all remaining gas, which allows an attacker to run potentially malicious code in the receiving payable function.

How to mitigate:

  1. Use the send or transfer functions instead of call to send funds, because they do not forward enough gas to execute malicious code.
  2. If you have to use the call function (e.g., the receiving function requires more than 2300 gas to execute its code), then make sure that you follow the Checks-Effects-Interactions pattern. The message sender’s balance should be adjusted before funds are sent in order to make the withdraw function of the Bank contract “reentrancy-safe”:
 function withdraw(uint _amount) public {
   require(balances[msg.sender] >= _amount);    // checks
   balances[msg.sender] -= _amount;             // effects
   require(msg.sender.call.value(_amount)());   // interactions
 }

  3. Another option is to utilize a mutex (exclusive lock) in the withdraw function, which would ensure that withdraw cannot be re-entered until the balance is adjusted and the lock is released:

 // assumes a state variable in the contract: mapping(address => bool) is_withdrawing;
 function withdraw(uint _amount) public {
   require(!is_withdrawing[msg.sender]);
   require(balances[msg.sender] >= _amount);
   is_withdrawing[msg.sender] = true;
   require(msg.sender.call.value(_amount)());
   balances[msg.sender] -= _amount;
   is_withdrawing[msg.sender] = false;
 }

“It is easy to generate random number”

To generate a pseudo-random number, you need a seed. Hiding your seed on the blockchain is not possible, because everything is visible to everyone. It might be tempting to use one of the apparently “hard-to-predict” block variables – like the block hash and the block timestamp – as a source of entropy, but these variables can, to a certain extent, be predicted and influenced by miners. A malicious miner can precalculate the block hash or set the block timestamp to a desired value to exploit a contract function relying on the unpredictability of block variables.
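A sketch of the anti-pattern described above (contract and function names are illustrative):

pragma solidity ^0.4.22;

contract NaiveLottery {
    function badRandom() public view returns (uint) {
        // do NOT use this for anything valuable: the previous block hash and the
        // current timestamp can be read by anyone and influenced by miners
        return uint(keccak256(blockhash(block.number - 1), now)) % 100;
    }
}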

Real World Hack: The creators of SmartBillions used the block.blockhash function to generate lottery numbers, and they lost 400 ETH (~125k USD at the time) due to this mistake.

Lesson learned: It is challenging to generate a random number on the blockchain.

How to mitigate: Do not use block variables (block hash, block timestamp, etc.) to generate random numbers. Here you can read more about why not to use block variables when aiming for randomness.

 

Conclusion

The success of Ethereum and Solidity is primarily determined by users’ confidence in their ability to operate safely and keep funds secure. Every hacked smart contract and every stolen token leaves hard-to-fade scars on Ethereum’s and Solidity’s reputation. In order to deliver smart contracts with the highest security standards, a smart contract developer needs to:

  • Stay up-to-date with the latest developments in the Solidity language and the Ethereum platform
  • Follow best practices, security recommendations and smart contract security patterns from the Solidity documentation and leading smart contract security organizations
  • Employ security audits performed by professional external smart contract auditors

We offer a comprehensive smart contract security audit.

If you are interested in getting your smart contract audited by the team of smart contract experts, please contact us at sales@nethemba.com.