In advance of the Core 3.0 launch, I reported multiple security vulnerabilities which were subsequently patched in the prerelease code. Since they were fixed before Core 3.0 was released, there can be no formal disclosure, but nevertheless these contributions have made Core 3.0 stronger and safer.
High offset values in database queries could have stalled a node: Specially crafted requests to the public API server could cause prolonged high CPU usage, leading to resource starvation. These requests could have been parallelised to prevent a node from staying synchronised with the network.
Rate limiter was ineffective due to HTTP header manipulation: The rate limiter used the X-Forwarded-For header, if it existed, to determine the IP address of each connection to enforce rate limits, rather than the IP address of the connecting socket. This permitted a malicious user to either bypass the rate limiter or deny service to legitimate IP addresses by deliberately exceeding the rate limit using a spoofed IP address.
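As a rough illustration of the fix, the limiter should key on the address reported by the TCP socket itself rather than any client-supplied header. A minimal sketch, assuming a Node.js net.Socket; the limiter shape and threshold are illustrative, not Core's actual implementation:

```typescript
import { Socket } from "net";

const hits = new Map<string, number>();
const LIMIT_PER_WINDOW = 100; // illustrative threshold

function isRateLimited(socket: Socket): boolean {
    // socket.remoteAddress comes from the TCP connection itself, so it
    // cannot be spoofed the way an X-Forwarded-For header can.
    const ip = socket.remoteAddress ?? "unknown";
    const count = (hits.get(ip) ?? 0) + 1;
    hits.set(ip, count);
    return count > LIMIT_PER_WINDOW;
}

// Fixed one-second window, purely for illustration.
setInterval(() => hits.clear(), 1000);
```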
Peer list requests could return more peers than the schema allowed: Peers always returned their entire peer lists when requested via p2p.peer.getPeers, but the reply schema enforced a maximum of 2000 peers in the response. It was possible for a malicious user to set up enough peers to exceed this amount, so node peer lists would contain more than 2000 entries. This meant that new peers would not be able to join the network as they would reject the incoming peer lists, and existing nodes would be unable to receive updated peer lists, eventually disconnecting when receiving an oversized peer list.
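A hedged sketch of one obvious mitigation: clamp the reply to the schema's maximum before it leaves the node, so the response can never violate its own schema (the PeerInfo shape is an assumption for illustration):

```typescript
const MAX_PEERS_IN_RESPONSE = 2000; // the limit enforced by the reply schema

interface PeerInfo {
    ip: string;
    port: number;
}

function buildPeerListResponse(allPeers: PeerInfo[]): PeerInfo[] {
    // Returning a bounded slice keeps the reply schema-valid even if an
    // attacker has inflated the local peer table beyond the limit.
    return allPeers.slice(0, MAX_PEERS_IN_RESPONSE);
}
```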
Too high timeout value could stall a node: Core had a timeout value of 10 minutes when downloading blocks, so it would wait 10 minutes for a response before trying another peer. This meant an attacker could halt a node for 10 minutes on boot or during synchronisation by deliberately not sending a response to a p2p.peer.getBlocks request.
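The general remedy is a much shorter, enforced deadline on the request. A minimal sketch using Promise.race; the 10 second deadline and the peer interface are assumptions for illustration, not the values Core actually chose:

```typescript
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
    return Promise.race([
        promise,
        new Promise<never>((_, reject) =>
            setTimeout(() => reject(new Error("peer timed out")), ms),
        ),
    ]);
}

async function downloadBlocks(peer: { getBlocks(): Promise<unknown> }) {
    try {
        return await withTimeout(peer.getBlocks(), 10_000);
    } catch {
        return null; // signal the caller to try another peer instead
    }
}
```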
Maximum permitted payload size was not reset in the peer connector: Whenever Core issues a request to a peer, it sets the maximum response payload size according to the type of request being made, but that value was not reset once the response to the request had been received. A malicious peer could have therefore bombarded the socket with large payloads once a higher maximum payload size had been set.
Additional properties were allowed when sending transactions: The p2p.peer.postTransactions endpoint permitted additional objects inside the root data object in addition to the array of transactions. This allowed a user to send a complex payload which consumed too many resources to parse, potentially resulting in missed blocks and loss of network synchronisation.
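The standard defence is to declare the schema closed. A hedged sketch using Ajv; the schema shape and the maxItems cap are illustrative, not Core's exact schema:

```typescript
import Ajv from "ajv";

const ajv = new Ajv();

const postTransactionsSchema = {
    type: "object",
    required: ["transactions"],
    additionalProperties: false, // reject anything beyond the declared keys
    properties: {
        transactions: {
            type: "array",
            maxItems: 40, // illustrative cap on batch size
        },
    },
};

const validate = ajv.compile(postTransactionsSchema);

validate({ transactions: [] });           // true
validate({ transactions: [], junk: {} }); // false: extra property rejected
```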
Long form values in signatures could have changed transaction or block IDs: ECDSA signatures permitted signature lengths to be expressed in either a short form or long form notation. Swapping an existing short form length to the long form equivalent would mutate the ID of an existing block or transaction without invalidating the signature, which could have led to transaction replay on non-AIP11 networks or network forking if two valid blocks - one with the short form length and the other with the long form length - were propagated at the same height.
Multipayment values were not validated properly: There was insufficient validation on incoming multipayment transactions to check that each recipient and amount was the correct data type, so an attacker could set their values to a large object which would take a long time to parse. This processing delay would stall a node and prevent it from staying in sync with the network.
Client-side graceful disconnection payloads were not sanitised by the socket server: No sanitisation was performed when a client-side graceful disconnection payload was received, so an attacker could add many extra irrelevant properties to this payload which were passed through unchecked to the transport codec and framework for more parsing and processing. This was time consuming and could cause the socket workers to use up all CPU cycles, rendering a node unable to function, and in the event of an active delegate, it would miss blocks.
Call ID in P2P requests could have been set to any value: Every request that requires a response must include a call ID value which is also included in the response so the client knows which request the response is for. Although Core uses sequentially incrementing numerical values for the call ID, it did not properly prohibit non-numerical values. This allowed a malicious user to send a valid request but with a large object as the call ID which would prevent a node from operating correctly as its socket workers would be too busy processing the large object.
Duplicate websocket payload properties were able to take down a node: An attacker could add a large object as a property to any request, then add a second property with the same name but with a valid value. This still passed validation checks since the latter value replaced the initial value, but the extra processing incurred by the initial value caused the socket workers to become jammed at maximum CPU capacity, rendering a node unable to function.
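One cheap way to blunt this whole class of attack is to bound the raw message before JSON.parse ever runs, since a huge duplicate property cannot hide in a small payload. A minimal sketch; the cap value is illustrative:

```typescript
const MAX_RAW_PAYLOAD_BYTES = 102_400; // illustrative ceiling

function parseMessage(raw: string): unknown {
    // Reject oversized payloads before paying the cost of parsing them.
    if (Buffer.byteLength(raw, "utf8") > MAX_RAW_PAYLOAD_BYTES) {
        throw new Error("payload too large");
    }
    return JSON.parse(raw); // note: duplicate keys still collapse silently
}
```

A stricter fix would use a parser that rejects duplicate keys outright, but the size cap alone removes the CPU amplification.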
Slow query stopped nodes when requesting blocks from specific generators: The /api/delegates/{delegate}/blocks endpoint could, in some circumstances, trigger a slow query when requesting a list of blocks. Requests to this endpoint could be parallelised to overwhelm the database server and prevent the node from responding.
Reviver function in the transport codec could cause denial of service: Every message that is sent or received over the peer to peer protocol is processed by a custom codec. Part of the decoding process in that codec attempts to parse the received message string with a reviver function. If that function failed for any reason, the rate limiter was not triggered, so a malicious user could execute a denial of service attack by continually and uninterruptedly sending payloads that caused the reviver function to fail.
Incoming connections were not banned when failing basic validation checks: Requests to the socket server must adhere to a basic structure containing data and headers properties, which must be objects nested inside a root data object. If the request did not comply with this specification, the connection was terminated but the IP address was not banned. This meant a malicious user could continuously reconnect and keep sending non-conformant requests in a tight loop, using up CPU time.
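A hedged sketch of the stronger behaviour: ban the source address as well as dropping the connection, so reconnecting costs the attacker something. All names and the ban duration are illustrative assumptions:

```typescript
const banned = new Map<string, number>(); // ip -> ban expiry (ms since epoch)
const BAN_MS = 60 * 60 * 1000; // illustrative one-hour ban

function handleMessage(ip: string, payload: any, terminate: () => void): void {
    const expiry = banned.get(ip);
    if (expiry !== undefined && expiry > Date.now()) {
        terminate(); // still banned: refuse without doing any work
        return;
    }
    const root = payload?.data;
    const wellFormed =
        root !== null && typeof root === "object" &&
        root.data !== null && typeof root.data === "object" &&
        root.headers !== null && typeof root.headers === "object";
    if (!wellFormed) {
        banned.set(ip, Date.now() + BAN_MS); // ban, don't just disconnect
        terminate();
    }
}
```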
Exceeding individual but not global rate limit evaded ban: If a request exceeded the rate limit on a specific endpoint, the connection was closed but the offending IP address was not blocked. By continually targeting endpoints with restrictive rate limits in this manner, an attacker would never hit the global rate limit that triggers a ban, so they were never blocked. Consequently they could continue connecting and sending more requests to tie up the socket workers and ultimately stop a node from functioning correctly.
Automatic peer reconnection did not reattach socket event listeners: When a new outbound connection is made by the peer connector, Core adds event listeners to the underlying socket to handle errors, abnormal packets and rate limiting. However, these listeners were only added to the initial socket that was created for the connection. If the connection was lost which triggered an automatic reconnection, a new socket was created without these event listeners.
Schema violation requesting common blocks did not close the connection: The p2p.peer.getCommonBlocks endpoint schema requires an object containing an array of block ID strings. If the array contained items other than strings, an error was returned to the client but the connection remained open. This allowed a user to keep sending a malicious payload to this endpoint without being banned.
Blocks were accepted but not propagated if received out of slot: A block was only broadcasted onwards to other peers if the current slot time was less than or equal to the block timestamp. This meant it was possible to delay the broadcast of a block until the next slot began so receiving nodes would accept it but not broadcast it further. This could cause significant problems for the network with quorum calculations and overheight block headers.
Requesting blocks at a very high height locked up PostgreSQL: The PostgreSQL process could be jammed at 100% CPU usage across multiple cores by requesting one or more blocks from a node starting at an exceptionally high height. This request took longer to complete than the rate limit of once every 2 seconds for the relevant endpoint, so a bad actor could keep sending these requests constantly to keep PostgreSQL permanently locked up.
Binary data payloads could stop forging: WebSocket transmissions can be either text or binary, and although Core only uses text messages, there was no filter to block binary payloads in the peer-to-peer layer. It was therefore possible to send binary payloads which would incur large overheads when automatically unmarshalling the binary buffer which would tie up the socket workers to prevent a delegate node from obtaining the correct network state, rendering it unable to forge.
Large payloads sent to internal endpoints prevented forging: There was no schema validation for incoming data received on most of the internal peer-to-peer endpoints, so a malicious user could send a large JSON payload to any of these endpoints which was time consuming to parse. Although the connection would be terminated because the user was not authorised to access the endpoint, the associated IP address would not be banned, so a bad actor could keep reconnecting and sending this payload in a loop which would jam the socket server worker processes above 100% CPU usage.
Outgoing connections were not destroyed after receiving unsupported WebSocket frames: An outgoing socket connection was directly terminated after receiving an unsupported WebSocket frame without notifying the underlying client framework that the connection was deliberately closed. This meant the framework would automatically attempt to reconnect to the peer again, so a malicious user could continue sending more unsupported WebSocket frames in a denial of service attack.
Peer lists could exceed the maximum permitted payload size: P2P peer lists contained data about the plugins each peer was running. This meant a malicious user was able to construct a swarm of peers all configured with multiple plugins so the cumulative size of peer lists would grow to exceed the maximum permitted size of 102400 bytes. This would prevent any node from receiving an updated peer list, stopping new nodes from joining the network and would disconnect existing ones when receiving an oversized peer list.
Outgoing sockets were not properly rate limited: Although incoming sockets were properly rate limited, outgoing sockets were not, as the only rate limit applied to outgoing sockets was for internal ping messages. Once Core made an outgoing connection to a malicious peer, that peer could continually send packets of data to overwhelm the node, as long as the data was anything other than an internal ping message.
Newly connected peers did not have an initial maximum payload limit: Core dynamically adjusts the maximum payload size of a peer's response depending on the type of request being made to the peer, however it did not set a default maximum payload size when initially establishing a connection to a peer. This allowed an attacker to send a very large payload as soon as the connection was opened prior to any request being made, which would crash Core by causing an out of memory condition.
Insufficient transaction asset validation: There was a flaw in the schema validation of incoming transactions which meant an attacker could add additional assets to a multipayment transaction while still passing validation checks. Adding too many of these additional assets would cause delays in parsing the transaction, leading to backlogged requests, IPC timeouts and in the case of delegate nodes, an inability to forge blocks.
HTTP header manipulation caused out of memory crashes: It was possible to coerce ARK Core to download any arbitrary file from a remote server by transmitting a HTTP 303 header response to the peer communicator. If the downloaded file was gzip compressed, it was automatically decompressed and stored in memory, potentially expanding to many times its original size. A specially crafted compressed file could grow too large, triggering an out of memory error which would crash the node.
Prepending zeros in the hex representation of a signature would change its ID: For the purpose of cryptographic verification, R and S components of ECDSA signatures are integers, but the hashing process used by Core to calculate the ID of a transaction or block uses their byte sequence instead. Core did not check for the presence of extra zeros at the start of either the R or S components of a signature, so prepending extra zeros to either component would change the ID of any block or transaction since this would modify the byte sequence, while remaining cryptographically verified as the extra zeros were ignored during verification since that process uses the integer values instead. This meant transactions could have been replayed on non-AIP11 networks since they would have had new IDs, and it could have also led to forking of the network if multiple blocks were propagated at the same height with different block IDs.
Negative values were erroneously accepted in ECDSA signatures: Core did not check for negative values in the R or S components of ECDSA signatures, which are not allowed in the specification. This meant that it was possible to maliciously modify an existing valid signature to include negative values for either R or S and this signature would still erroneously verify as true, since the values were internally normalised as positive integers. However, by doing so, the block or transaction would have a different ID, potentially leading to transaction replay on non-AIP11 networks or network forks due to conflicting blocks being propagated at the same height.
DER signature manipulation could fork the network, roll back and replay transactions: Blocks and transactions signed with DER encoded signatures could be manipulated by appending extra data to the end of the signature outside of the R and S values. This meant the transactions and blocks were still cryptographically valid but would have a different ID, allowing for transaction replay on non-AIP11 networks and to persistently fork the network by propagating different blocks at the same height.
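This and the two preceding weaknesses share one root cause: more than one byte encoding could verify for the same signature. A hedged sketch of a strict DER check that enforces a single canonical encoding (rejecting trailing bytes, long-form lengths, negative integers, and zero-padded R/S values); it is written from the DER rules rather than from Core's actual validator:

```typescript
function isStrictDer(sig: Buffer): boolean {
    // SEQUENCE tag with a short-form length that covers the whole buffer,
    // so nothing can be appended after the S value.
    if (sig.length < 8 || sig[0] !== 0x30) return false;
    if (sig[1] !== sig.length - 2) return false;

    let offset = 2;
    for (let i = 0; i < 2; i++) { // R first, then S
        if (sig[offset] !== 0x02) return false; // INTEGER tag
        const len = sig[offset + 1];
        if (len === 0 || offset + 2 + len > sig.length) return false;
        const first = sig[offset + 2];
        if (first & 0x80) return false; // high bit set => negative value
        if (len > 1 && first === 0x00 && !(sig[offset + 3] & 0x80)) {
            return false; // leading zero that isn't needed => padded encoding
        }
        offset += 2 + len;
    }
    return offset === sig.length; // every byte accounted for
}
```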
Pool poisoning could stop delegates forging any transactions: There was a low limit on the number of transactions returned from the transaction pool for inclusion in a new block prior to filtering them. A bad actor could repeatedly spam the pool with hundreds or thousands of invalid transactions with a very high fee - which, as they were invalid, would never be forged, so the actual cost was zero - but the high fee meant they would take precedence over all genuine transactions in the pool, so after filtering, the block would be empty.
Port ping payload sizes were unchecked and could cause bandwidth flood attacks: It was possible to craft a malicious HTTP GET response to the peer communicator to redirect traffic to a third party server. This could have been used, among other possibilities, to download very large multi-gigabyte files to consume all the bandwidth of a node.
Slow PostgreSQL query attack could have caused delegates to miss blocks: Filling enough blocks with transactions to make a large cumulative payload and then repeatedly calling p2p.peer.getBlocks to download that set of blocks from multiple IP addresses at the same time every second could make any forging node miss blocks. This was because of a slow query which took too long to complete in those circumstances, which caused significantly elevated CPU usage on the main process. This could quickly amplify as new requests could be made before all the previous ones were completed, continually stacking more requests on top of the existing ones until the database was overwhelmed.
Consecutive big blocks could exceed the maximum payload limit: By repeatedly sending large transactions to fill consecutive blocks, it was possible to exceed the maximum permitted client-side payload limit for any WebSocket connection. Downloading a batch of blocks that exceeded this payload size would trigger peer bans, and no new relays could ever join and sync with the network as they would be unable to download that batch of blocks.
ECDSA-signed block and transaction signatures were malleable: Transactions and blocks signed using ECDSA could have their signatures recalculated to generate new IDs for the transactions or blocks, while still remaining cryptographically verified. This could have been used to replay existing transactions on networks where AIP11 was not active, and in all cases, could have been used to repeatedly cause forks by recalculating the signatures of incoming blocks and broadcasting the recalculated blocks onwards, since this would lead to multiple different but valid blocks at the same height on the network.
Delayed completion of peer verification stopped nodes forging: It was possible to stop delegates from forging by delaying the peer verification process when a node was due to forge. As a forging node enters its slot, it re-verifies its peers by sending a p2p.peer.getStatus request to all of its peers. If a peer responds with a higher height than its own height, the node will query that peer further via p2p.peer.getBlocks to fetch the block(s) at the higher height to verify they are not forked. A malicious peer could respond with a deliberately higher height and then delay the ensuing response to p2p.peer.getBlocks until the forging slot was over, so the peer verification would not complete and the delegate would miss its slot.
Block ID-based exceptions were vulnerable to preimage attacks and blockchain poisoning: Block IDs can be added as exceptions which are accepted even if they fail verification but there was no check to ensure the correct transactions were inside the excepted blocks. This meant that transactions could be mutated or changed entirely inside of excepted blocks and the node would still accept them since it no longer mattered that the block would not pass verification checks.
Block schema violations could halt the blockchain: It was possible to stop a node from receiving new blocks by sending a single block to it via p2p.peer.postBlock which contained a chained block header with a numberOfTransactions value greater than 0 but where no transactions were included in the block. This caused an error to be thrown which stopped the processing queue indefinitely.
Induced slow block propagation forked the network: Specially crafted payloads could be constructed to target either the public API or the peer-to-peer layer which would cause a delegate to delay sending its newly forged block until the next forging slot. This could cause a whole network fork since it would result in two conflicting blocks being propagated at the same height.
Marshalled block payloads using the peer-to-peer transport codec were not sanitised: Core 2.6 introduced a new p2p transport codec for blocks, but any payload using this codec was not sanitised properly. The end result was that additional objects could be added to the payload data if the new transport codec was used, which could have been exploited to stop delegates forging by overwhelming their servers with too many concurrent objects.
Tree memory structure exceeded maximum call stack size when fetching unconfirmed transactions to forge: By continually filling the transaction pool with valid transactions with an increasing fee of one arktoshi each time, the call to getUnconfirmedTransactions when forging would error out as the maximum call stack size would quickly be exceeded. At that point the network would stall as nobody would be able to forge any new blocks since the call to request the transactions to forge would always fail.
Nonce comparison took too long to complete when fetching unconfirmed transactions to forge: By continually filling the transaction pool with valid transactions, it would grow at a rate faster than the transactions in the pool could be consumed by forging blocks on the network. When it reached a certain threshold (the amount varies depending on node specification, but always before reaching the upper limit even on the most powerful nodes), the call to getUnconfirmedTransactions would time out due to the nonce comparison in the transaction pool.
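A hedged sketch of how such a comparison can be made cheap per sender: index pool transactions by sender and keep each sender's list sorted by nonce, so finding the next forgeable transaction is a Map lookup rather than a scan across the whole pool. The shapes here are assumptions for illustration, not Core's data structures:

```typescript
interface PoolTx {
    senderPublicKey: string;
    nonce: bigint;
}

// Maintained sorted by ascending nonce on insert.
const bySender = new Map<string, PoolTx[]>();

function nextForgeable(sender: string, confirmedNonce: bigint): PoolTx | undefined {
    const list = bySender.get(sender);
    if (!list || list.length === 0) return undefined;
    // The head of the sorted list is the only possible candidate, so the
    // check is O(1) instead of comparing against every pooled transaction.
    return list[0].nonce === confirmedNonce + 1n ? list[0] : undefined;
}
```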
Overloading the public API could stop the transaction and block processing on a node: The public API could have been overloaded to stop a node from processing any incoming transactions or blocks by opening simultaneous requests to the /api/blocks or /api/transactions endpoints.
Long-lived HTTP requests via the P2P layer could crash the node: It was possible to bombard a peer with long-lived HTTP requests by opening a connection to the correct /socketcluster/ path, but never upgrading the connection to a WebSocket. This connection could have been kept alive by sending an HTTP header every second (e.g. Host: X.X.X.X over and over again in a 1 second loop) and never sending the Upgrade: websocket or Connection: upgrade headers. The consequence of this was the ability to overload the server with open TCP sockets that are never established as WebSockets (so rate limit protections never kick in), once again leading to a situation where an attacker could spawn thousands of connections to use up all the node's file descriptors to crash the underlying operating system.
Pool wallet manager could lock up funds by not updating multipayment balances: The transaction pool wallet manager balance did not always increase when receiving a multipayment transaction. This was because transactionHandler.applyToRecipient in acceptChainedBlock was only called if there was a matching recipientId already in the pool wallet manager. Multipayments could have been manipulated to lock the amount of the multipayment to prevent the recipient from accessing the funds, as their pool wallet manager balance did not increase if the recipient's address was present in the pool wallet manager but the sender's address was not. This prevented the recipient from sending a transaction for an amount greater than the sum of non-multipayment transactions in their account.
Plain HTTP connections to the p2p port could crash the node's operating system: Core did not block connections to invalid HTTP paths accessed via the p2p layer's underlying HTTP server, so a user could send thousands of plain HTTP connections to invalid paths and these connections would remain open. It was possible to crash a server by using a cluster of attack nodes to establish hundreds of thousands of connections to use all available file descriptors.
A malicious block containing thousands of transactions could take down a node: A malicious user could keep sending bad blocks containing thousands of transactions inside them. As all the transactions inside the blocks were verified and validated, this maxed out the CPU usage and prevented nodes from operating correctly.
Opening thousands of sockets caused high CPU/memory usage and full server crashes: It was possible for an attacker to open thousands of connections to a node because there was no filtering to prevent multiple connections per originating IP address. Each active connection used a file descriptor in the operating system, and the number of available file descriptors is limited. An attacker could open enough simultaneous connections to use all the available open file descriptors on a node, which would crash it completely, since the operating system was no longer able to open any files.
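A minimal sketch of per-IP connection capping at the TCP layer; the cap is illustrative and the bookkeeping deliberately simplified:

```typescript
import { Server, Socket } from "net";

const MAX_CONNECTIONS_PER_IP = 10; // illustrative cap
const open = new Map<string, number>();

function guardConnections(server: Server): void {
    server.on("connection", (socket: Socket) => {
        const ip = socket.remoteAddress ?? "unknown";
        const count = (open.get(ip) ?? 0) + 1;
        if (count > MAX_CONNECTIONS_PER_IP) {
            socket.destroy(); // refuse before any further work is done
            return;
        }
        open.set(ip, count);
        socket.once("close", () => open.set(ip, (open.get(ip) ?? 1) - 1));
    });
}
```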
Broadcasting invalid WebSocket opcodes caused significant network degradation and missed blocks: Sending malformed WebSocket packets with reserved or unimplemented opcodes would trigger the socket's onerror event handler, but Core did not listen for this event and the connection was not blocked. The process of repeatedly throwing the error was sufficiently computationally expensive that it was possible to take down a node and stop it forging by sending a constant stream of malformed packets.
Unhandled unemitted events could trigger high CPU spikes and propagation delays: Core only incremented the rate limiter when the SocketCluster emit event was fired, but there were some circumstances where specially crafted payloads would not trigger this event. This meant that anyone could flood a node with such messages which did not increment the rate limiter.
JSON payloads with too many key-value pairs were too CPU intensive to parse: There was a denial of service vulnerability inside the P2P layer which could be triggered by sending valid JSON strings to valid endpoints, but where the JSON string contained too many key-value pairs. This was too time consuming to parse, so a node was unable to process other requests, leading to missed blocks and an inability to forge.
Multiple disconnect JSON packets caused high CPU utilisation: A user could maliciously craft a disconnect JSON packet and constantly send it in a loop to overwhelm a node. As it was a perfectly valid packet, it did not get caught by any of the sanitisation checks that were in place, which had the effect of stalling nodes so they could not handle genuine traffic.
Sending HyBi WebSocket headers with no data could stop nodes forging: It was possible to stop a node forging, or cause it to miss its slot, by continually sending payloads containing only a HyBi WebSocket header that signals a data frame but actually contains no data. The effect was that the worker processes hit 100% CPU usage and could not complete their tasks fast enough to keep up with the network.
Ping control frame bombardment could prevent block propagation: Every time a node's WebSocket received a ping control frame, it would reply with a pong control frame. If a node was spammed with ping control frames, the node workers would reach 100% CPU usage by continually sending pong control frames back to the client, so they would struggle to cope with genuine requests. This meant that delegates were too resource-starved to forge in time, leading to either missed blocks or delayed blocks on the network.
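A hedged sketch of throttling ping control frames per connection before any pong is sent; the threshold is an assumption (roughly one ping per second is ample for honest keep-alive traffic):

```typescript
const MAX_PINGS_PER_MINUTE = 60; // illustrative threshold

interface ConnState {
    pings: number;
    windowStart: number;
}

// Returns true if the caller may reply with a pong; otherwise the
// connection is dropped and no further CPU is spent on it.
function onPing(state: ConnState, terminate: () => void): boolean {
    const now = Date.now();
    if (now - state.windowStart > 60_000) {
        state.windowStart = now;
        state.pings = 0;
    }
    if (++state.pings > MAX_PINGS_PER_MINUTE) {
        terminate();
        return false;
    }
    return true;
}
```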
Externally hitting internal P2P endpoints could stop a node handling requests: Accessing an internal endpoint from an external unauthorised connection would send an Unauthorised error message to the client, but the socket would still be connected. As the connection remained open, in certain circumstances it was possible to trigger a node to continually send Unauthorised messages to block its workers so they could no longer process legitimate requests.
Rate limiting was ineffective due to inappropriate disconnection methods: Connections that exceeded the rate limit were gracefully disconnected by sending a Forbidden error message to the client, along with a close WebSocket frame. If a node was spammed with requests exceeding the limit, the node workers would reach 100% CPU usage by continually sending these messages and frames back to the client, so they would struggle to cope with genuine requests. This meant that delegates were too resource-starved to forge in time, leading to either missed blocks or delayed blocks on the network.
Malformed messages on the P2P layer could hang up a node and stop delegates forging: It was possible to take down a node and either stop it forging or delay its block propagation to fork the network by exploiting a weakness in SocketCluster by sending raw payloads to the node that did not conform to the SocketCluster JSON format.
P2P endpoint request events were not sanitised: Spamming nodes with multiple simultaneous requests to invalid websocket endpoints on the peer-to-peer layer with large payload sizes created a memory leak. This killed the socket workers and prevented delegates from forging.
Core plugin names were not length restricted so could cause DoS in peer lists: There was no limit to the length of plugin names returned in peer lists. This could have been used to conduct a bandwidth flood attack by setting up a malicious node to send massive strings as plugin names resulting in overlength peer list responses that could not be processed in time due to their excessive size.
Peer lists could become too large and be manipulated to become a DDoS network: There was no limit to the number of peers that could be added to peer lists. This could lead to extremely large peer lists being shared across peers which could not be processed in sufficient time, leaving delegates unable to forge blocks as their nodes would be too busy processing the large number of peers. Furthermore, if IP addresses of non-Core peers were maliciously added in bulk, this could lead to all legitimate peers being used as an unintentional DDoS network.
Peer-to-peer postTransactions endpoint could be spammed to overwhelm nodes: The postTransactions peer-to-peer websocket endpoint could be spammed with bundles of invalid or expired transactions. The cryptographic process of verifying all those transactions would overwhelm a node to prevent it from being able to keep up with the network. It would stop receiving blocks and delegates would be unable to forge. Since the spam transactions were invalid or expired, they would never be forged.
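Since verifying a signature is orders of magnitude more expensive than structural checks, a common mitigation is to pre-filter cheaply and verify only what survives. A minimal sketch under assumed transaction shapes (the fields here are illustrative):

```typescript
interface IncomingTx {
    signature?: string;
    expiration?: number; // block height after which the tx is dead
}

function cheapPrefilter(txs: IncomingTx[], currentHeight: number): IncomingTx[] {
    return txs.filter((tx) => {
        if (!tx.signature) return false; // unsigned: drop without crypto work
        if (tx.expiration !== undefined && tx.expiration <= currentHeight) {
            return false; // already expired: would never be forged anyway
        }
        return true; // only these proceed to full signature verification
    });
}
```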
Delegates could be forced to forge empty blocks and genuine transactions could be evicted from the pool: Unforgeable transactions could fill the transaction pool to prevent genuine transactions being forged as the forging process would retrieve transactions up to the maximum transactions per block size, filter away the invalid ones and return the result. This could possibly return zero valid transactions after filtering if the transaction pool contained more than the maximum number of transactions per block of unforgeable transactions with a higher fee than the genuine transactions. Additionally, sending a high volume of these invalid transactions could evict genuine transactions from the transaction pool at zero cost to an attacker.
Unverified transactions in bad blocks could purge genuine transactions from the pool: When a block was rejected for failing verification, all pooled transactions from the sender(s) of the transactions in that block were purged from the transaction pool, even though the block's transactions had never passed verification themselves. This meant anybody could create a bad unverified block containing fake transactions from a wallet to delete that wallet's genuine transactions from the transaction pool.
Race condition resulted in blocks containing already forged transactions: If a node received a high volume of transactions entering the transaction pool in a short time period, it would not filter out all the recently forged transactions from incoming blocks. This meant that when a delegate tried to forge, it may have included already forged transactions in its block, which would then be rejected by the network, causing the affected delegates to miss blocks until their pools were cleaned.
Block header manipulation in quorum calculations prevented nodes forging: Forging nodes could be tricked into thinking they were double forging, even when they were not. This activated the automatic protection which stopped the nodes from forging. As the delegates were not actually double forging in the first place, they did not produce any blocks. If all delegates were targeted, the chain completely stopped.
A forged second signature registration did not invalidate existing transactions from the same sender in the transaction pool: It was possible to stall the blockchain by manipulating second signature registrations. It involved broadcasting a transaction from a wallet without a second signature, then registering a second signature for that wallet which was forged prior to the initial transaction being forged.
Transactions signed with a second signature prior to the second signature registration being forged could halt the blockchain: A node that received a second signature registration, immediately followed by a transaction signed with that second signature before the registration was forged in a block, would be unable to forge a new block: the second transaction was accepted by the node and added to its transaction pool, but would not validate in a block since the second signature registration was not yet forged. As transactions are broadcasted to all nodes, this would stop the chain until the pools were cleared as nobody would be able to forge a block.
Receiving a block containing invalid transactions caused peers to roll back: The accept block handler triggered a rollback whenever a received block contained a transaction that could not be applied. This rollback was unnecessary, since only the current block needed to be rejected when its own transactions could not be applied.
Delayed block propagation causes the next delegate to miss its block: If a delegate forged a block but was late broadcasting it, the next delegate in line would forge at the wrong height as it would not have received the previous block. By the time it attempted to broadcast the block, the previous delegate's delayed block would have been received by most (or all) of the network, meaning the newly forged block at the same height would be rejected and thus the delegate would miss its slot.
Public API endpoint open to possible DDoS attack: The /api/v2/delegates/{delegate}/voters/balances endpoint did not paginate its results. This was a vector for DDoS as anyone could request the vote balances of every voter of a delegate in one API call. For delegates with a large number of voters (>5000) this could overload the server even before the HTTP rate limiting kicked in.
Transactions near the HTTP POST payload size limit could stop delegates forging and halt the chain: The maximum HTTP POST payload is 1048576 bytes, but there was no logic to ensure that blocks only contained transactions that would fit in a block below that size limit. It was possible for any network user to send deliberately oversized transactions that would approach (but not exceed) this limit. Although nodes would accept these larger transactions as valid, when it was time to forge them into a block, the additional block headers would make the block size larger than 1048576 bytes. This meant all nodes would reject the block with HTTP error 413 Request Entity Too Large, so the forging node would miss its block and the oversized transactions would remain in the transaction pool, meaning the node would be unable to forge until the oversized transactions expired (the default time being 6 hours).
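A hedged sketch of the kind of guard that prevents this: when selecting pool transactions for a block, track the cumulative serialised size and leave headroom for the block headers, so the finished block can never exceed the transport limit. The headroom figure is an assumption:

```typescript
const MAX_PAYLOAD_BYTES = 1_048_576; // the HTTP POST ceiling from the text
const HEADER_HEADROOM_BYTES = 4_096; // illustrative allowance for headers

interface PoolTx {
    serialized: Buffer;
}

function selectForBlock(pool: PoolTx[], maxCount: number): PoolTx[] {
    const picked: PoolTx[] = [];
    let bytes = 0;
    for (const tx of pool) {
        if (picked.length >= maxCount) break;
        if (bytes + tx.serialized.length > MAX_PAYLOAD_BYTES - HEADER_HEADROOM_BYTES) {
            continue; // would push the finished block over the limit
        }
        bytes += tx.serialized.length;
        picked.push(tx);
    }
    return picked;
}
```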
Conflicting delegate registration transactions not detected by the transaction guard: A user could try registering the same delegate name using multiple wallets in a single block. It would result in a stalled network because no delegate would be able to forge a valid block as they would all try to forge blocks containing the conflicting transactions. If exploited maliciously, any bad actor could have stalled the network permanently by repeatedly sending conflicting transactions to ensure the transaction pools were never clean so the network could not recover.
Malicious delegate 0-ARK transaction spam: The transaction schema to prevent zero-amount transfer transactions was never enforced by consensus, so zero-amount transfer transactions could be sent by a malicious delegate to spam the blockchain.
Malicious delegate could cause peers to fork and roll back simultaneously: A malicious delegate could craft a block at the correct height but with a timestamp from a previous round that collided with a valid forging slot time for that delegate in the current round. As a result, the block would be initially accepted, but, because it was unchained due to the incorrect timestamp, it would cause the receiving node to immediately enter fork recovery mode and roll back the chain.
Fake peers could be added by using non-quad-dotted notation: Peers represented in non-quad-dotted notation could be added to peer lists. This provided a denial of service vector, as millions of IPv4 loopback addresses could be added to the peer list, all of which would resolve to the local node and overwhelm the server.
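A minimal sketch of strict address validation that only admits canonical quad-dotted IPv4 strings, so shorthand aliases that resolve to the same host are rejected at the door:

```typescript
function isCanonicalIPv4(ip: string): boolean {
    const parts = ip.split(".");
    if (parts.length !== 4) return false;
    return parts.every((part) => {
        if (!/^\d{1,3}$/.test(part)) return false;            // digits only
        if (part.length > 1 && part[0] === "0") return false; // no zero padding
        return Number(part) <= 255;
    });
}

isCanonicalIPv4("127.0.0.1");  // true
isCanonicalIPv4("127.1");      // false: shorthand loopback alias
isCanonicalIPv4("0x7f.0.0.1"); // false: hex octet
```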
Forged blocks by anyone could cause the chain to stop or start recovering: Anyone could broadcast signed blocks, and when a node received a forged block from a wrong generator, the chain would fork. This also applied to inactive (unknown) generators. If a malicious actor kept broadcasting such blocks, the chain would effectively cease operating.
Forging multiple blocks in a slot and rewards hijacking: Any active delegate could forge multiple blocks within their allocated 8 second slot time, as long as the block IDs were different and were all sent to the same node with an incrementing block height, as each block was considered to be valid and accepted on the chain. This had the effect of generating block rewards for each of the multiple blocks that were forged in the slot, resulting in inflated rewards per round for any delegate that carried out this exploit.
Double forging a block: A malicious forger could forge multiple distinct blocks and broadcast them to different peers causing instability in the network.
IP spoofing: The whitelist could be bypassed by IP spoofing due to the way Core determined the IP of a request. This could also be used to fill up the peer list with loopback IP addresses to cause a DoS attack and prevent block propagation.
Generating new ARK using multi signature transaction: In a multi-signature transaction, the transaction handler only verified the signatures and did not properly conduct balance checks. This made it possible to generate new ARK tokens on the network utilising a multi-signature transaction.
Invalid block received: The lastDownloadedBlock variable was not reset when discarding invalid blocks. This caused network nodes to continually attempt to download new blocks from the wrong height, effectively halting the network. This issue would have allowed a malicious user to disrupt network nodes and the network itself.