Mavryk_shell.Block_validator
This module is the main entry point to validate blocks and protocols.
type new_block = {
  block : Mavryk_store.Store.Block.t;
      (** The block itself. *)
  resulting_context_hash : Mavryk_base.TzPervasives.Context_hash.t;
      (** The context hash resulting from the block's application. It may be
          the same as the one contained in the block's header, depending on
          the protocol's expected semantics. *)
}

Type of a validated block.
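For illustration, here is a minimal sketch of a notify_new_block callback (see precheck_and_apply below) consuming this record from a client module. The function name is hypothetical, and it assumes Mavryk_store.Store.Block.hash returns the block's hash:

(* Hypothetical callback suitable for [?notify_new_block]: it only
   pretty-prints the block hash and the resulting context hash. *)
let on_new_block (nb : Block_validator.new_block) : unit =
  Format.printf
    "validated block %a (resulting context %a)@."
    Mavryk_base.TzPervasives.Block_hash.pp
    (Mavryk_store.Store.Block.hash nb.block)
    Mavryk_base.TzPervasives.Context_hash.pp
    nb.resulting_context_hash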
val create :
Mavryk_shell_services.Shell_limits.block_validator_limits ->
Distributed_db.t ->
Block_validator_process.t ->
start_testchain:bool ->
t Lwt.t
create limits ddb bvp ~start_testchain creates a Block_validator.

limits contains various timeouts. ddb is used to commit a block to the storage and to get the state of the chain for which the block is submitted to validation. bvp is an instance of Block_validator_process, a proxy between the shell and the validation part related to the economic protocol (see Block_validator_process). If start_testchain is set to true, running the test chain is allowed.

This function is not supposed to fail. It is implemented this way because of the interface implemented by the Worker module.
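As an illustration, a sketch of a bracket-style helper that creates a validator and guarantees that shutdown (declared further below) runs afterwards; limits, ddb, bvp and the body f are assumed to be provided by the caller:

let with_validator limits ddb bvp f =
  let open Lwt.Syntax in
  (* [create] is not expected to fail, so the promise resolves directly
     to the validator rather than to a tzresult. *)
  let* validator =
    Block_validator.create limits ddb bvp ~start_testchain:false
  in
  Lwt.finalize
    (fun () -> f validator)
    (fun () -> Block_validator.shutdown validator)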
type block_validity =
  | Valid
  | Unapplicable_after_precheck of
      Mavryk_base.TzPervasives.error Mavryk_base.TzPervasives.trace
  | Invalid of Mavryk_base.TzPervasives.error Mavryk_base.TzPervasives.trace
val precheck_and_apply :
t ->
?canceler:Lwt_canceler.t ->
?peer:Mavryk_base.P2p_peer.Id.t ->
?notify_new_block:(new_block -> unit) ->
?precheck_and_notify:bool ->
Distributed_db.chain_db ->
Mavryk_base.TzPervasives.Block_hash.t ->
Mavryk_base.Block_header.t ->
Mavryk_base.Operation.t list list ->
block_validity Lwt.t
precheck_and_apply ?precheck_and_notify validator ddb hash header ops validates a block with header header and operations ops, of hash hash. It is a no-op in the following cases:

- the block is below the chain's savepoint.

Otherwise, it calls the Block_validator_process associated to the current validator.

canceler is triggered when the validation of a block fails. peer is the peer which sent the block.

If the validation succeeds, it proceeds as follows:

1. The ddb commits the block to the storage.

2. If the next block requires a protocol switch, it tries to fetch and precompile the next protocol.

3. notify_new_block is called with the committed block.

An error is raised if the validation failed or if the block was already known as invalid. However, if the first validation attempt failed because the protocol was missing, it tries to fetch and download the protocol before attempting to validate the block a second time.
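A sketch of how a caller might submit a freshly received block, assuming validator, chain_db, peer, header and operations are at hand; on_new_block is the hypothetical callback sketched earlier, and the hash is recomputed from the header:

let validate_received_block validator chain_db peer header operations =
  let open Lwt.Syntax in
  let hash = Mavryk_base.Block_header.hash header in
  let* validity =
    Block_validator.precheck_and_apply
      validator
      ~peer
      ~notify_new_block:on_new_block
      chain_db
      hash
      header
      operations
  in
  match validity with
  | Block_validator.Valid -> Lwt.return_ok ()
  | Block_validator.Unapplicable_after_precheck trace
  | Block_validator.Invalid trace ->
      Lwt.return_error trace

Here the two failure constructors are collapsed into one error; a caller may instead want to treat a block that passed precheck differently from an outright invalid one.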
val preapply :
t ->
?canceler:Lwt_canceler.t ->
Mavryk_store.Store.chain_store ->
predecessor:Mavryk_store.Store.Block.t ->
timestamp:Mavryk_base.Time.Protocol.t ->
protocol_data:bytes ->
Mavryk_base.Operation.t list list ->
(Mavryk_base.Block_header.shell_header
* Mavryk_base.TzPervasives.error Mavryk_shell_services.Preapply_result.t
Mavryk_base.TzPervasives.trace)
Mavryk_base.TzPervasives.tzresult
Lwt.t
preapply validator ?canceler chain_store ~predecessor ~timestamp ~protocol_data operations creates a new block and returns it. It may call the Block_validator_process associated to the current validator. If the preapply succeeds, the resulting application is cached, so that the block is not re-applied when the next call to block validation, through precheck_and_apply, targets the same block.

An error is raised if the preapply failed. However, if the first preapply attempt failed because the protocol was missing, it tries to fetch and download the protocol before attempting to preapply the block a second time.
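A sketch of pre-applying a candidate block on top of predecessor, assuming chain_store, protocol_data and operations are available; the timestamp is taken from the system clock (Mavryk_base.Time.System.now and to_protocol are assumed to behave as in the upstream Tezos libraries):

let preapply_candidate validator chain_store predecessor protocol_data
    operations =
  let open Lwt_result.Syntax in
  let timestamp =
    Mavryk_base.Time.System.to_protocol (Mavryk_base.Time.System.now ())
  in
  let* shell_header, _preapply_result =
    Block_validator.preapply
      validator
      chain_store
      ~predecessor
      ~timestamp
      ~protocol_data
      operations
  in
  Lwt_result.return shell_header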
val fetch_and_compile_protocol :
t ->
?peer:Mavryk_base.P2p_peer.Id.t ->
?timeout:Mavryk_base.Time.System.Span.t ->
Mavryk_base.TzPervasives.Protocol_hash.t ->
Mavryk_protocol_updater.Registered_protocol.t
Mavryk_base.TzPervasives.tzresult
Lwt.t
val context_garbage_collection :
t ->
Mavryk_context_ops.Context_ops.index ->
Mavryk_base.TzPervasives.Context_hash.t ->
gc_lockfile_path:string ->
unit Mavryk_base.TzPervasives.tzresult Lwt.t
context_garbage_collection bv index context_hash ~gc_lockfile_path moves the contexts below the given context_hash from the upper layer to the lower layer. For full and rolling nodes, this amounts to a garbage collection. When a garbage collection occurs in another process, a lock located at gc_lockfile_path is taken to ensure that GC calls are synchronized.
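A sketch of requesting a context GC up to a known context hash; bv, index and context_hash are assumed to be available, and the lockfile path is purely illustrative:

let run_context_gc bv index context_hash =
  Block_validator.context_garbage_collection
    bv
    index
    context_hash
    ~gc_lockfile_path:"/var/run/mavryk/gc_lockfile" (* hypothetical path *)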
val context_split :
t ->
Mavryk_context_ops.Context_ops.index ->
unit Mavryk_base.TzPervasives.tzresult Lwt.t
context_split bv index finishes the current chunk and starts a new one in the context storage layout. It is meant to be called at the dawn of each cycle, to improve the disk footprint when running a garbage collection.
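For illustration, a sketch pairing context_split with a later garbage collection at a cycle boundary; whether the node actually schedules the two together is an assumption here, and savepoint_context_hash and run_context_gc (sketched above) are assumed:

let on_cycle_dawn bv index savepoint_context_hash =
  let open Lwt_result.Syntax in
  (* Assumption: closing the current chunk first lets a subsequent GC
     drop whole chunks below the savepoint context. *)
  let* () = Block_validator.context_split bv index in
  run_context_gc bv index savepoint_context_hash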
val shutdown : t -> unit Lwt.t
val running_worker : unit -> t
val status : t -> Mavryk_base.Worker_types.worker_status
val pending_requests :
t ->
(Mavryk_base.Time.System.t
* Mavryk_shell_services.Block_validator_worker_state.Request.view)
list
val current_request :
t ->
(Mavryk_base.Time.System.t
* Mavryk_base.Time.System.t
* Mavryk_shell_services.Block_validator_worker_state.Request.view)
option