Compare commits

...

423 commits

Author SHA1 Message Date
Ralph
64f6fcecb8
Revert org change until published to the Go pkg repository 2024-01-04 12:06:07 -05:00
Ralph
e293c5ab57
Update github repos to lbryfoundation forks 2024-01-04 11:35:29 -05:00
Ralph
5625d54f37
Change to lbryfoundation org 2024-01-04 11:34:08 -05:00
Roy Lee
a0ff51b84a claimtrie: allows '*' in claim name 2022-11-23 08:50:17 -08:00
Roy Lee
4c39a9842c rpcclient: update rescanblockchain support 2022-10-31 00:23:46 -07:00
Roy Lee
f513fca6a7 lbcdblocknotify: reorganize the code with a few updates
1. Fixed a bug where certs were read even when TLS is disabled.

2. Persists the Stratum TCP connection with auto-reconnect
   (retry backoff increases from 1s to a 60s maximum; sketched below).

3. Stratum update jobs on previous notifications are canceled
   when a new notification arrives.

   Usually the jobs are short and complete immediately.
   However, if the Stratum connection is broken, this prevents
   the bridge from accumulating stale jobs.
2022-10-17 00:03:13 -07:00
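A minimal sketch of the reconnect-with-backoff and job-cancellation behavior described above; the helper names (dialStratum, onBlockNotify) are illustrative only, not the actual lbcdblocknotify code:

    package blocknotify

    import (
        "context"
        "net"
        "time"
    )

    // dialStratum keeps retrying the Stratum endpoint, doubling the retry
    // backoff from 1s up to a 60s ceiling until the dial succeeds.
    func dialStratum(addr string) net.Conn {
        backoff := time.Second
        for {
            conn, err := net.Dial("tcp", addr)
            if err == nil {
                return conn
            }
            time.Sleep(backoff)
            if backoff *= 2; backoff > 60*time.Second {
                backoff = 60 * time.Second
            }
        }
    }

    // cancelPrev cancels the update job spawned for the previous notification.
    var cancelPrev context.CancelFunc

    // onBlockNotify cancels the previous job before starting a new one, so a
    // broken Stratum connection cannot accumulate stale update jobs.
    func onBlockNotify(update func(ctx context.Context)) {
        if cancelPrev != nil {
            cancelPrev()
        }
        ctx, cancel := context.WithCancel(context.Background())
        cancelPrev = cancel
        go update(ctx)
    }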
Alex Grintsvayg
6728bf4b08 error properly when lbcd fails to connect in HTTP POST mode
In the case where you're e.g. trying to connect to an
invalid address, the err vars in handleSendPostMessage()
were being shadowed inside the for loop. If c.httpClient.Do()
returned an error, that error never got returned upstream;
ioutil.ReadAll(httpResponse.Body) would then hit a nil pointer
dereference. This fixes that case.
2022-10-14 11:40:46 -07:00
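A minimal sketch of the shadowing bug and its fix, under assumed names (doPost stands in for handleSendPostMessage; maxTries is assumed to be at least 1):

    package rpcclient

    import (
        "io"
        "net/http"
    )

    // doPost illustrates the fix: assign to the outer resp/err instead of
    // re-declaring them with ":=" inside the loop (which left the outer err
    // nil), and return the error before touching resp.Body.
    func doPost(client *http.Client, req *http.Request, maxTries int) ([]byte, error) {
        var resp *http.Response
        var err error
        for i := 0; i < maxTries; i++ {
            // The buggy version used "resp, err := client.Do(req)" here,
            // shadowing both variables inside the loop.
            resp, err = client.Do(req)
            if err == nil {
                break
            }
        }
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        return io.ReadAll(resp.Body)
    }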
Roy Lee
979d643594 [lbry] claimtrie: created node cache 2022-09-29 16:45:42 -07:00
Roy Lee
cbc4d489e8 lbcctl: support --timed, --quiet options 2022-09-29 16:45:42 -07:00
Roy Lee
987a533423 rpc: update rpc cmd requests to support multi-account
Most of the updates add optional arguments with default
values.
2022-09-26 11:08:19 -07:00
Roy Lee
6bc9a2b4dd mining: always returns .coinbasevalue in getblocktemplate
Although the BIPs specify that coinbasetxn and coinbasevalue are
mutually exclusive, both the latest bitcoind (22.0.0) and lbrycrd
(0.17.3) return .coinbasevalue regardless of whether 'coinbasetxn' is
specified in the capabilities.

We'll make lbcd behave the same for compatibility.
2022-09-25 18:48:59 -07:00
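A rough sketch of the compatibility behavior; the GetBlockTemplateResult field comes from btcjson, while the helper around it and the import path are illustrative assumptions:

    package mining

    import "github.com/lbryio/lbcd/btcjson"

    // setCoinbaseValue always populates .coinbasevalue on the reply, matching
    // bitcoind (22.0.0) and lbrycrd (0.17.3), even when the request listed
    // 'coinbasetxn' in its capabilities.
    func setCoinbaseValue(reply *btcjson.GetBlockTemplateResult, subsidyPlusFees int64) {
        reply.CoinbaseValue = &subsidyPlusFees
    }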
Roy Lee
9bcd3d0591 contrib: add a helper script to show the miner of a block 2022-09-23 17:49:01 -07:00
Roy Lee
2adfcd211d rpcclient: add -quiet option to the lbcdblocknotify example 2022-09-23 17:48:05 -07:00
Roy Lee
81ec217899 rpcserver: fix up getblockstats 2022-09-20 23:59:57 -07:00
Guilherme de Paula
5acfa4c81b rpcserver: add GetBlockStats 2022-09-20 23:59:57 -07:00
Roy Lee
c5193e74ac rpc: support hex data output for createrawtransaction 2022-09-14 18:41:04 -07:00
Roy Lee
8a80f0683a [lbry] policy: relax dust threshold to 1000 dewies/kB
An output is considered dust if the cost to the network to spend the
coins is more than 1/3 of the minimum free transaction relay fee, which
has a default rate of 1000 satoshis/kB.

bitcoind refactored the dust threshold calculation, which removed the
multiplier of 3 from the code but increased DUST_RELAY_TX_FEE
from 1000 to 3000 (satoshis/kB).

lbrycrd adopted the refactored code but also kept the rate at
1000 dewies/kB, which means:

    An output is considered dust if the cost to the network to spend the
    coins is more than the minimum free transaction relay fee.
2022-09-01 15:28:07 -07:00
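A minimal sketch of the relaxed rule (illustrative only, not the actual policy code; the real check also accounts for segwit input sizes, and the import path assumes the lbcd module): with the factor of 3 gone and the rate held at 1000 dewies/kB, an output is dust when its value is below the relay fee on the bytes needed to create and spend it.

    package policy

    import "github.com/lbryio/lbcd/wire"

    // dustRelayFee is kept at the old rate of 1000 dewies per kB.
    const dustRelayFee = 1000

    // isDust reports whether spending txOut would cost the network more than
    // the minimum free transaction relay fee (no 1/3 factor anymore).
    func isDust(txOut *wire.TxOut) bool {
        // Bytes to serialize the output plus a nominal 148 bytes to redeem
        // it with a typical non-segwit input.
        totalSize := txOut.SerializeSize() + 148
        return txOut.Value*1000/int64(totalSize) < dustRelayFee
    }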
Roy Lee
5d7a219e35 rpc: make getblock return orphan blocks with confirmation=-1 2022-08-31 18:32:49 -07:00
Roy Lee
2d04d31894 rpc: implement rescanblockchain rpcclient 2022-08-31 18:32:49 -07:00
Roy Lee
ce37025d5a txscript: validate claimscript size 2022-08-30 15:30:07 -07:00
Roy Lee
98e5771989 rpc: implement setban, listbanned, clearbanned RPCs 2022-08-14 21:26:27 -07:00
Roy Lee
ff324e0fdb doc: update snapshot related instructions 2022-08-14 14:17:41 -07:00
Roy Lee
be0d7de8da mining: accommodate pre-BIP0141 coinbase structure
Some popular pool software, yiimp for example, constructs the coinbase
in pre-BIP0141 style, which results in submitblock rejections.
2022-08-12 10:39:26 -07:00
Roy Lee
fcfb2af76f netsync: revert base/segwit encoding hack 2022-08-12 10:39:26 -07:00
Roy Lee
78bed14956 go mod: bump lbcutil to v1.0.202 2022-08-12 10:39:26 -07:00
Roy Lee
fdedbf86f8 mining: include 'segwit' rule when no segwit txns in GBT
According to BIP0009, all active softfork deployments should
be included in the rules.

We add the '!' prefix to indicate enforcement if the template has
any segwit transactions in it. Otherwise, plain `segwit` is fine.
2022-08-08 00:49:16 -07:00
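A minimal sketch of the rule selection (illustrative helper, not the actual mining code):

    package mining

    // segwitRule returns the BIP0009 rules entry for the active segwit
    // deployment: "!segwit" (client must understand it) when the template
    // already contains witness transactions, plain "segwit" otherwise.
    func segwitRule(hasWitnessTx bool) string {
        if hasWitnessTx {
            return "!segwit"
        }
        return "segwit"
    }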
Roy Lee
a9351b3e3a lbcdblocknotify: support --run to execute custom command 2022-08-07 23:55:10 -07:00
Roy Lee
e323751218 ci: gofmt with go 1.19
Go 1.19 introduces various updates to gofmt.
2022-08-07 23:40:53 -07:00
Roy Lee
66c8567a27 ci: bump to Go 1.19 2022-08-07 23:40:17 -07:00
Roy Lee
6b0e7592c6 btcjson: remove WebsocketOnly for wallet extension RPCs 2022-07-29 12:10:53 -07:00
Roy Lee
05f52c11a1 docs: update README.md 2022-07-28 17:23:39 -07:00
Roy Lee
ea63a44c7b [lbry] rpcclient: fix stratum update_block format for blocknotify 2022-07-28 08:32:09 -07:00
Jonathan Moody
daa3137dc4 [rpc blockchain] Add support for mediantime, chainwork to RPC getblock. 2022-07-27 10:41:24 -07:00
Roy Lee
b147fe2a5b Revert "[lbry] claimtrie: created node cache"
This reverts commit 8f95946b17.
2022-07-27 10:18:35 -07:00
Jonathan Moody
7f9fe4b970 [rpc mempool] More tweaks to dynamicMemUsage(). Add toggleable assertions for max depth and switch completeness. Toggle them when running in mempool_test.go. Drop support for reflect.Map, as it's not needed at this time. 2022-07-18 17:17:56 -07:00
Jonathan Moody
eefb1260eb [rpc mempool] Correct comment BTC -> LBC. 2022-07-18 17:17:56 -07:00
Jonathan Moody
a8a44aa988 [rpc mempool] Hide debugging functionality of dynamicMemUsage(). 2022-07-18 17:17:56 -07:00
Jonathan Moody
abb1b8b388 [rpc mempool] Add support for unbroadcastcount to RPC getmempoolinfo. 2022-07-18 17:17:56 -07:00
Jonathan Moody
13e31d033a [rpc mempool] Add support for usage, total_fee, mempoolminfee, minrelaytxfee to RPC getmempoolinfo. 2022-07-18 17:17:56 -07:00
Roy Lee
5499a2c1b3 [lbry] claimtrie: more verbose error message in ResetHeight 2022-07-17 11:32:33 -07:00
Roy Lee
fae4063046 rpc: remove deprecated and unimplemented 'move' 2022-07-14 15:45:06 -07:00
Roy Lee
8d1005706b rpc: remove deprecated and unimplemented 'setaccount' 2022-07-14 15:43:35 -07:00
Roy Lee
bb93a49349 [lbry] config: allow non-localhost connections with TLS disabled 2022-07-11 16:52:38 -07:00
Roy Lee
d5922cd725 [lbry] version: fix version string handling 2022-07-06 20:44:22 -07:00
Roy Lee
3a179a0eee [lbry] rpc: un-embedded attributes in getaddressinfo result
lbcwallet failed to re-generate the RPC help message.

The help message generator doesn't handle embedded fields properly.
2022-07-05 20:12:27 -07:00
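A small sketch of the change, with hypothetical field sets: the help generator walks fields declared directly on the result struct, so spelling the attributes out instead of embedding another type keeps help generation working.

    package btcjsonexample

    // Before: fields live on an embedded type, which the help generator
    // does not descend into.
    type embeddedAddressInfo struct {
        Address      string `json:"address"`
        ScriptPubKey string `json:"scriptPubKey"`
    }

    type getAddressInfoResultBefore struct {
        embeddedAddressInfo
        IsMine bool `json:"ismine"`
    }

    // After: the attributes are un-embedded and declared explicitly on the
    // result struct itself.
    type getAddressInfoResultAfter struct {
        Address      string `json:"address"`
        ScriptPubKey string `json:"scriptPubKey"`
        IsMine       bool   `json:"ismine"`
    }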
Jonathan Moody
ca9b4e5529 Rename nameProgressLogger -> claimProgressLogger and tweak log message. 2022-06-14 11:27:58 -07:00
Jonathan Moody
2b7f065855 Adjust and rename blockProgressLogger -> nameProgressLogger. Use it in makeNameHashNext() to track progress. 2022-06-14 11:27:58 -07:00
Jonathan Moody
b859832907 Copy netsync/blocklogger.go to claimtrie/logger.go. 2022-06-14 11:27:58 -07:00
Jonathan Moody
70852905e0 Allow environment var GOMAXPROCS=<N> to override NumCPU(). 2022-06-14 11:03:27 -07:00
Jonathan Moody
5f7b1f1b4f Copy value received by MergeOlder/MergeNewer so caller can't trash the merge result by modifying the contents. 2022-06-06 14:12:30 -07:00
Jonathan Moody
0241e18f42 Harden Marshal/Unmarshal logic for Change.SpentChildren. 2022-06-06 14:12:30 -07:00
Brannon King
b06df3d750 added buffer pool for pebble merge string 2022-06-06 14:12:30 -07:00
Jonathan Moody
15191b7ede
[lbry] runtime: Add --memprofile option
* Add --memprofile option. Add memprofile to sample config.
* Add --memprofile to doc.go.
2022-06-03 12:08:09 -07:00
Jonathan Moody
92a7a2087a
[lbry] runtime: Allow environment var GOGC=<percent> to override hard-coded SetGCPercent(10). 2022-06-03 09:19:55 -07:00
Brannon King
8f95946b17 [lbry] claimtrie: created node cache 2022-05-26 22:04:33 -07:00
Roy Lee
5d5f53c8d8 [lbry] contrib: add a helper script for generating snapshots 2022-05-26 21:53:59 -07:00
Roy Lee
6e36118193
[lbry] claimtrie: update CLI to support other tools
- fix default app dir name
- enable debug level for cli
- allow the block sub command to output hash, height, or both
2022-05-26 21:52:15 -07:00
Roy Lee
e48200f53a [lbry] wire: limit the blocks of getdata message
In the current codebase, the OnGetData() handler penalizes/bans peers
requesting large blocks.

  server.go:
  @@ -649,7 +649,7 @@ func (sp *serverPeer) OnGetData(_ *peer.Peer, msg *wire.MsgGetData) {
          // bursts of small requests are not penalized as that would potentially ban
          // peers performing IBD.
          // This incremental score decays each minute to half of its value.
          if sp.addBanScore(0, uint32(length)*99/wire.MaxInvPerMsg, "getdata") {
                  return
          }

This accidentally penalizes nodes trying to catch up to checkpoints,
whose 'getdata' requests can be as large as wire.MaxInvPerMsg, so they
get banned very quickly.

This patch limits getdata requests to wire.MaxInvPerMsg/99 blocks.
2022-05-25 22:00:38 -07:00
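A sketch of the client-side cap (illustrative; only wire.MaxInvPerMsg and the /99 ratio come from the patch, and the import path assumes the lbcd module): with at most wire.MaxInvPerMsg/99 inventory vectors per message, the integer ban-score increment shown above works out to zero.

    package netsync

    import "github.com/lbryio/lbcd/wire"

    // maxBlocksPerGetData caps a single getdata request so the server-side
    // increment uint32(length)*99/wire.MaxInvPerMsg stays at zero.
    const maxBlocksPerGetData = wire.MaxInvPerMsg / 99

    // splitGetData chunks block inventory vectors into getdata messages no
    // larger than the cap.
    func splitGetData(invs []*wire.InvVect) []*wire.MsgGetData {
        var msgs []*wire.MsgGetData
        for len(invs) > 0 {
            n := len(invs)
            if n > maxBlocksPerGetData {
                n = maxBlocksPerGetData
            }
            msg := wire.NewMsgGetData()
            for _, iv := range invs[:n] {
                msg.AddInvVect(iv)
            }
            msgs = append(msgs, msg)
            invs = invs[n:]
        }
        return msgs
    }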
Roy Lee
4a8d390a06
[lbry] ci: GoReleaser zero out buildid for reproducible builds (#40) 2022-05-25 21:49:12 -07:00
Roy Lee
aef4e45bd7 [lbry] ci: add github workflows 2022-05-24 02:49:08 -07:00
Roy Lee
0375a6d38b [lbry] ci: support GoReleaser 2022-05-24 02:35:27 -07:00
Roy Lee
2cfa235a33 [lbry] ci: update Dockerfile
Use $ARCH/debian:bullseye-20220418-slim as base image
2022-05-24 00:47:34 -07:00
Roy Lee
7ee3d7d26b [lbry] ci: add .golangci-lint.yml 2022-05-24 00:47:34 -07:00
Roy Lee
d4cddda35c [lbry] ci: update goclean.sh 2022-05-24 00:41:56 -07:00
Roy Lee
76e482bb73 [lbry] ci: remove release/release.sh 2022-05-24 00:39:44 -07:00
Roy Lee
badb894e3a [lbry] ci: update .gitignore 2022-05-24 00:39:44 -07:00
Roy Lee
3cb961257c [lbry] ci: fixed various lint errors 2022-05-24 00:39:44 -07:00
Roy Lee
bf7a513006 [lbry] go mod: update go modules 2022-05-24 00:04:19 -07:00
Roy Lee
7c5a2c6f58 [lbry] version: update codebase to use version package 2022-05-24 00:04:19 -07:00
Roy Lee
3662f316ab [lbry] version: add version package 2022-05-24 00:04:19 -07:00
Roy Lee
43d3086ce1 [lbry] mempool: update getrawmempool and implement getmempoolentry
TODO:
1. Populate ancestor and descendant related fields instead of mocking.
2. Move and refactor the implementation of getmempoolentry into the
   mempool package.
2022-05-24 00:04:19 -07:00
Roy Lee
7513046f70 [lbry] fees: replace estimatefee with estimatesmartfee 2022-05-24 00:04:19 -07:00
Roy Lee
d126d0c10e [lbry] fees: port estimatesmartfee from DCRD
1. logger
2. blockheight: int64 -> int32
3. dcrutil -> lbcutil
4. MaxConfirmation: 42
5. MinBucketFee: mempool.MinRelayFee (default 1000)
6. BucketFee Spacing: 1.1 -> 1.05

Note:
  The DCRD implementation of estimatesmartfee is based on Bitcoin Core 0.14,
  while lbrycrd (0.17) includes the updates from Bitcoin Core 0.15.
  They are slightly different, but it shouldn't matter much.
2022-05-24 00:04:19 -07:00
Roy Lee
324c443c64 [lbry] fees: initial import from DCRD
vendored https://github.com/decred/dcrd/tree/master/internal/fees

Commit of the last modification

    commit a6e205b88fbb44f7ee85be25a81f4dad155670d8
    Author: Dave Collins <davec@conformal.com>
    Date:   Sat Dec 26 12:17:48 2020 -0600

        fees: Remove deprecated DisableLog.
2022-05-24 00:04:19 -07:00
Roy Lee
d99883a620 [lbry] btcjson: take integers for boolean parameters.
This is for backward compatibility with lbrycrd/bitcoind where some clients
use integer values (0/1) for booleans.
2022-05-24 00:04:19 -07:00
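A minimal sketch of the accepted forms (illustrative; the real btcjson parameter conversion is more general):

    package compat

    import (
        "encoding/json"
        "fmt"
    )

    // parseBool accepts JSON true/false as well as the integer forms 0/1
    // used by some lbrycrd/bitcoind clients.
    func parseBool(raw json.RawMessage) (bool, error) {
        var b bool
        if err := json.Unmarshal(raw, &b); err == nil {
            return b, nil
        }
        var n int
        if err := json.Unmarshal(raw, &n); err == nil && (n == 0 || n == 1) {
            return n == 1, nil
        }
        return false, fmt.Errorf("invalid boolean value: %s", raw)
    }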
Roy Lee
a7f971f404 [lbry] rpc: update getrawtransaction to take verbose as boolean 2022-05-24 00:04:19 -07:00
Jeffrey Picard
239d681f28 [lbry] contrib: add linode deployment using docker 2022-05-24 00:04:19 -07:00
Roy Lee
d35a82412f [lbry] align port settings between lbcd, lbcctl, and lbcwallet 2022-05-24 00:01:46 -07:00
Brannon King
2bd6e4c3a9 [lbry] ffldb: increase open file limit and flush more often 2022-05-24 00:01:46 -07:00
Brannon King
b4623ef2dd [lbry] increase open file limit to 2048 2022-05-24 00:01:46 -07:00
Brannon King
4dd4505706 [lbry] docs: update docs for LBRY
Co-authored-by: Roy Lee <roylee17@gmail.com>
2022-05-24 00:01:46 -07:00
Brannon King
1b823c055f [lbry] test: don't remove old regression DB 2022-05-24 00:01:46 -07:00
Brannon King
a07bb527df [lbry] test: fixed all current tests and deleted three.
Co-authored-by: Roy Lee <roylee17@gmail.com>
2022-05-24 00:01:46 -07:00
Brannon King
d6a6b53551 [lbry] upnp: brought in upnp fix from dcrd 2022-05-24 00:01:46 -07:00
Jonathan Moody
fe1637c223 [lbry] config: Verify completeness of sample-lbcd.conf using reflection on config struct. 2022-05-24 00:01:46 -07:00
Jonathan Moody
6c2a3d8bcf [lbry] config: Embed sample-lbcd.conf contents at build time.
Use embedded config if the sample-lbcd.conf is not found at runtime.
2022-05-24 00:01:46 -07:00
Jonathan Moody
2add30af9a [lbry] config: Add a number of missing options to sample-lbcd.conf.
Correct "blacklist is applied before the blacklist" typo in help text.
2022-05-24 00:01:46 -07:00
Roy Lee
b8b2bd1584 [lbry] config: enable txindex by default 2022-05-24 00:01:46 -07:00
Brannon King
f3e1c96de9 [lbry] config: enable upnp by default 2022-05-24 00:01:46 -07:00
Brannon King
0a0e79bc41 [lbry] enable segwit 2022-05-24 00:01:45 -07:00
Brannon King
023aa5d6b0 [lbry] btcjson: added optional address type for getnewaddress 2022-05-24 00:01:45 -07:00
Brannon King
de2a548207 [lbry] btcjson: added claim-related fields for wallet 2022-05-24 00:01:45 -07:00
Roy Lee
568544961f [lbry] rpcserver: log the reason of submitblock rejection 2022-05-24 00:01:45 -07:00
Brannon King
8c984993a8 [lbry] rpcserver: made invalidate/reconsiderBlock return RPC errors 2022-05-24 00:01:45 -07:00
Brannon King
6c0360fa42 [lbry] rpcserver: made estimatesmartfee call estimatefee (for now) 2022-05-24 00:01:45 -07:00
Roy Lee
d20a2e53b4 [lbry] mining: return witness_script instead of raw witness_commitment in GBT 2022-05-24 00:01:45 -07:00
Roy Lee
29f64f9dcf [lbry] mining: enlarge updateHash channel buffers 2022-05-24 00:01:45 -07:00
Brannon King
e0870db24e [lbry] mining: calculate claimtrie root hash for generate RPC 2022-05-24 00:01:45 -07:00
Brannon King
6784830246 [lbry] blockchain: clear statusValid when statusValidateFailed is set
The status management of the index does need some refactoring.
For now, we just manually clear statusValid at every occurrence
of statusValidateFailed being set.

Co-authored-by: Roy Lee <roylee17@gmail.com>
2022-05-24 00:01:45 -07:00
Brannon King
405897fa38 [lbry] blockchain: fix crash on unlock generate/invalidate loop 2022-05-24 00:01:45 -07:00
Roy Lee
64884458f9 [lbry] rpc, mining: fix generatetoaddress 2022-05-24 00:01:45 -07:00
Alex Grintsvayg
5537ebbf0c [lbry] rpc: add GetChainTips rpc command 2022-05-24 00:01:45 -07:00
Brannon King
1ea849d509 [lbry] rpc: added getchaintips RPC
remove btcjson dep in chainquery
2022-05-24 00:01:45 -07:00
Brannon King
73d8f4762f [lbry] rpc: import invalidate/reconsiderblock from bchd 2022-05-24 00:01:45 -07:00
Brannon King
81862c664e [lbry] rpc: import getnetworkinfo from bchd 2022-05-24 00:01:45 -07:00
Brannon King
5116f45617 [lbry] rpc: fix getblock response 2022-05-24 00:01:45 -07:00
Brannon King
3d8f36a505 [lbry] rpc: output segwit rule 2022-05-24 00:01:45 -07:00
Roy Lee
096dd3ff75 [lbry] rpcclient: fix http response resource leaking 2022-05-24 00:01:02 -07:00
Roy Lee
fb3ef35189 [lbry] rpcclient: support SkipVerify of TLS certificate. (#39) 2022-05-23 23:53:30 -07:00
Roy Lee
3111601ac9 [lbry] rpcclient: add a blocknotify example using lbcd websocket 2022-05-23 23:53:30 -07:00
Brannon King
e7d8637cc5 [lbry] rpcclient: update defaultMaxFeeRate from 0.1 LBC to 0.5 LBC 2022-05-23 23:53:30 -07:00
Brannon King
9d70ff6f6d [lbry] rpcserver: add ClaimTrie root hash to GetBlockTemplate() 2022-05-23 23:53:30 -07:00
Brannon King
6834591d52 [lbry] rpc: support claim related methods 2022-05-23 23:53:30 -07:00
Brannon King
0c5f94420a [lbry] print out memory usage periodically 2022-05-23 23:53:30 -07:00
Roy Lee
6828cf5e36 [lbry] claimtrie: import current snapshot
Sync to tip

Co-authored-by: Brannon King <countprimes@gmail.com>
2022-05-23 23:53:30 -07:00
Roy Lee
45627c7a6a [lbry] rename btcd to lbcd
Co-authored-by: Brannon King <countprimes@gmail.com>
2022-05-23 23:53:30 -07:00
Roy Lee
aae5b24bb0 [lbry] blockchain: connect to ClaimTrie
Co-authored-by: Brannon King <countprimes@gmail.com>
2022-05-23 23:53:30 -07:00
Brannon King
ba22414cc1 [lbry] log: support claimtrie entries 2022-05-23 23:53:30 -07:00
Brannon King
78d780263b [lbry] txscript: remove claim prefix for addr calculation 2022-05-23 23:53:30 -07:00
Roy Lee
61a18152e9 [lbry] txscript: recognize LBRY claim script OPCODES 2022-05-23 23:53:30 -07:00
Roy Lee
2b28dfa528 [lbry] txscript: introduce claim script
Co-authored-by: Brannon King <countprimes@gmail.com>
2022-05-23 23:53:30 -07:00
Roy Lee
7657419b22 [lbry] txscript: change MaxScriptSize from 10,000 to 20,005 2022-05-23 23:53:30 -07:00
Alex Grintsvayg
33328a3e93 [lbry] server: make uptime rpc return a real uptime 2022-05-23 23:53:30 -07:00
Brannon King
44dd82a9f5 [lbry] server: don't ban peers on tx-not-in-block behavior 2022-05-23 23:53:30 -07:00
Roy Lee
93481d7f3a [lbry] server: update client version to /btcwire:0.5.0/LBRY.GO:0.12.2/
TODO: double check if lbryd bumps the version.
2022-05-23 23:53:30 -07:00
Brannon King
4eb4dfa670 [lbry] blockchain: Consider a block with a timestamp less than 6 hours old 'current' 2022-05-23 23:53:30 -07:00
Brannon King
8ff0f3787e [lbry] blockchain: support force active fork deployment 2022-05-23 23:53:29 -07:00
Roy Lee
4a987b068d [lbry] blockchain, mempool: validate txscripts
Co-authored-by: Brannon King <countprimes@gmail.com>
2022-05-23 23:53:29 -07:00
Roy Lee
767a375816 [lbry] blockchain: change Block Subsidy algorithm 2022-05-23 23:53:29 -07:00
Roy Lee
e63ede0311 [lbry] blockchain: change the difficulty adjustment algorithm.
  adjusted := target + (actual - target) / 8

  max := target + (target / 2)
  min := target - (target / 8)

  if adjusted > max {
    adjusted = max
  } else if adjusted < min {
    adjusted = min
  }

  difficulty := lastDifficulty * adjusted / target
2022-05-23 23:53:29 -07:00
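The same rule as a self-contained Go sketch (the real code operates on compact-bits targets and big integers; this uses plain int64 timespans for illustration):

    package blockchain

    // nextDifficulty moves 1/8 of the way from the target timespan toward
    // the actual timespan, clamps the adjustment to [-1/8, +1/2] of the
    // target, and scales the previous value by the result.
    func nextDifficulty(lastDifficulty, actual, target int64) int64 {
        adjusted := target + (actual-target)/8

        max := target + target/2
        min := target - target/8

        if adjusted > max {
            adjusted = max
        } else if adjusted < min {
            adjusted = min
        }

        return lastDifficulty * adjusted / target
    }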
Roy Lee
4bfd69e23d [lbry] blockchain: make UTXO in Genesis block spendable 2022-05-23 23:53:29 -07:00
Roy Lee
b2d0ae301e [lbry] blockchain, txscript: change maxScriptElementSize from 520 to 20,000 bytes 2022-05-23 23:53:29 -07:00
Roy Lee
03989a91d9 [lbry] blockchain, wire: verify blockheaders using LBRY PoW 2022-05-23 23:53:29 -07:00
Roy Lee
9f479837c1 [lbry] blockchain: change max block size to 2,000,000 2022-05-23 23:53:29 -07:00
Roy Lee
3dc91f1295 [lbry] blockchain, wire: add ClaimTrie to Block Header 2022-05-23 23:53:29 -07:00
Roy Lee
cbac056756 [lbry] chaincfg: update chainparams for LBRY chain
Co-authored-by: Brannon King <countprimes@gmail.com>
Co-authored-by: Alex Grintsvayg <grin@lbry.com>
2022-05-23 23:53:29 -07:00
Roy Lee
ba3ca3b77e [lbry] chaincfg: set up genesis blocks 2022-05-23 23:53:29 -07:00
Roy Lee
21ad6495b6 [lbry] chaincfg: implement LBRY PoW Hash 2022-05-23 23:53:29 -07:00
Brannon King
dba1eb7261 [lbry] profile: support fgprof (flame graph) 2022-05-23 23:53:29 -07:00
Roy Lee
31d3d9debc [lbry] wire: increase wire.MaxBlockPayload to 8MB 2022-05-23 23:53:29 -07:00
Roy Lee
abe121ea6e [lbry] wire: update protocol NetIDs 2022-05-23 21:46:22 -07:00
Brannon King
2c3c4db198 [lbry] wire: optimize binaryFreeList handling 2022-05-23 21:46:22 -07:00
Tomasz Ziolkowski
a8ad257660 reduce redundant memory allocation - resolves btcsuite/btcd#1699
Signed-off-by: Tomasz Ziolkowski <tomasz.ziolkowski@allegro.pl>
2022-05-23 21:46:22 -07:00
Calvin Kim
40dab558f6 go.mod, go.sum: Update goleveldb
Goleveldb recently had a PR where memory allocation was reduced
drastically (github.com/syndtr/goleveldb/pull/367).  Update goleveldb
to use that PR.
2022-05-23 21:46:22 -07:00
Dave Collins
42310f6948 txscript: Make op callbacks take opcode and data.
This converts the callback function defined on the internal opcode
struct to accept the opcode and data slice instead of a parsed opcode as
the final step towards removing the parsed opcode struct and associated
supporting code altogether.

It also updates all of the callbacks and tests accordingly and finally
removes the now unused parsedOpcode struct.

The final results for the raw script analysis and tokenizer
optimizations are as follows:

benchmark                                       old ns/op     new ns/op     delta
BenchmarkIsPayToScriptHash-8                    62393         0.51          -100.00%
BenchmarkIsPubKeyHashScript-8                   62228         0.56          -100.00%
BenchmarkGetSigOpCount-8                        61051         658           -98.92%
BenchmarkExtractPkScriptAddrsLarge-8            60713         17.2          -99.97%
BenchmarkExtractPkScriptAddrs-8                 289           17.9          -93.81%
BenchmarkIsWitnessPubKeyHash-8                  61688         0.42          -100.00%
BenchmarkIsUnspendable-8                        656           520           -20.73%
BenchmarkExtractAtomicSwapDataPushesLarge-8     61332         44.0          -99.93%
BenchmarkExtractAtomicSwapDataPushes-8          990           260           -73.74%
BenchmarkDisasmString-8                         102902        39754         -61.37%
BenchmarkGetPreciseSigOpCount-8                 130223        715           -99.45%
BenchmarkScriptParsing-8                        63464         681           -98.93%
BenchmarkIsMultisigScriptLarge-8                64166         5.83          -99.99%
BenchmarkIsMultisigScript-8                     630           58.5          -90.71%
BenchmarkPushedData-8                           64837         1779          -97.26%
BenchmarkCalcSigHash-8                          3627895       3605459       -0.62%
BenchmarkIsPubKeyScript-8                       62323         2.83          -100.00%
BenchmarkIsPushOnlyScript-8                     62412         569           -99.09%
BenchmarkIsWitnessScriptHash-8                  61243         0.56          -100.00%
BenchmarkGetScriptClass-8                       61515         16.4          -99.97%
BenchmarkIsNullDataScript-8                     62495         2.53          -100.00%
BenchmarkIsMultisigSigScriptLarge-8             69328         2.52          -100.00%
BenchmarkIsMultisigSigScript-8                  2375          141           -94.06%
BenchmarkGetWitnessSigOpCountP2WKH-8            504           72.0          -85.71%
BenchmarkGetWitnessSigOpCountNested-8           1158          136           -88.26%
BenchmarkIsWitnessPubKeyHash-8                  68927         0.53          -100.00%
BenchmarkIsWitnessScriptHash-8                  62774         0.63          -100.00%

benchmark                                       old allocs     new allocs     delta
BenchmarkIsPayToScriptHash-8                    1              0              -100.00%
BenchmarkIsPubKeyHashScript-8                   1              0              -100.00%
BenchmarkGetSigOpCount-8                        1              0              -100.00%
BenchmarkExtractPkScriptAddrsLarge-8            1              0              -100.00%
BenchmarkExtractPkScriptAddrs-8                 1              0              -100.00%
BenchmarkIsWitnessPubKeyHash-8                  1              0              -100.00%
BenchmarkIsUnspendable-8                        1              0              -100.00%
BenchmarkExtractAtomicSwapDataPushesLarge-8     1              0              -100.00%
BenchmarkExtractAtomicSwapDataPushes-8          2              1              -50.00%
BenchmarkDisasmString-8                         46             51             +10.87%
BenchmarkGetPreciseSigOpCount-8                 3              0              -100.00%
BenchmarkScriptParsing-8                        1              0              -100.00%
BenchmarkIsMultisigScriptLarge-8                1              0              -100.00%
BenchmarkIsMultisigScript-8                     1              0              -100.00%
BenchmarkPushedData-8                           7              6              -14.29%
BenchmarkCalcSigHash-8                          1335           712            -46.67%
BenchmarkIsPubKeyScript-8                       1              0              -100.00%
BenchmarkIsPushOnlyScript-8                     1              0              -100.00%
BenchmarkIsWitnessScriptHash-8                  1              0              -100.00%
BenchmarkGetScriptClass-8                       1              0              -100.00%
BenchmarkIsNullDataScript-8                     1              0              -100.00%
BenchmarkIsMultisigSigScriptLarge-8             5              0              -100.00%
BenchmarkIsMultisigSigScript-8                  3              0              -100.00%
BenchmarkGetWitnessSigOpCountP2WKH-8            2              0              -100.00%
BenchmarkGetWitnessSigOpCountNested-8           4              0              -100.00%
BenchmarkIsWitnessPubKeyHash-8                  1              0              -100.00%
BenchmarkIsWitnessScriptHash-8                  1              0              -100.00%

benchmark                                       old bytes     new bytes     delta
BenchmarkIsPayToScriptHash-8                    311299        0             -100.00%
BenchmarkIsPubKeyHashScript-8                   311299        0             -100.00%
BenchmarkGetSigOpCount-8                        311299        0             -100.00%
BenchmarkExtractPkScriptAddrsLarge-8            311299        0             -100.00%
BenchmarkExtractPkScriptAddrs-8                 768           0             -100.00%
BenchmarkIsWitnessPubKeyHash-8                  311299        0             -100.00%
BenchmarkIsUnspendable-8                        1             0             -100.00%
BenchmarkExtractAtomicSwapDataPushesLarge-8     311299        0             -100.00%
BenchmarkExtractAtomicSwapDataPushes-8          3168          96            -96.97%
BenchmarkDisasmString-8                         389324        130552        -66.47%
BenchmarkGetPreciseSigOpCount-8                 623367        0             -100.00%
BenchmarkScriptParsing-8                        311299        0             -100.00%
BenchmarkIsMultisigScriptLarge-8                311299        0             -100.00%
BenchmarkIsMultisigScript-8                     2304          0             -100.00%
BenchmarkPushedData-8                           312816        1520          -99.51%
BenchmarkCalcSigHash-8                          1373812       1290507       -6.06%
BenchmarkIsPubKeyScript-8                       311299        0             -100.00%
BenchmarkIsPushOnlyScript-8                     311299        0             -100.00%
BenchmarkIsWitnessScriptHash-8                  311299        0             -100.00%
BenchmarkGetScriptClass-8                       311299        0             -100.00%
BenchmarkIsNullDataScript-8                     311299        0             -100.00%
BenchmarkIsMultisigSigScriptLarge-8             330035        0             -100.00%
BenchmarkIsMultisigSigScript-8                  9472          0             -100.00%
BenchmarkGetWitnessSigOpCountP2WKH-8            1408          0             -100.00%
BenchmarkGetWitnessSigOpCountNested-8           3200          0             -100.00%
BenchmarkIsWitnessPubKeyHash-8                  311299        0             -100.00%
BenchmarkIsWitnessScriptHash-8                  311299        0             -100.00%
2022-05-23 21:46:22 -07:00
Dave Collins
7358671e83 txscript: Make executeOpcode take opcode and data.
This converts the executeOpcode function defined on the engine to accept
an opcode and data slice instead of a parsed opcode as a step towards
removing the parsed opcode struct and associated supporting code altogether.

It also updates all callers accordingly.
2022-05-23 21:46:21 -07:00
Dave Collins
6f3f4c1b8c txscript: Remove unused parseScriptTemplate func.
Also remove tests associated with the func accordingly.
2022-05-23 21:46:21 -07:00
Dave Collins
4bbfd2413c txscript: Remove unused parsedOpcode.bytes func. 2022-05-23 21:46:21 -07:00
Dave Collins
849873675f txscript: Remove unused unparseScript func.
Also remove tests associated with unparsing opcodes accordingly.
2022-05-23 21:46:21 -07:00
Conner Fromknecht
82e02951dc txscript: Remove unused calcWitnessSignatureHash 2022-05-23 21:46:21 -07:00
Dave Collins
08a53b25fb txscript: Remove unused parseScript func. 2022-05-23 21:46:21 -07:00
Conner Fromknecht
8588536586 txscript/pkscript: Use finalOpcodeData to extract redeem script 2022-05-23 21:46:21 -07:00
Conner Fromknecht
e84398d21e txscript/sign: Use calcWitnessSigHashRaw for witness sigs 2022-05-23 21:46:21 -07:00
Conner Fromknecht
b40859ff00 txscript: Rename calcSignatureHashRaw 2022-05-23 21:46:21 -07:00
Dave Collins
9d1d6d59a6 txscript: Rename removeOpcodeByDataRaw func.
This renames the removeOpcodeByDataRaw to removeOpcodeByData now that
the old version has been removed.
2022-05-23 21:46:21 -07:00
Dave Collins
64aeab7882 txscript: Remove unused removeOpcodeByData func. 2022-05-23 21:46:21 -07:00
Conner Fromknecht
aa1014c87b txscript: Remove unused isWitnessProgram 2022-05-23 21:46:21 -07:00
Conner Fromknecht
bd07a2580e txscript: Remove unused calcSignatureHash 2022-05-23 21:46:21 -07:00
Dave Collins
b871286f98 txscript: Refactor engine to use raw scripts.
This refactors the script engine to store and step through raw scripts
by making using of the new zero-allocation script tokenizer as opposed
to the less efficient method of storing and stepping through parsed
opcodes.  It also improves several aspects while refactoring such as
optimizing the disassembly trace, showing all scripts in the trace in
the case of execution failure, and providing additional comments
describing the purpose of each field in the engine.

It should be noted that this is a step towards removing the parsed
opcode struct and associated supporting code altogether, however, in
order to ease the review process, this retains the struct and all
function signatures for opcode execution which make use of an individual
parsed opcode.  Those will be updated in future commits.

The following is an overview of the changes:

- Modify internal engine scripts slice to use raw scripts instead of
  parsed opcodes
- Introduce a tokenizer to the engine to track the current script
- Remove no longer needed script offset parameter from the engine since
  that is tracked by the tokenizer
- Add an opcode index counter for disassembly purposes to the engine
- Update check for valid program counter to only consider the script
  index
  - Update tests for bad program counter accordingly
- Rework the NewEngine function
  - Store the raw scripts
  - Setup the initial tokenizer
  - Explicitly check against version 0 instead of DefaultScriptVersion
    which would break consensus if changed
  - Check the scripts parse according to version 0 semantics to retain
    current consensus rules
  - Improve comments throughout
- Rework the Step function
  - Use the tokenizer and raw scripts
  - Create a parsed opcode on the fly for now to retain existing
    opcode execution function signatures
  - Improve comments throughout
- Update the Execute function
  - Explicitly check against version 0 instead of DefaultScriptVersion
    which would break consensus if changed
  - Improve the disassembly tracing in the case of error
- Update the CheckErrorCondition function
  - Modify clean stack error message to make sense in all cases
  - Improve the comments
- Update the DisasmPC and DisasmScript functions on the engine
  - Use the tokenizer
  - Optimize construction via the use of strings.Builder
- Modify the subScript function to return the raw script bytes since the
  parsed opcodes are no longer stored
- Update the various signature checking opcodes to use the raw opcode
  data removal and signature hash calculation functions since the
  subscript is now a raw script
  - opcodeCheckSig
  - opcodeCheckMultiSig
  - opcodeCheckSigAlt
2022-05-23 21:46:21 -07:00
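A short sketch of walking a raw script with the zero-allocation tokenizer the engine now steps through; txscript.MakeScriptTokenizer is the exported entry point, while the example script and the import path (assumed lbcd module) are illustrative:

    package main

    import (
        "fmt"

        "github.com/lbryio/lbcd/txscript"
    )

    func main() {
        // OP_DUP OP_HASH160 <20-byte push> OP_EQUALVERIFY OP_CHECKSIG
        script := []byte{0x76, 0xa9, 0x14}
        script = append(script, make([]byte, 20)...)
        script = append(script, 0x88, 0xac)

        // Step through the raw script without building parsed opcodes.
        const scriptVersion = 0
        tokenizer := txscript.MakeScriptTokenizer(scriptVersion, script)
        for tokenizer.Next() {
            fmt.Printf("opcode 0x%02x, %d data bytes\n",
                tokenizer.Opcode(), len(tokenizer.Data()))
        }
        if err := tokenizer.Err(); err != nil {
            fmt.Println("script parse error:", err)
        }
    }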
Dave Collins
6198f45307 txscript: Convert to use non-parsed opcode disasm.
This converts the engine's current program counter disassembly to make
use of the standalone disassembly function to remove the dependency on
the parsed opcode struct.

It also updates the tests accordingly.
2022-05-23 21:46:21 -07:00
Dave Collins
07e1369839 txscript: Make min push accept raw opcode and data.
This converts the checkMinimalDataPush function defined on a parsed
opcode to a standalone function which accepts an opcode and data slice
instead in order to make it more flexible for raw script analysis.

It also updates all callers accordingly.
2022-05-23 21:46:21 -07:00
Dave Collins
e610deb203 txscript: Make isConditional accept raw opcode.
This converts the isConditional function defined on a parsed opcode to a
standalone function named isOpcodeConditional which accepts an opcode as
a byte instead in order to make it more flexible for raw script
analysis.

It also updates all callers accordingly.
2022-05-23 21:46:21 -07:00
Dave Collins
804327d22c txscript: Make alwaysIllegal accept raw opcode.
This converts the alwaysIllegal function defined on a parsed opcode to a
standalone function named isOpcodeAlwaysIllegal which accepts an opcode
as a byte instead in order to make it more flexible for raw script
analysis.

It also updates all callers accordingly.
2022-05-23 21:46:21 -07:00
Dave Collins
e928eeb5ce txscript: Make isDisabled accept raw opcode.
This converts the isDisabled function defined on a parsed opcode to a
standalone function which accepts an opcode as a byte instead in order
to make it more flexible for raw script analysis.

It also updates all callers accordingly.
2022-05-23 21:46:21 -07:00
Conner Fromknecht
25206b9565 txscript: Use removeOpcodeRaw for CODESEP in calcSigHash 2022-05-23 21:46:21 -07:00
Conner Fromknecht
1e3b85cd60 txscript: Remove unused removeOpcode 2022-05-23 21:46:21 -07:00
Conner Fromknecht
90b8c2cb51 txscript: Optimize removeOpcodeRaw 2022-05-23 21:46:21 -07:00
Dave Collins
f5d78e8b10 txscript: Implement efficient opcode data removal.
This introduces a new function named removeOpcodeByDataRaw which accepts
the raw scripts and data to remove versus requiring the parsed opcodes
to both significantly optimize it as well as make it more flexible for
working with raw scripts.

There are several places in the rest of the code that currently only
have access to the parsed opcodes, so this only introduces the function
for use in the future and deprecates the existing one.

Note that, in practice, the script will never actually contain the data
that is intended to be removed since the function is only used during
signature verification to remove the signature itself which would
require some incredibly non-standard code to create.

Thus, as an optimization, it avoids allocating a new script unless there
is actually a match that needs to be removed.

Finally, it updates the tests to use the new function.
2022-05-23 21:46:21 -07:00
Dave Collins
5283e30bfc txscript: Use raw scripts in SignTxOutput.
This converts SignTxOutput and supporting funcs, namely sign,
mergeScripts and mergeMultiSig, to make use of the new tokenizer as well
as some recently added funcs that deal with raw scripts in order to
remove the reliance on parsed opcodes as a step towards ultimately
removing them altogether and updates the comments to explicitly call out
the script version semantics.

It is worth noting that this has the side effect of optimizing the
function as well, however, since this change is not focused on the
optimization aspects, no benchmarks are provided.
2022-05-23 21:46:21 -07:00
Conner Fromknecht
b8e25c397c txscript: Use optimized calcWitnessSignatureHashRaw w/o parsing 2022-05-23 21:46:21 -07:00
Conner Fromknecht
21ed801540 txscript: Remove unused isWitnessPubKeyHash 2022-05-23 21:46:21 -07:00
Conner Fromknecht
13dbfa3d87 txscript: Introduce calcWitnessSignatureHashRaw 2022-05-23 21:46:21 -07:00
Dave Collins
f8978f5804 txscript: mergeMultiSig function def order cleanup.
This moves the function definition for mergeMultiSig so it is more
consistent with the preferred order used through the codebase.  In
particular, the functions are defined before they're first used and
generally as close as possible to the first use when they're defined in
the same file.
2022-05-23 21:46:21 -07:00
Conner Fromknecht
a3166c8d9b txscript: Optimize ExtractWitnessProgramInfo 2022-05-23 21:46:21 -07:00
Conner Fromknecht
ad01d080d9 txscript: Use internal analysis methods for GetWitnessSigOpCount 2022-05-23 21:46:21 -07:00
Conner Fromknecht
76131529f2 txscript: Return witness version and program in one pass 2022-05-23 21:46:21 -07:00
Conner Fromknecht
1936f28d33 txscript: Optimize IsWitnessProgram 2022-05-23 21:46:21 -07:00
Conner Fromknecht
67168099d3 txscript: Optimize ExtractPkScriptAddrs: assume non-standard if no match
This completes the process of converting the ExtractPkScriptAddrs
function to use the optimized extraction functions recently introduced
as part of the typeOfScript conversion.

In particular, this cleans up the final remaining case for non-standard
transactions. The method now returns NonStandardTy directly if no other
branch was taken.

The following is a before and after comparison of attempting to extract
pkscript addrs from a very large, non-standard script.

benchmark                                old ns/op     new ns/op     delta
BenchmarkExtractPkScriptAddrsLarge-8     60713         17.0          -99.97%
BenchmarkExtractPkScriptAddrs-8          289           17.0          -94.12%

benchmark                                old allocs     new allocs     delta
BenchmarkExtractPkScriptAddrsLarge-8     1              0              -100.00%
BenchmarkExtractPkScriptAddrs-8          1              0              -100.00%

benchmark                                old bytes     new bytes     delta
BenchmarkExtractPkScriptAddrsLarge-8     311299        0             -100.00%
BenchmarkExtractPkScriptAddrs-8          768           0             -100.00%
2022-05-23 21:46:21 -07:00
Conner Fromknecht
37f9cdd115 txscript: Optimize ExtractPkScriptAddrs witness script hash
This continues the process of converting the ExtractPkScriptAddrs
function to use the optimized extraction functions recently introduced
as part of the typeOfScript conversion.

In particular, this converts the extract of witness-pay-to-script-hash
scripts.
2022-05-23 21:46:21 -07:00
Conner Fromknecht
a3c39034b8 txscript: Optimize ExtractPkScriptAddrs witness pubkey hash
This continues the process of converting the ExtractPkScriptAddrs
function to use the optimized extraction functions recently introduced
as part of the typeOfScript conversion.

In particular, this converts the extraction for witness-pubkey-hash
scripts.
2022-05-23 21:46:21 -07:00
Dave Collins
45bdd26ac3 txscript: Optimize ExtractPkScriptAddrs nulldata.
This continues the process of converting the ExtractPkScriptAddrs
function to use the optimized extraction functions recently introduced
as part of the typeOfScript conversion.

In particular, this converts the detection for nulldata scripts, removes
the slow path fallback code since it is the final case, and modifies the
comment to call out the script version semantics.

The following is a before and after comparison of analyzing both a
typical standard script and a very large non-standard script:

benchmark                            old ns/op    new ns/op    delta
-----------------------------------------------------------------------
BenchmarkExtractPkScriptAddrsLarge   132400       44.4         -99.97%
BenchmarkExtractPkScriptAddrs        1265         231          -81.74%

benchmark                            old allocs   new allocs   delta
-----------------------------------------------------------------------
BenchmarkExtractPkScriptAddrsLarge   1            0            -100.00%
BenchmarkExtractPkScriptAddrs        5            2            -60.00%

benchmark                            old bytes    new bytes    delta
-----------------------------------------------------------------------
BenchmarkExtractPkScriptAddrsLarge   466944       0            -100.00%
BenchmarkExtractPkScriptAddrs        1600         48           -97.00%
2022-05-23 21:46:21 -07:00
Dave Collins
8a4da7690d txscript: Optimize ExtractPkScriptAddrs multisig.
This continues the process of converting the ExtractPkScriptAddrs
function to use the optimized extraction functions recently introduced
as part of the typeOfScript conversion.

In particular, this converts the detection for multisig scripts.

Also, since the remaining slow path cases are all recursive calls,
the parsed opcodes are no longer used, so parsing is removed.
2022-05-23 21:46:21 -07:00
Dave Collins
dcbff6a507 txscript: Optimize ExtractPkScriptAddrs pubkey.
This continues the process of converting the ExtractPkScriptAddrs
function to use the optimized extraction functions recently introduced
as part of the typeOfScript conversion.

In particular, this converts the detection for pay-to-pubkey scripts.
2022-05-23 21:46:21 -07:00
Dave Collins
9b541ad169 txscript: Optimize ExtractPkScriptAddrs pubkeyhash.
This continues the process of converting the ExtractPkScriptAddrs
function to use the optimized extraction functions recently introduced
as part of the typeOfScript conversion.

In particular, this converts the detection for pay-to-pubkey-hash
scripts.
2022-05-23 21:46:21 -07:00
Dave Collins
9dec01adf4 txscript: Optimize ExtractPkScriptAddrs scripthash.
This begins the process of converting the ExtractPkScriptAddrs function
to use the optimized extraction functions recently introduced as part of
the typeOfScript conversion.

In order to ease the review process, the detection of each script type
will be converted in a separate commit such that the script is only
parsed as a fallback for the cases that are not already converted to
more efficient variants.

In particular, this converts the detection for pay-to-script-hash
scripts.
2022-05-23 21:46:21 -07:00
Dave Collins
e4c9d283b5 txscript: Add ExtractPkScriptAddrs benchmarks. 2022-05-23 21:46:21 -07:00
Dave Collins
96c8bc1e93 txscript: Optimize ExtractAtomicSwapDataPushes.
This converts the ExtractAtomicSwapDataPushes function to make use of
the new tokenizer instead of the far less efficient parseScript thereby
significantly optimizing the function.

The new implementation is designed such that it should be fairly easy to
move the function into the atomic swap tools where it more naturally
belongs now that the tokenizer makes it possible to analyze scripts
outside of the txscript module.  Consequently, this also deprecates the
function.

The following is a before and after comparison of attempting to extract
from both a typical atomic swap script and a very large non-atomic swap
script:

benchmark                                       old ns/op     new ns/op     delta
BenchmarkExtractAtomicSwapDataPushesLarge-8     61332         44.4          -99.93%
BenchmarkExtractAtomicSwapDataPushes-8          990           260           -73.74%

benchmark                                       old allocs     new allocs     delta
BenchmarkExtractAtomicSwapDataPushesLarge-8     1              0              -100.00%
BenchmarkExtractAtomicSwapDataPushes-8          2              1              -50.00%

benchmark                                       old bytes     new bytes     delta
BenchmarkExtractAtomicSwapDataPushesLarge-8     311299        0             -100.00%
BenchmarkExtractAtomicSwapDataPushes-8          3168          96            -96.97%
2022-05-23 21:46:21 -07:00
Conner Fromknecht
767dae7adf txscript/scriptnum: add maxscriptnum and maxcltvlength 2022-05-23 21:46:21 -07:00
Dave Collins
76fcfbaa1f txscript: Add ExtractAtomicSwapDataPushes benches. 2022-05-23 21:46:21 -07:00
Dave Collins
ef4e561119 txscript: Make canonicalPush accept raw opcode.
This renames the canonicalPush function to isCanonicalPush and converts
it to accept an opcode as a byte and the associate data as a byte slice
instead of the internal parse opcode data struct in order to make it
more flexible for raw script analysis.

It also updates all callers and tests accordingly.
2022-05-23 21:46:21 -07:00
Dave Collins
afb68c4ea2 txscript: Optimize PushedData.
This converts the PushedData function to make use of the new tokenizer
instead of the far less efficient parseScript thereby significantly
optimizing the function.

Also, the comment is modified to explicitly call out the script version
semantics.

The following is a before and after comparison of extracting the data
from a very large script:

benchmark                 old ns/op     new ns/op     delta
BenchmarkPushedData-8     64837         1790          -97.24%

benchmark                 old allocs     new allocs     delta
BenchmarkPushedData-8     7              6              -14.29%

benchmark                 old bytes     new bytes     delta
BenchmarkPushedData-8     312816        1520          -99.51%
2022-05-23 21:46:21 -07:00
Dave Collins
e063742295 txscript: Add benchmark for PushedData. 2022-05-23 21:46:21 -07:00
Dave Collins
e0cafeb4ca txscript: Optimize CalcMultiSigStats.
This converts the CalcMultiSigStats function to make use of the new
extractMultisigScriptDetails function instead of the far less efficient
parseScript thereby significantly optimizing the function.

The tests are also updated accordingly.

The following is a before and after comparison of analyzing a standard
multisig script:

benchmark                    old ns/op    new ns/op    delta
---------------------------------------------------------------
BenchmarkCalcMultiSigStats   972          79.5         -91.82%

benchmark                    old allocs   new allocs   delta
---------------------------------------------------------------
BenchmarkCalcMultiSigStats   1            0            -100.00%

benchmark                    old bytes    new bytes    delta
---------------------------------------------------------------
BenchmarkCalcMultiSigStats   2304         0            -100.00%
2022-05-23 21:46:21 -07:00
Dave Collins
ea7b0e3816 txscript: Remove unused getSigOpCount function. 2022-05-23 21:46:21 -07:00
Dave Collins
e9a777d84e txscript: Remove unused isPushOnly function. 2022-05-23 21:46:21 -07:00
Dave Collins
ded9b8c506 txscript: Convert CalcScriptInfo.
This converts CalcScriptInfo and dependent expectedInputs to make use of
the new script tokenizer as well as several of the other recently added
raw script analysis functions in order to remove the reliance on parsed
opcodes as a step towards ultimately removing them altogether.

It is worth noting that this has the side effect of significantly
optimizing the function as well, however, since it is deprecated, no
benchmarks are provided.
2022-05-23 21:46:21 -07:00
Conner Fromknecht
0edfee87d6 txscript: Remove unused isWitnessScriptHash 2022-05-23 21:46:21 -07:00
Conner Fromknecht
6e0abb61bd txscript: Optimize typeOfScript for witness-script-hash
This concludes the process of converting the typeOfScript function to
use a combination of raw script analysis and the new tokenizer instead
of the far less efficient parsed opcodes.

In particular, it converts the detection of witness script hash scripts
to use raw script analysis and the new tokenizer.

With all of the limbs now using optimized variants, the following is a
before and after comparison of calling GetScriptClass on a large script:

benchmark                     old ns/op     new ns/op     delta
BenchmarkGetScriptClass-8     61515         15.3          -99.98%

benchmark                     old allocs     new allocs     delta
BenchmarkGetScriptClass-8     1              0              -100.00%

benchmark                     old bytes     new bytes     delta
BenchmarkGetScriptClass-8     311299        0             -100.00%
2022-05-23 21:46:21 -07:00
Conner Fromknecht
918278a251 txscript: Optimize typeOfScript witness-pubkey-hash
This continues the process of converting the typeOfScript function to
use a combination of raw script analysis and the new tokenizer instead
of the far less efficient parsed opcodes.

In particular, it converts the detection of witness pubkey hash scripts
to use raw script analysis and the new tokenizer.

The following is a before and after comparison of analyzing a large
script:

benchmark                          old ns/op     new ns/op     delta
BenchmarkIsWitnessPubKeyHash-8     61688         62839         +1.87%

benchmark                          old allocs     new allocs     delta
BenchmarkIsWitnessPubKeyHash-8     1              1              +0.00%

benchmark                          old bytes     new bytes     delta
BenchmarkIsWitnessPubKeyHash-8     311299        311299        +0.00%
2022-05-23 21:46:21 -07:00
Dave Collins
9cfaf49024 txscript: Remove unused isNullData function. 2022-05-23 21:46:21 -07:00
Conner Fromknecht
b5e58f4a8d txscript: Optimize typeOfScript for null data scripts
This continues the process of converting the typeOfScript function to
use a combination of raw script analysis and the tokenizer instead of
parsed opcodes, with the intent of significantly optimizing the function.

In particular, it converts the detection of null data scripts to use raw
script analysis.
2022-05-23 21:46:21 -07:00
Dave Collins
461335437a txscript: Remove unused isPubkeyHash function. 2022-05-23 21:46:21 -07:00
Dave Collins
c52f41a5af txscript: Optimize typeOfScript pay-to-pubkey-hash.
This continues the process of converting the typeOfScript function to
use a combination of raw script analysis and the new tokenizer instead
of the far less efficient parsed opcodes.

In particular, it converts the detection of pay-to-pubkey-hash scripts
to use raw script analysis.
2022-05-23 21:46:21 -07:00
Dave Collins
cb4018a1d4 txscript: Remove unused isPubkey function. 2022-05-23 21:46:21 -07:00
Conner Fromknecht
b1544cd4a9 txscript: Optimize typeOfScript pay-to-pubkey
This continues the process of converting the typeOfScript function to
use a combination of raw script analysis and the new tokenizer instead
of the far less efficient parsed opcodes, with the intent of
significantly optimizing the function.

In particular, it converts the detection of pay-to-pubkey scripts to use
raw script analysis.
2022-05-23 21:46:21 -07:00
Dave Collins
1f25921172 txscript: Remove unused isMultiSig function. 2022-05-23 21:46:21 -07:00
Dave Collins
cfb6bc9399 txscript: Optimize typeOfScript multisig.
This continues the process of converting the typeOfScript function to
use a combination of raw script analysis and the new tokenizer instead
of the far less efficient parsed opcodes.

In particular, for this commit, since the ability to detect multisig
scripts via the new tokenizer is now available, the function is simply
updated to make use of it.
2022-05-23 21:46:21 -07:00
Dave Collins
c85458bc7c txscript: Remove unused isScriptHash function. 2022-05-23 21:46:21 -07:00
Dave Collins
603c2e3b2b txscript: Optimize typeOfScript pay-to-script-hash.
This begins the process of converting the typeOfScript function to use a
combination of raw script analysis and the new tokenizer instead of the
far less efficient parsed opcodes with the intent of significantly
optimizing the function.

In order to ease the review process, each script type will be converted
in a separate commit and the typeOfScript function will be updated such
that the script is only parsed as a fallback for the cases that are not
already converted to more efficient raw script variants.

In particular, for this commit, since the ability to detect
pay-to-script-hash via raw script analysis is now available, the
function is simply updated to make use of it.
2022-05-23 21:46:21 -07:00
Dave Collins
9ac8abd519 txscript: Make typeOfScript accept raw script.
This converts the typeOfScript function to accept a script version and
raw script instead of an array of internal parsed opcodes in order to
make it more flexible for raw script analysis.

Also, this adds a comment to CalcScriptInfo to call out the specific
version semantics and deprecates the function since nothing currently
uses it, and the relevant information can now be obtained by callers
more directly through the use of the new script tokenizer.

All other callers are updated accordingly.
2022-05-23 21:46:21 -07:00
Dave Collins
aa5a1d1648 txscript: Add benchmark for GetScriptClass. 2022-05-23 21:46:21 -07:00
Conner Fromknecht
43e2a2acb2 txscript: Optimize GetWitnessSigOpCount
This converts the GetWitnessSigOpCount function to use a combination of
raw script analysis and the new tokenizer instead of the far less
efficient parseScript, thereby significantly optimizing the function.

In particular, it uses the recently added countSigOpsV0 in precise mode
to avoid calling parseScript.
2022-05-23 21:46:21 -07:00
Conner Fromknecht
83442d60bf txscript: add GetWitnessSigOpCount benchmarks 2022-05-23 21:46:21 -07:00
Dave Collins
b607be0852 txscript: Optimize GetPreciseSigOpCount.
This converts the GetPreciseSigOpCount function to use a combination of
raw script analysis and the new tokenizer instead of the far less
efficient parseScript thereby significantly optimizing the function.

In particular it uses the recently converted isScriptHashScript,
IsPushOnlyScript, and countSigOpsV0 functions along with the recently
added finalOpcodeData functions.

It also modifies the comment to explicitly call out the script version
semantics.

The following is a before and after comparison of analyzing a large
script:

benchmark                           old ns/op     new ns/op     delta
BenchmarkGetPreciseSigOpCount-8     130223        742           -99.43%

benchmark                           old allocs     new allocs     delta
BenchmarkGetPreciseSigOpCount-8     3              0              -100.00%

benchmark                           old bytes     new bytes     delta
BenchmarkGetPreciseSigOpCount-8     623367        0             -100.00%
2022-05-23 21:46:21 -07:00
Dave Collins
8de1bc3ecc txscript: Add benchmark for GetPreciseSigOpCount. 2022-05-23 21:46:21 -07:00
Dave Collins
873339c5bc txscript: Optimize GetSigOpCount.
This converts the GetSigOpCount function to make use of the new
tokenizer instead of the far less efficient parseScript thereby
significantly optimizing the function.

A new function named countSigOpsV0 which accepts the raw script is
introduced to perform the bulk of the work so it can be reused for
precise signature operation counting as well in a later commit.  It
retains the same semantics in terms of counting the number of signature
operations either up to the first parse error or the end of the script
in the case it parses successfully as required by consensus.

Finally, this also deprecates the getSigOpCount function that requires
opcodes in favor of the new function and modifies the comment on
GetSigOpCount to explicitly call out the script version semantics.

The following is a before and after comparison of analyzing a large
script:

benchmark                    old ns/op     new ns/op     delta
BenchmarkGetSigOpCount-8     61051         677           -98.89%

benchmark                    old allocs     new allocs     delta
BenchmarkGetSigOpCount-8     1              0              -100.00%

benchmark                    old bytes     new bytes     delta
BenchmarkGetSigOpCount-8     311299        0             -100.00%
2022-05-23 21:46:21 -07:00
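A sketch of the kind of raw-script counting countSigOpsV0 performs, built on the tokenizer (illustrative, non-precise mode only, not the actual implementation; the import path assumes the lbcd module):

    package sigops

    import "github.com/lbryio/lbcd/txscript"

    // countSigOps tallies signature operations up to the first parse error
    // or the end of the script, charging multisig ops the consensus maximum
    // number of pubkeys.
    func countSigOps(script []byte) int {
        numSigOps := 0
        tokenizer := txscript.MakeScriptTokenizer(0, script)
        for tokenizer.Next() {
            switch tokenizer.Opcode() {
            case txscript.OP_CHECKSIG, txscript.OP_CHECKSIGVERIFY:
                numSigOps++
            case txscript.OP_CHECKMULTISIG, txscript.OP_CHECKMULTISIGVERIFY:
                numSigOps += txscript.MaxPubKeysPerMultiSig
            }
        }
        return numSigOps
    }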
Dave Collins
c7218b2622 txscript: Add benchmark for GetSigOpCount. 2022-05-23 21:46:21 -07:00
Conner Fromknecht
f1ab6cc7cb txscript/engine: Check p2sh push before parsing script
This moves the check for non push-only pay-to-script-hash signature
scripts before the script parsing logic when creating a new engine
instance to avoid the extra overhead in the error case.
2022-05-23 21:46:21 -07:00
Conner Fromknecht
2674b2926b txscript/engine: Use optimized isScriptHashScript 2022-05-23 21:46:21 -07:00
Conner Fromknecht
c0b2b10241 txscript/engine: Use optimized IsPushOnlyScript 2022-05-23 21:46:21 -07:00
Conner Fromknecht
1814b48565 txscript/engine: Optimize new engine push only script
This modifies the check for whether or not a pay-to-script-hash
signature script is a push only script to make use of the new and more
efficient raw script function.

Also, since the script will have already been checked further above when
the ScriptVerifySigPushOnly flag is set, avoid checking it again in
that case.

Backport of af67951b9a66df3aac1bf3d6376af0730287bbf2
2022-05-23 21:46:21 -07:00
Dave Collins
814f0bae89 txscript: Optimize IsUnspendable.
This converts the IsUnspendable function to make use of a combination of
raw script analysis and the new tokenizer instead of the far less
efficient parseScript thereby significantly optimizing the function.

It is important to note that this new implementation intentionally has a
semantic difference from the existing implementation in that it will now
report that scripts larger than the max allowed script size are
unspendable as well.

Finally, the comment is modified to explicitly call out the script
version semantics.

Note: this function was recently optimized in master, so the gains here
are less noticeable than other optimizations.

The following is a before and after comparison of analyzing a large
script:

benchmark                    old ns/op     new ns/op     delta
BenchmarkIsUnspendable-8     656           584           -10.98%

benchmark                    old allocs     new allocs     delta
BenchmarkIsUnspendable-8     1              0              -100.00%

benchmark                    old bytes     new bytes     delta
BenchmarkIsUnspendable-8     1             0             -100.00%
2022-05-23 21:46:21 -07:00
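
Since the raw-script version of IsUnspendable only needs the script length and the first byte, the whole check collapses to a few comparisons. A minimal illustrative sketch, assuming the standard OP_RETURN value (0x6a) and the 10,000-byte txscript.MaxScriptSize limit:

```
package main

import "fmt"

// isUnspendable reports whether a version-0 public key script can provably
// never be spent: it is empty, larger than the maximum allowed script size
// (10,000 bytes), or starts with OP_RETURN (0x6a).  Illustrative only.
func isUnspendable(pkScript []byte) bool {
	const maxScriptSize = 10000
	return len(pkScript) == 0 ||
		len(pkScript) > maxScriptSize ||
		pkScript[0] == 0x6a // OP_RETURN
}

func main() {
	fmt.Println(isUnspendable([]byte{0x6a, 0x04, 0xde, 0xad, 0xbe, 0xef})) // true
	fmt.Println(isUnspendable([]byte{0x51}))                               // false (OP_1)
}
```
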
Dave Collins
90e7a42585 txscript: Add benchmark for IsUnspendable. 2022-05-23 21:46:21 -07:00
Conner Fromknecht
b4f144ad8c txscript: Optimize IsNullData
This converts the IsNullData function to analyze the raw script instead
of using the far less efficient parseScript, thereby significantly
optimizing the function.

The following is a before and after comparison of analyzing a large
script:

benchmark                       old ns/op     new ns/op     delta
BenchmarkIsNullDataScript-8     62495         2.65          -100.00%

benchmark                       old allocs     new allocs     delta
BenchmarkIsNullDataScript-8     1              0              -100.00%

benchmark                       old bytes     new bytes     delta
BenchmarkIsNullDataScript-8     311299        0             -100.00%
2022-05-23 21:46:21 -07:00
Dave Collins
4878db49cf txscript: Add benchmark for IsNullData 2022-05-23 21:46:21 -07:00
Conner Fromknecht
44c6be3d4e txscript: Optimize IsPayToWitnessScriptHash
This converts the IsPayToWitnessScriptHash function to analyze the raw
script instead of using the far less efficient parseScript, thereby
significantly optimizing the function.

In order to accomplish this, it introduces two new functions. The first
one is named extractWitnessScriptHash and works with the raw script bytes
to simultaneously determine if the script is a p2wsh script, and in the
case that it is, extract and return the hash. The second new function is
named isWitnessScriptHashScript and is defined in terms of the former.

The extract function approach was chosen because it is common for
callers to want to only extract relevant details from a script if the
script is of the specific type. Extracting those details requires
performing the exact same checks to ensure the script is of the correct
type, so it is more efficient to combine the two into one and define the
type determination in terms of the result, so long as the extraction
does not require allocations.

Finally, this also deprecates the isWitnessScriptHash function that
requires opcodes in favor of the new functions and modifies the comment
on IsPayToWitnessScriptHash to call out the script version semantics.

The following is a before and after comparison of executing
IsPayToWitnessScriptHash on a large script:

benchmark                          old ns/op     new ns/op     delta
BenchmarkIsWitnessScriptHash-8     62774         0.63          -100.00%

benchmark                          old allocs     new allocs     delta
BenchmarkIsWitnessScriptHash-8     1              0              -100.00%

benchmark                          old bytes     new bytes     delta
BenchmarkIsWitnessScriptHash-8     311299        0             -100.00%
2022-05-23 21:46:21 -07:00
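
The extract-then-test pattern described above is easy to see on the p2wsh form, which is exactly 34 bytes: OP_0, OP_DATA_32, then the 32-byte hash. A minimal sketch of that shape (illustrative only, not the actual txscript code):

```
package main

import "fmt"

// extractWitnessScriptHash returns the 32-byte witness script hash if the
// passed script is a version-0 pay-to-witness-script-hash script
// (OP_0 OP_DATA_32 <32-byte hash>), or nil otherwise.  Because the result is
// a subslice of the input, the check allocates nothing.
func extractWitnessScriptHash(script []byte) []byte {
	if len(script) == 34 && script[0] == 0x00 && script[1] == 0x20 {
		return script[2:34]
	}
	return nil
}

// isWitnessScriptHashScript is defined in terms of the extractor, which is
// the pattern the commit message describes.
func isWitnessScriptHashScript(script []byte) bool {
	return extractWitnessScriptHash(script) != nil
}

func main() {
	p2wsh := append([]byte{0x00, 0x20}, make([]byte, 32)...)
	fmt.Println(isWitnessScriptHashScript(p2wsh))        // true
	fmt.Println(isWitnessScriptHashScript([]byte{0x51})) // false
}
```
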
Conner Fromknecht
b3dd941a77 txscript: Add benchmark for IsPayToWitnessScriptHash 2022-05-23 21:46:21 -07:00
Conner Fromknecht
4d8edfe6d9 txscript: Optimize IsPayToWitnessPubKeyHash
This converts the IsPayToWitnessPubKeyHash function to analyze the raw
script instead of the far less efficient parseScript, thereby
significantly optimizing the function.

In order to accomplish this, it introduces two new functions. The first
one is named extractWitnessPubKeyHash and works with the raw script
bytes to simultaneously determine if the script is a p2wkh, and in case
it is, extract and return the hash. The second new function is named
isWitnessPubKeyHashScript, which is defined in terms of the former.

The extract function approach was chosen because it is common for
callers to want to only extract relevant details from the script if the
script is of the specific type. Extracting those details requires the
exact same checks to ensure the script is of the correct type, so it is
more efficient to combine the two and define the type determination in
terms of the result so long as the extraction does not require
allocations.

Finally, this deprecates the isWitnessPubKeyHash function that requires
opcodes in favor of the new functions and modifies the comment on
IsPayToWitnessPubKeyHash to explicitly call out the script version
semantics.

The following is a before and after comparison of executing
IsPayToWitnessPubKeyHash on a large script:

benchmark                          old ns/op     new ns/op     delta
BenchmarkIsWitnessPubKeyHash-8     68927         0.53          -100.00%

benchmark                          old allocs     new allocs     delta
BenchmarkIsWitnessPubKeyHash-8     1              0              -100.00%

benchmark                          old bytes     new bytes     delta
BenchmarkIsWitnessPubKeyHash-8     311299        0             -100.00%
2022-05-23 21:46:21 -07:00
Conner Fromknecht
38ade2c48f txscript: Add benchmark IsPayToWitnessPubkeyHash 2022-05-23 21:46:21 -07:00
Dave Collins
02ddaf29fd txscript: Optimize IsPushOnlyScript.
This converts the IsPushOnlyScript function to make use of the new
tokenizer instead of the far less efficient parseScript thereby
significantly optimizing the function.

It also deprecates the isPushOnly function that requires opcodes in
favor of the new function and modifies the comment on IsPushOnlyScript
to explicitly call out the script version semantics.

The following is a before and after comparison of analyzing a large
script:

benchmark                       old ns/op     new ns/op     delta
BenchmarkIsPushOnlyScript-8     62412         622           -99.00%

benchmark                       old allocs     new allocs     delta
BenchmarkIsPushOnlyScript-8     1              0              -100.00%

benchmark                       old bytes     new bytes     delta
BenchmarkIsPushOnlyScript-8     311299        0             -100.00%
2022-05-23 21:46:21 -07:00
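
A push-only check with the tokenizer is just a loop that rejects any opcode above OP_16. The sketch below assumes the exported tokenizer API matches upstream btcd/dcrd (MakeScriptTokenizer, Next, Opcode, Err) and uses the lbcd import path; both are assumptions here rather than verified lbcd specifics.

```
package main

import (
	"fmt"

	"github.com/lbryio/lbcd/txscript"
)

// isPushOnly reports whether every opcode in the script is a data push
// (anything up to OP_16), returning false if the script fails to parse.
func isPushOnly(script []byte) bool {
	const scriptVersion = 0
	tokenizer := txscript.MakeScriptTokenizer(scriptVersion, script)
	for tokenizer.Next() {
		if tokenizer.Opcode() > txscript.OP_16 {
			return false
		}
	}
	return tokenizer.Err() == nil
}

func main() {
	fmt.Println(isPushOnly([]byte{0x00, 0x51})) // OP_0 OP_1: true
	fmt.Println(isPushOnly([]byte{0x51, 0x76})) // OP_1 OP_DUP: false
}
```
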
Dave Collins
2b5edd2b5e txscript: Add benchmark for IsPushOnlyScript. 2022-05-23 21:46:21 -07:00
Dave Collins
6be04a8e43 txscript: Optimize IsMultisigSigScript.
This converts the IsMultisigSigScript function to analyze the raw script
and make use of the new tokenizer instead of the far less efficient
parseScript thereby significantly optimizing the function.

In order to accomplish this, it first rejects scripts that can't
possibly fit the bill due to the final byte of what would be the redeem
script not being the appropriate opcode or the overall script not having
enough bytes.  Then, it uses a new function that is introduced named
finalOpcodeData that uses the tokenizer to return any data associated
with the final opcode in the signature script (which will be nil for
non-push opcodes or if the script fails to parse) and analyzes it as if
it were a redeem script when it is non-nil.

It is also worth noting that this new implementation intentionally has
the same semantic difference from the existing implementation as the
updated IsMultisigScript function in regards to allowing zero pubkeys
whereas previously it incorrectly required at least one pubkey.

Finally, the comment is modified to explicitly call out the script
version semantics.

The following is a before and after comparison of analyzing a large
script that is not a multisig script and both a 1-of-2 multisig public
key script (which should be false) and a signature script comprised of a
pay-to-script-hash 1-of-2 multisig redeem script (which should be true):

benchmark                               old ns/op     new ns/op     delta
BenchmarkIsMultisigSigScriptLarge-8     69328         2.93          -100.00%
BenchmarkIsMultisigSigScript-8          2375          146           -93.85%

benchmark                               old allocs     new allocs     delta
BenchmarkIsMultisigSigScriptLarge-8     5              0              -100.00%
BenchmarkIsMultisigSigScript-8          3              0              -100.00%

benchmark                               old bytes     new bytes     delta
BenchmarkIsMultisigSigScriptLarge-8     330035        0             -100.00%
BenchmarkIsMultisigSigScript-8          9472          0             -100.00%
2022-05-23 21:46:21 -07:00
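
The finalOpcodeData helper described above can be approximated with the exported tokenizer: remember the data of the last opcode seen and discard it on a parse failure. The sketch assumes the exported tokenizer API (MakeScriptTokenizer, Next, Data, Err) and the lbcd import path:

```
package main

import (
	"fmt"

	"github.com/lbryio/lbcd/txscript"
)

// finalOpcodeData returns the data associated with the final opcode of a
// signature script, or nil when the final opcode pushes no data or the
// script fails to parse.  For a p2sh spend, that final push is the redeem
// script, which can then be analyzed as a multisig script.
func finalOpcodeData(sigScript []byte) []byte {
	var data []byte
	tokenizer := txscript.MakeScriptTokenizer(0, sigScript)
	for tokenizer.Next() {
		data = tokenizer.Data()
	}
	if tokenizer.Err() != nil {
		return nil
	}
	return data
}

func main() {
	// OP_0 followed by a 3-byte push; the final push is what comes back.
	sigScript := []byte{0x00, 0x03, 0x0a, 0x0b, 0x0c}
	fmt.Printf("%x\n", finalOpcodeData(sigScript)) // 0a0b0c
}
```
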
Dave Collins
3bdeaa46bf txscript: Add benchmarks for IsMultisigSigScript. 2022-05-23 21:46:21 -07:00
Dave Collins
67e2cbe374 txscript: Optimize IsMultisigScript.
This converts the IsMultisigScript function to make use of the new
tokenizer instead of the far less efficient parseScript thereby
significantly optimizing the function.

In order to accomplish this, it introduces two new functions.  The first
one is named extractMultisigScriptDetails and works with the raw script
bytes to simultaneously determine if the script is a multisignature
script, and in the case it is, extract and return the relevant details.
The second new function is named isMultisigScript and is defined in
terms of the former.

The extract function accepts the script version, raw script bytes, and a
flag to determine whether or not the public keys should also be
extracted.  The flag is provided because extracting pubkeys results in
an allocation that the caller might wish to avoid.

The extract function approach was chosen because it is common for
callers to want to only extract relevant details from a script if the
script is of the specific type.  Extracting those details requires
performing the exact same checks to ensure the script is of the correct
type, so it is more efficient to combine the two into one and define the
type determination in terms of the result so long as the extraction does
not require allocations.

It is important to note that this new implementation intentionally has a
semantic difference from the existing implementation in that it will now
correctly identify a multisig script with zero pubkeys whereas
previously it incorrectly required at least one pubkey.  This change is
acceptable because the function only deals with standardness rather than
consensus rules.

Finally, this also deprecates the isMultiSig function that requires
opcodes in favor of the new functions and deprecates the error return on
the exported IsMultisigScript function since it really does not make sense
given the purpose of the function.

The following is a before and after comparison of analyzing both a large
script that is not a multisig script and a 1-of-2 multisig public key
script:

benchmark                            old ns/op     new ns/op     delta
BenchmarkIsMultisigScriptLarge-8     64166         5.52          -99.99%
BenchmarkIsMultisigScript-8          630           59.4          -90.57%

benchmark                            old allocs     new allocs     delta
BenchmarkIsMultisigScriptLarge-8     1              0              -100.00%
BenchmarkIsMultisigScript-8          1              0              -100.00%

benchmark                            old bytes     new bytes     delta
BenchmarkIsMultisigScriptLarge-8     311299        0             -100.00%
BenchmarkIsMultisigScript-8          2304          0             -100.00%
2022-05-23 21:46:21 -07:00
Dave Collins
a6244b516d txscript: Add benchmarks for IsMultisigScript. 2022-05-23 21:46:21 -07:00
Dave Collins
69aefa65e6 txscript: Optimize IsPayToScriptHash.
This converts the IsPayToScriptHash function to analyze the raw script
instead of using the far less efficient parseScript thereby
significantly optimizing the function.

In order to accomplish this, it introduces two new functions.  The first
one is named extractScriptHash and works with the raw script bytes to
simultaneously determine if the script is a p2sh script, and in the case
it is, extract and return the hash.  The second new function is named
isScriptHashScript and is defined in terms of the former.

The extract function approach was chosen because it is common for
callers to want to only extract relevant details from a script if the
script is of the specific type.  Extracting those details requires
performing the exact same checks to ensure the script is of the correct
type, so it is more efficient to combine the two into one and define the
type determination in terms of the result so long as the extraction does
not require allocations.

Finally, this also deprecates the isScriptHash function that requires
opcodes in favor of the new functions and modifies the comment on
IsPayToScriptHash to explicitly call out the script version semantics.

The following is a before and after comparison of analyzing a large
script that is not a p2sh script:

benchmark                        old ns/op     new ns/op     delta
BenchmarkIsPayToScriptHash-8     62393         0.60          -100.00%

benchmark                        old allocs     new allocs     delta
BenchmarkIsPayToScriptHash-8     1              0              -100.00%

benchmark                        old bytes     new bytes     delta
BenchmarkIsPayToScriptHash-8     311299        0             -100.00%
2022-05-23 21:46:21 -07:00
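
For p2sh the same extract-then-test shape applies to a 23-byte script: OP_HASH160, OP_DATA_20, the 20-byte hash, OP_EQUAL. An illustrative sketch (not the actual extractScriptHash implementation):

```
package main

import "fmt"

// extractScriptHash returns the 20-byte script hash if the passed script is
// a pay-to-script-hash script (OP_HASH160 OP_DATA_20 <20-byte hash> OP_EQUAL),
// or nil otherwise.
func extractScriptHash(script []byte) []byte {
	if len(script) == 23 && script[0] == 0xa9 && script[1] == 0x14 &&
		script[22] == 0x87 {
		return script[2:22]
	}
	return nil
}

// isScriptHashScript is defined in terms of the extractor.
func isScriptHashScript(script []byte) bool {
	return extractScriptHash(script) != nil
}

func main() {
	p2sh := append([]byte{0xa9, 0x14}, make([]byte, 20)...)
	p2sh = append(p2sh, 0x87)
	fmt.Println(isScriptHashScript(p2sh)) // true
}
```
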
Dave Collins
e7228a2e5f txscript: Add benchmark for IsPayToScriptHash. 2022-05-23 21:46:21 -07:00
Conner Fromknecht
eb03d84098 txscript: Optimize IsPayToPubKeyHash
This converts the IsPayToPubKeyHash function to analyze the raw script
instead of using the far less efficient parseScript, thereby
significantly optimizing the function.

In order to accomplish this, it introduces two new functions.  The first
one is named extractPubKeyHash and works with the raw script bytes
to simultaneously determine if the script is a pay-to-pubkey-hash script,
and in the case it is, extract and return the hash.  The second new
function is named isPubKeyHashScript and is defined in terms of the
former.

The extract function approach was chosen because it is common for
callers to want to only extract relevant details from a script if the
script is of the specific type.  Extracting those details requires
performing the exact same checks to ensure the script is of the correct
type, so it is more efficient to combine the two into one and define the
type determination in terms of the result so long as the extraction does
not require allocations.

The following is a before and after comparison of analyzing a large
script:

benchmark                         old ns/op     new ns/op     delta
BenchmarkIsPubKeyHashScript-8     62228         0.45          -100.00%

benchmark                         old allocs     new allocs     delta
BenchmarkIsPubKeyHashScript-8     1              0              -100.00%

benchmark                         old bytes     new bytes     delta
BenchmarkIsPubKeyHashScript-8     311299        0             -100.00%
2022-05-23 21:46:21 -07:00
Conner Fromknecht
c4da180d4f txscript: Add benchmark for IsPayToPubKeyHash 2022-05-23 21:46:21 -07:00
Conner Fromknecht
5d18558bc6 txscript: Optimize IsPayToPubKey
This converts the IsPayToPubKey function to analyze the raw script
instead of using the far less efficient parseScript, thereby
significantly optimizing the function.

In order to accomplish this, it introduces four new functions:
extractCompressedPubKey, extractUncompressedPubKey, extractPubKey, and
isPubKeyScript.  The extractPubKey function makes use of
extractCompressedPubKey and extractUncompressedPubKey to combine their
functionality as a convenience and isPubKeyScript is defined in terms of
extractPubKey.

The extractCompressedPubKey function works with the raw script bytes to
simultaneously determine if the script is a pay-to-compressed-pubkey
script, and in the case it is, extract and return the raw compressed
pubkey bytes.

Similarly, the extractUncompressedPubKey function works in the same way except it
determines if the script is a pay-to-uncompressed-pubkey script and
returns the raw uncompressed pubkey bytes in the case it is.

The extract function approach was chosen because it is common for
callers to want to only extract relevant details from a script if the
script is of the specific type.  Extracting those details requires
performing the exact same checks to ensure the script is of the correct
type, so it is more efficient to combine the two into one and define the
type determination in terms of the result so long as the extraction does
not require allocations.

The following is a before and after comparison of analyzing a large
script:

benchmark                     old ns/op     new ns/op     delta
BenchmarkIsPubKeyScript-8     62323         2.97          -100.00%

benchmark                     old allocs     new allocs     delta
BenchmarkIsPubKeyScript-8     1              0              -100.00%

benchmark                     old bytes     new bytes     delta
BenchmarkIsPubKeyScript-8     311299        0             -100.00%
2022-05-23 21:46:21 -07:00
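
The two pubkey forms differ only in length and prefix byte, which is why the commit splits them into separate extractors and combines them in extractPubKey. A rough sketch of that structure, using the standard script encodings (illustrative, not the txscript internals):

```
package main

import "fmt"

// extractCompressedPubKey returns the 33-byte pubkey if the script is a
// pay-to-compressed-pubkey script (OP_DATA_33 <pubkey> OP_CHECKSIG with a
// 0x02/0x03 prefix), or nil otherwise.
func extractCompressedPubKey(script []byte) []byte {
	if len(script) == 35 && script[0] == 0x21 && script[34] == 0xac &&
		(script[1] == 0x02 || script[1] == 0x03) {
		return script[1:34]
	}
	return nil
}

// extractUncompressedPubKey does the same for the 65-byte, 0x04-prefixed form
// (OP_DATA_65 <pubkey> OP_CHECKSIG).
func extractUncompressedPubKey(script []byte) []byte {
	if len(script) == 67 && script[0] == 0x41 && script[66] == 0xac &&
		script[1] == 0x04 {
		return script[1:66]
	}
	return nil
}

// extractPubKey combines the two forms, and isPubKeyScript is defined in
// terms of it, following the pattern the commit message describes.
func extractPubKey(script []byte) []byte {
	if pk := extractCompressedPubKey(script); pk != nil {
		return pk
	}
	return extractUncompressedPubKey(script)
}

func isPubKeyScript(script []byte) bool { return extractPubKey(script) != nil }

func main() {
	p2pk := append([]byte{0x21, 0x02}, make([]byte, 32)...)
	p2pk = append(p2pk, 0xac)
	fmt.Println(isPubKeyScript(p2pk)) // true
}
```
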
Dave Collins
28eaf3492d txscript: Add benchmark for IsPayToPubKey 2022-05-23 21:46:21 -07:00
Dave Collins
8c68575331 txscript: Make asSmallInt accept raw opcode.
This converts the asSmallInt function to accept an opcode as a byte
instead of the internal opcode data struct in order to make it more
flexible for raw script analysis.

It also updates all callers accordingly.
2022-05-23 21:46:21 -07:00
Dave Collins
45de22d457 txscript: Make isSmallInt accept raw opcode.
This converts the isSmallInt function to accept an opcode as a byte
instead of the internal opcode data struct in order to make it more
flexible for raw script analysis.

The comment is modified to explicitly call out the script version
semantics.

Finally, it updates all callers accordingly.
2022-05-23 21:46:21 -07:00
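
On raw opcodes the small-integer helpers reduce to simple byte arithmetic. A tiny sketch under that reading (names mirror the commit; the real functions live inside txscript):

```
package main

import "fmt"

// isSmallInt reports whether the raw opcode is OP_0 or OP_1 through OP_16.
func isSmallInt(op byte) bool {
	return op == 0x00 || (op >= 0x51 && op <= 0x60) // OP_0, OP_1..OP_16
}

// asSmallInt converts such an opcode to its integer value.
func asSmallInt(op byte) int {
	if op == 0x00 {
		return 0
	}
	return int(op) - 0x50 // OP_1 (0x51) -> 1, ..., OP_16 (0x60) -> 16
}

func main() {
	fmt.Println(isSmallInt(0x52), asSmallInt(0x52)) // true 2
	fmt.Println(isSmallInt(0x76))                   // false (OP_DUP)
}
```
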
Conner Fromknecht
a02b71bcf9 txscript/reference_test: Convert sighash calc test
This converts the tests for calculating signature hashes to use the
exported function which handles the raw script versus the now deprecated
variant requiring parsed opcodes.

Backport of 06f769ef72e6042e7f2b5ff1c512ef1371d615e5
2022-05-23 21:46:21 -07:00
Dave Collins
18aa1a59df txscript: Optimize CalcSignatureHash.
This modifies the CalcSignatureHash function to make use of the new
signature hash calculation function that accepts raw scripts without
needing to first parse them.  Consequently, it also doubles as a slight
optimization to the execution time and a significant reduction in the
number of allocations.

In order to convert the CalcSignatureHash function and keep the same
semantics, a new function named checkScriptParses is introduced which
will quickly determine if a script can be fully parsed without failure
and return the parse failure in the case it can't.

The following is a before and after comparison of analyzing a large
multiple input transaction:

benchmark                  old ns/op     new ns/op     delta
BenchmarkCalcSigHash-8     3627895       3619477       -0.23%

benchmark                  old allocs     new allocs     delta
BenchmarkCalcSigHash-8     1335           801            -40.00%

benchmark                  old bytes     new bytes     delta
BenchmarkCalcSigHash-8     1373812       1293354       -5.86%
2022-05-23 21:46:21 -07:00
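
checkScriptParses, as described, just runs the tokenizer to completion and reports its error. A sketch assuming the exported tokenizer API and the lbcd import path:

```
package main

import (
	"fmt"

	"github.com/lbryio/lbcd/txscript"
)

// checkScriptParses returns the parse error encountered while walking the
// script, or nil if the whole script parses.
func checkScriptParses(scriptVersion uint16, script []byte) error {
	tokenizer := txscript.MakeScriptTokenizer(scriptVersion, script)
	for tokenizer.Next() {
		// Nothing to do: Next() stops at the first malformed opcode.
	}
	return tokenizer.Err()
}

func main() {
	fmt.Println(checkScriptParses(0, []byte{0x51, 0xac}))  // <nil>
	fmt.Println(checkScriptParses(0, []byte{0x4c}) != nil) // true (truncated push)
}
```
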
Conner Fromknecht
07c1a9343d txscript: Introduce raw script sighash calc func.
This introduces a new function named calcSignatureHashRaw which accepts
the raw script bytes to calculate the script hash versus requiring the
parsed opcode only to unparse them later in order to make it more
flexible for working with raw scripts.

Since there are several places in the rest of the code that currently
only have access to the parsed opcodes, this modifies the existing
calcSignatureHash to first unparse the script before calling the new
function.

Backport of decred/dcrd:f306a72a16eaabfb7054a26f9d9f850b87b00279
2022-05-23 21:46:21 -07:00
Dave Collins
ce08988514 txscript: Optimize script disasm.
This converts the DisasmString function to make use of the new
zero-allocation script tokenizer instead of the far less efficient
parseScript thereby significantly optimizing the function.

In order to facilitate this, the opcode disassembly functionality is
split into a separate function called disasmOpcode that accepts the
opcode struct and data independently as opposed to requiring a parsed
opcode.  The new function also accepts a pointer to a string builder so
the disassembly can be built more efficiently.

While here, the comment is modified to explicitly call out the script
version semantics.

The following is a before and after comparison of a large script:

benchmark                   old ns/op     new ns/op     delta
BenchmarkDisasmString-8     102902        40124         -61.01%

benchmark                   old allocs     new allocs     delta
BenchmarkDisasmString-8     46             51             +10.87%

benchmark                   old bytes     new bytes     delta
BenchmarkDisasmString-8     389324        130552        -66.47%
2022-05-23 21:46:20 -07:00
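
The notable design point is threading a single *strings.Builder through the disassembly so each opcode appends into one growing buffer instead of allocating intermediate strings. A rough illustration of that shape; the opcode names and formatting here are placeholders, not the real disassembler:

```
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// disasmOpcode appends one opcode (and its data, if any) to the shared
// builder.  Passing *strings.Builder lets the caller build the whole
// disassembly with a single growing buffer.
func disasmOpcode(buf *strings.Builder, name string, data []byte) {
	if buf.Len() != 0 {
		buf.WriteByte(' ')
	}
	buf.WriteString(name)
	if len(data) > 0 {
		buf.WriteByte(' ')
		buf.WriteString(hex.EncodeToString(data))
	}
}

func main() {
	var buf strings.Builder
	disasmOpcode(&buf, "OP_DUP", nil)
	disasmOpcode(&buf, "OP_HASH160", nil)
	disasmOpcode(&buf, "OP_DATA_20", make([]byte, 20))
	disasmOpcode(&buf, "OP_EQUALVERIFY", nil)
	disasmOpcode(&buf, "OP_CHECKSIG", nil)
	fmt.Println(buf.String())
}
```
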
Dave Collins
94bb41664b txscript: Add benchmark for DisasmString. 2022-05-23 21:46:20 -07:00
Dave Collins
ac002d6422 txscript: Introduce zero-alloc script tokenizer.
This implements an efficient and zero-allocation script tokenizer that
is exported to both provide a new capability to tokenize scripts to
external consumers of the API as well as to serve as a base for
refactoring the existing highly inefficient internal code.

It is important to note that this tokenizer is intended to be used in
consensus critical code in the future, so it must exactly follow the
existing semantics.

The current script parsing mechanism used throughout the txscript module
is to fully tokenize the scripts into an array of internal parsed
opcodes which are then examined and passed around in order to implement
virtually everything related to scripts.

While that approach does simplify the analysis of certain scripts and
thus provide some nice properties in that regard, it is both extremely
inefficient in many cases, and makes it impossible for external
consumers of the API to implement any form of custom script analysis
without manually implementing a bunch of error-prone tokenizing code or,
alternatively, the script engine exposing internal structures.

For example, as shown by profiling the total memory allocations of an
initial sync, the existing script parsing code allocates a total of
around 295.12GB, which equates to around 50% of all allocations
performed.  The zero-alloc tokenizer this introduces will allow that to
be reduced to virtually zero.

The following is a before and after comparison of tokenizing a large
script with a high opcode count using the existing code versus the
tokenizer this introduces for both speed and memory allocations:

benchmark                    old ns/op     new ns/op     delta
BenchmarkScriptParsing-8     63464         677           -98.93%

benchmark                    old allocs     new allocs     delta
BenchmarkScriptParsing-8     1              0              -100.00%

benchmark                    old bytes     new bytes     delta
BenchmarkScriptParsing-8     311299        0             -100.00%

The following is an overview of the changes:

- Introduce new error code ErrUnsupportedScriptVersion
- Implement zero-allocation script tokenizer
- Add a full suite of tests to ensure the tokenizer works as intended
  and follows the required consensus semantics
- Add an example of using the new tokenizer to count the number of
  opcodes in a script
- Update README.md to include the new example
- Update script parsing benchmark to use the new tokenizer
2022-05-23 21:46:20 -07:00
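
The README example this commit mentions counts opcodes with the new tokenizer; a minimal version of that kind of usage is below. It assumes the exported API matches upstream btcd/dcrd (MakeScriptTokenizer, Next, Err) and the lbcd module path:

```
package main

import (
	"encoding/hex"
	"fmt"
	"log"

	"github.com/lbryio/lbcd/txscript"
)

func main() {
	// A standard P2PKH script: OP_DUP OP_HASH160 <20-byte hash> OP_EQUALVERIFY OP_CHECKSIG.
	script, err := hex.DecodeString("76a914128004ff2fcaf13b2b91eb654b1dc2b674f7ec6188ac")
	if err != nil {
		log.Fatal(err)
	}

	// Walk the script without allocating: the tokenizer yields one opcode
	// (and its associated data, if any) per iteration.
	const scriptVersion = 0
	count := 0
	tokenizer := txscript.MakeScriptTokenizer(scriptVersion, script)
	for tokenizer.Next() {
		count++
	}
	if err := tokenizer.Err(); err != nil {
		log.Fatalf("script failed to parse: %v", err)
	}
	fmt.Println("opcode count:", count)
}
```
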
Dave Collins
fc5b1a817c txscript: Add benchmark for script parsing. 2022-05-23 21:46:20 -07:00
Conner Fromknecht
b2784102f4 txscript: Add benchmark for CalcWitnessSigHash 2022-05-23 21:46:20 -07:00
Dave Collins
42f4b4025c txscript: Add benchmark for CalcSignatureHash 2022-05-23 21:46:20 -07:00
3nprob
cc7327c194 rpcclient: Add retry with backoffs to HTTP POST requests
Adds behavior similar to the retries of persistent RPC connections
to HTTP request.

* Initial backoff: 500ms
* Linear increase
* Max retries: 10

Room for future improvement:
* Make configurable
* Add jitter
* Tests for retry behavior
2021-11-16 09:08:07 -05:00
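
A generic sketch of the retry policy listed above (500ms initial backoff, linear increase, at most 10 retries); the request construction and the throwaway test server are simplifications, so this is not the rpcclient code itself:

```
package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
	"net/http/httptest"
	"time"
)

// postWithRetry sends an HTTP POST and retries failures with a linearly
// increasing backoff: 500ms, 1s, 1.5s, ... for at most maxRetries retries.
func postWithRetry(client *http.Client, url string, body []byte) (*http.Response, error) {
	const (
		maxRetries     = 10
		initialBackoff = 500 * time.Millisecond
	)
	var lastErr error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		if attempt > 0 {
			// Linear increase: retry 1 waits 500ms, retry 2 waits 1s, ...
			time.Sleep(time.Duration(attempt) * initialBackoff)
		}
		resp, err := client.Post(url, "application/json", bytes.NewReader(body))
		if err == nil {
			return resp, nil
		}
		lastErr = err
	}
	return nil, fmt.Errorf("giving up after %d retries: %w", maxRetries, lastErr)
}

func main() {
	// Throwaway local server so the example completes without retrying.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		fmt.Fprintln(w, `{"result":1}`)
	}))
	defer srv.Close()

	resp, err := postWithRetry(srv.Client(), srv.URL, []byte(`{"method":"getblockcount"}`))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // 200 OK
}
```
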
Aarush Bhat
65e986844e Update connmanager_test.go 2021-11-10 07:59:03 -05:00
naveen
31791ba4dc Included permissions for GitHub action
The default GitHub Actions token permission is write, which is not
required for this action.
2021-10-26 10:00:04 -04:00
pengyonghui
c56a053fdf fix typos 2021-10-26 09:56:57 -04:00
pengyonghui
d590f3f77d fix typo 2021-10-26 09:56:57 -04:00
Jonathan Chappelow
a148fa797a addrmgr: make KnownAddress methods thread-safe
This gives KnownAddress a sync.RWMutex so the exported methods may
safely access the na (*wire.NetAddress) and lastattempt fields.
The AddrManager is updated to lock the new KnownAddress mutex before
assigning to na or lastattempt.
The other KnownAddress fields are only accessed by AddrManager, using
its own Mutex for synchronization.
2021-10-26 09:55:49 -04:00
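
A condensed sketch of the locking pattern: the struct carries its own sync.RWMutex so exported read-only accessors can run concurrently with the address manager's updates. Field and method names are simplified stand-ins for the real addrmgr types:

```
package main

import (
	"fmt"
	"sync"
	"time"
)

// knownAddress guards the fields that both the address manager and outside
// callers touch with its own RWMutex, as described in the commit message.
type knownAddress struct {
	mtx         sync.RWMutex
	netAddress  string // stand-in for *wire.NetAddress
	lastAttempt time.Time
}

// LastAttempt is an exported, read-only accessor and therefore only needs a
// read lock.
func (ka *knownAddress) LastAttempt() time.Time {
	ka.mtx.RLock()
	defer ka.mtx.RUnlock()
	return ka.lastAttempt
}

// markAttempt is what the address manager would call; it takes the write
// lock before assigning to the guarded fields.
func (ka *knownAddress) markAttempt(when time.Time) {
	ka.mtx.Lock()
	ka.lastAttempt = when
	ka.mtx.Unlock()
}

func main() {
	ka := &knownAddress{netAddress: "127.0.0.1:9246"}
	ka.markAttempt(time.Now())
	fmt.Println(ka.netAddress, ka.LastAttempt().IsZero()) // ... false
}
```
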
Olaoluwa Osuntokun
e3449998be
Merge pull request #1752 from Roasbeef/config-disable-stall-handler
peer+server: add new config option to optionally disable stall detection
2021-10-05 11:44:31 -07:00
Olaoluwa Osuntokun
e98a1a1b4c
peer+server: add new config option to optionally disable stall detection
In this commit, we add a new config option that allows one to start
`btcd` in an operating mode that disables the stall detection. This can
be useful in simnet/regtest integration test settings where it's
important that `btcd` holds on to its possibly sole connection to the
only other node in the test harness.

A new config flag has been added to gate this behavior, which is off by
default.
2021-10-01 14:55:50 -07:00
naveen
4caf037c52 Upgraded the docker version to 1.16
With the changes in https://github.com/btcsuite/btcd/pull/1753/ merged,
the docker image also has to be upgraded.
2021-09-17 11:17:29 -04:00
Olaoluwa Osuntokun
bca4298ada
Merge pull request #1753 from Roasbeef/bump-go-version
build: bump min Go version to 1.16.8 add Go 1.17.1
2021-09-16 14:32:45 -07:00
eugene
f8e6854197 mempool: introduce GetDustThreshold to export dust limit calculation
This commit modifies no behavior and would allow other projects to
retrieve the dust limit for a particular output type before the
amount of the output is known. This is particularly useful in the
Lightning Network for channel negotiation.
2021-09-16 15:17:17 -04:00
Olaoluwa Osuntokun
7ae5b74dee
build: bump min Go version to 1.16.8 add Go 1.17.1 2021-09-15 18:19:34 -07:00
Marius van der Wijden
5e6736aad5 btcec: added testcase for point at infinity 2021-09-13 15:59:28 -04:00
Marius van der Wijden
73f7eac903 btcec: check if recovered pk is at point of infinity 2021-09-13 15:59:28 -04:00
JeremyRand
3e2d8464f1
rpcclient: Export symbols needed for custom commands (#1457)
* rpcclient: Export sendCmd and response

This facilitates using custom commands with rpcclient.

See https://github.com/btcsuite/btcd/issues/1083

* rpcclient: Export receiveFuture

This facilitates using custom commands with rpcclient.

See https://github.com/btcsuite/btcd/issues/1083

* rpcclient: Add customcommand example

* rpcclient: remove "Namecoin" from customcommand readme heading
2021-09-02 08:39:55 +02:00
John C. Vernaleo
f9d72f05a4 Switch irc to libera.chat 2021-08-31 07:50:29 -04:00
eugene
f5a1fb9965 mempool: export isDust for use in other projects
This changes isDust to IsDust so other golang projects (btcwallet
or lnd) can use the precise dust calculation used by btcd.
2021-08-03 09:34:49 -04:00
Calvin Kim
b3e6bd6161 rpcserverhelp: Remove extra period for gettxout--synopsis 2021-07-27 10:27:50 -04:00
Anirudha Bose
86a17263b0
Merge pull request #1729 from gnasr/fix-psbtopts-feerate-type
btcjson: Update WalletCreateFundedPsbtOpts.FeeRate type
2021-06-25 21:49:46 +02:00
Gabriel Nasr
505915dc3f btcjson: Update WalletCreateFundedPsbtOpts.FeeRate from *int64 to *float64 2021-06-25 15:23:44 -03:00
Anirudha Bose
63438c6d36 Update release date for v0.22.0-beta in CHANGES file 2021-06-01 13:16:51 -04:00
John C. Vernaleo
aaf19b26f3 btcd: bump version to v0.22.0-beta 2021-06-01 09:36:33 -04:00
Anirudha Bose
418f9204f4 Update CHANGES file for 0.22.0 release 2021-05-26 09:54:22 -04:00
Olaoluwa Osuntokun
ee5896bad5 mempool: add additional test case for inherited RBF replacement
In this commit, we add an additional test case for inherited RBF
replacement. This test case asserts that if a parent is marked as being
replaceable, but the child isn't, then the child can still be replaced
as according to BIP 125 it should _inherit_ the replaceability of its
parent.

The addition of this test case was prompted by the recently discovered
Bitcoin Core "CVE" [1]. It turns out that bitcoind doesn't properly
implement BIP 125. Namely it fails to allow a child to "inherit"
replaceability if its parent is also replaceable. Our implementation
makes this trait rather explicit due to its recursive implementation.
Kudos to the original implementer @wpaulino for getting this correct.

[1]: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-May/018893.html.
2021-05-13 10:15:27 -04:00
Oliver Gugger
7b6c2b3423 chaincfg: fix deployment bit numbers
On signet all previous soft forks and also taproot are always activated,
meaning the version is always 0x20000000 for all blocks. To make sure
they activate properly in `btcd` we therefore need to use the correct
bit to mask the version.
This means that on any custom signet there would need to be 2016 blocks
mined before SegWit or Taproot can be used.
2021-05-11 15:55:06 -04:00
John C. Vernaleo
0ec4bdc1b8 Don't reference the readme that we don't produce 2021-05-06 18:51:45 -04:00
Olaoluwa Osuntokun
ce697fe7e8
Merge pull request #1716 from halseth/witness-commitment-rpctest
rpctest: add witness commitment when calling CreateBlock
2021-04-29 15:55:35 -07:00
Olaoluwa Osuntokun
7eba688b65
Merge pull request #1692 from guggero/signet
wire+chaincfg: add signet params
2021-04-26 11:01:13 -07:00
Johan T. Halseth
37a6e8485b
rpctest: add witness commitment when calling CreateBlock
If we tried to include transactions having witnesses, the block would be
invalid since the witness commitment was not added.
2021-04-26 13:53:53 +02:00
Johan T. Halseth
f0f4784c1c
mining: extract witness commitment add into method 2021-04-26 13:53:22 +02:00
Oliver Gugger
7d1ab0b4d7
btcctl: add signet param
This commit adds the --signet command line flag to the btcctl utility.
2021-04-22 13:10:45 +02:00
Oliver Gugger
8a62cf0ef5
rpcserver: add taproot deployment to getblockchaininfo 2021-04-22 13:10:45 +02:00
Oliver Gugger
3eac153437
config+params: add signet config option
This commit adds the --signet command line flag (or signet config
option) for starting btcd in signet mode.
2021-04-22 13:10:45 +02:00
Oliver Gugger
73ecb5997b
wire+chaincfg: add signet params
This commit adds all necessary chain parameters for connecting to the
public signet network.
Reference: https://github.com/bitcoin/bitcoin/pull/18267
2021-04-22 13:10:44 +02:00
Aurèle Oulès
2d7825cf70 btcjson: Updated TxRawResult.Version from int32 to uint32 2021-04-13 15:21:09 -04:00
Jake Sylvestre
540786fda6 rpcclient: fix documentation typo 2021-04-13 09:09:20 -04:00
Olaoluwa Osuntokun
36a96f6a00
Merge pull request #1704 from wpaulino/update-btcutil
build: update btcutil dependency
2021-03-31 18:33:23 -07:00
Wilmer Paulino
f133593b93
build: update btcutil dependency 2021-03-29 16:59:44 -07:00
Gustavo Chain
f86ae60936 addrmgr: Use RLock/RUnlock when possible 2021-03-16 13:24:10 -04:00
Olaoluwa Osuntokun
01c6a6fe9b
Merge pull request #1698 from wpaulino/external-peer-testing
peer: allow external testing of peer.Peer
2021-03-12 16:28:45 -08:00
Wilmer Paulino
fdb479f121
peer: allow external testing of peer.Peer
The previous use of allowSelfConns prevented this, as users aren't able
to invoke peer.TstAllowSelfConns themselves because it is defined in a
test file and therefore not exported at the library level, leading to a
"disconnecting peer connected to self" error upon establishing a mock
connection between two peers. By including the option at the config
level instead (false by default, prevents connections to self) we enable
users of the peer library to properly test the behavior of the peer.Peer
struct externally.
2021-03-11 18:24:13 -08:00
Gustavo Chain
556620fea6 rpcserver: Fix Error message returned by processRequest
When processRequest can't find a rpc command, standardCmdResult returns
a `btcjson.ErrRPCMethodNotFound` but it gets ignored and a
`btcjson.ErrRPCInvalidRequest` is returned instead.

This makes processRequest return the right error message.
2021-03-09 10:34:32 -05:00
Jake Sylvestre
d08785547a docs: update shields 2021-03-05 07:45:19 -05:00
Appelberg-s
dff2198fc5 Fix error message returned by EstimateFee
When you provide an argument to EstimateFee(numblocks uint32) that exceeds the estimateFeeDepth (which is set to 25), you get an error message that says "can only estimate fees for up to 100 blocks from now".  The variable used in the if condition and the variable used for creating the error message should be the same.
2021-02-09 09:54:33 -05:00
Jake Sylvestre
2a1aa5129e Add Batch JSON-RPC support (rpc client & server) 2021-02-09 09:47:46 -05:00
Anirudha Bose
31b66488b4 btcec: validate R and S signature components in RecoverCompact 2021-02-09 09:43:01 -05:00
Olaoluwa Osuntokun
fa683a69dc
Merge pull request #1689 from cfromknecht/hashcache-flake
txscript/hashcache_test: fix flake due to resetting RNG
2021-02-03 19:32:39 -08:00
Conner Fromknecht
5300a19d06
txscript/hashcache_test: call rand.Seed once in init
This resolves the more fundamental flake in the unit tests noted in the
prior commit.

Because multiple unit tests call rand.Seed in parallel, it's possible
they can be executed with the same unix timestamp (in seconds). If the
second call happens between generating the hash cache and checking that
the cache doesn't contain a random txn, the random transaction is in
fact a duplicate of one generated earlier since the RNG state was reset.

To remedy, we initialize rand.Seed once in the init function.
2021-02-02 13:31:47 -08:00
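
The fix amounts to seeding the shared RNG exactly once for the whole test binary. A minimal sketch of the pattern (math/rand's global Seed is used here only to mirror the commit; newer Go releases deprecate it):

```
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// init seeds the shared RNG exactly once, so parallel tests can no longer
// reset it to the same per-second timestamp and regenerate identical values.
func init() {
	rand.Seed(time.Now().UnixNano())
}

func main() {
	fmt.Println(rand.Intn(1000))
}
```
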
Conner Fromknecht
1dd693480c
txscript/hashcache_test: always add inputs during getTxn
TestHashCacheAddContainsHashes flakes fairly regularly when rebasing
PR #1684 with:
    txid <txid> wasn't inserted into cache but was found.

With probability 1/10^2 there will be no inputs on the transaction. This
reduces the entropy in the txid, which I believe is the primary cause of
the flake.
2021-02-02 12:44:22 -08:00
Dan Cline
77fd96753c txscript: add benchmark for IsUnspendable
- create benchmarks to measure allocations
 - add test for benchmark input
 - create a low alloc parseScriptTemplate
 - refactor parsing logic for a single opcode
2021-02-02 09:20:31 -05:00
Steven Kreuzer
7bbd9b0284 btcjson: Update fields in GetBlockChainInfoResult
Update the fields of GetBlockChainInfoResult to reflect the current state of
the RPC returned by other full-node implementations.

 * InitialBlockDownload - Node is in Initial Block Download mode if True.
 * SizeOnDisk - The estimated size of the block and undo files on disk.
2021-01-26 09:52:38 -05:00
Vinayak Borkar
c3ece697da Fixes btcsuite/btcd#1653 2021-01-18 13:51:45 -05:00
Victor Lavaud
e747eb9284 Add support for arm32v7 in Dockerfile 2021-01-18 13:50:30 -05:00
ebiiim
12abc84cb2 fixed broken link 2021-01-18 13:48:02 -05:00
Olaoluwa Osuntokun
6bd4c64a54
Merge pull request #1670 from breez/sendaddrv2
Add support for receiving sendaddrv2 message from a peer
2020-12-07 19:32:08 -08:00
Yaacov Akiba Slama
29c9ff351c Add support for receiving sendaddrv2 message from a peer 2020-12-02 12:10:43 +02:00
10gic
610bb55ae8 rpcclient: add ExtraHeaders in ConnConfig 2020-11-24 14:15:14 -05:00
Iskander Sharipov
0886f1e5c1 simplify s[:] to s where s is a slice
Found using https://go-critic.github.io/overview#unslice-ref
2020-11-20 15:43:12 -05:00
Olaoluwa Osuntokun
e9c7a5ac64
Merge pull request #1659 from guggero/itest-fixes
integration: optimize harness for better itest control, restore bitcoind compatibility
2020-11-13 16:05:16 -08:00
Oliver Gugger
9e8bb3eddb
btcjson+rpcserverhelp: restore bitcoind compatibility
PR #1594 introduced a change that made the order of parameters
relevant if one of them is nil. This makes it harder to be backward
compatible with the same JSON message if an existing parameter in
bitcoind was re-purposed to have a different meaning.
2020-11-12 15:47:51 +01:00
Liran Sharir
9fd26cf795 integration/rpctest: randomizes port in rpctest.New to reduce collisions 2020-11-11 11:37:34 -05:00
Oliver Gugger
65d2b7a18c
integration: allow specifying connection behavior 2020-11-11 14:29:17 +01:00
Oliver Gugger
93cc7f36cf
integration: allow overwriting address generator 2020-11-11 14:24:14 +01:00
Oliver Gugger
9250064837
integration: allow setting custom btcd exe path
To allow using a custom btcd executable, we allow specifying a path to a
file. If the path is empty, the harness will fall back to compiling one
from scratch.
2020-11-11 14:16:08 +01:00
Armando Ochoa
f070f7f2be rpcclient: fix documentation typos 2020-11-04 09:56:02 -05:00
Anirudha Bose
535f25593d rpcclient: implement createwallet with functional options 2020-10-26 14:54:05 -04:00
Anirudha Bose
5e56ca05e1 btcjson: add new JSON-RPC errors and document them 2020-10-26 09:35:46 -04:00
Torkel Rogstad
1d75e0a885 rpcclient: add more wallet commands
Implement backupwallet, dumpwallet, loadwallet and unloadwallet.
2020-10-26 09:34:56 -04:00
David Mazary
6adfc07d1e Unmarshal hashes/second as float in GetMiningInfoResult 2020-10-26 09:33:28 -04:00
Anirudha Bose
6519c04a6f rpcclient: implement gettxoutsetinfo command 2020-10-05 10:03:47 -04:00
Henry Fisher
584c382334 rpc: add signrawtransactionwithwallet interface
Adds interface for issuing a signrawtransactionwithwallet command.
Note that this does not add functionality for the btcd rpc server
itself, it simply assumes that the RPC client has this ability and gives
an API for interacting with the RPC client.

rpc: add signrawtransactionwithwallet interface
2020-10-05 09:56:12 -04:00
Anirudha Bose
0bf42f4476 rpcserver: add txid to getblocktemplate response 2020-10-05 09:55:45 -04:00
Olaoluwa Osuntokun
40ae93587d
Merge pull request #1621 from xplorfin/go1.15
ci: add go 1.15 to tests
2020-10-02 14:52:29 -07:00
Anirudha Bose
e9a51e8dcd rpcclient: implement getwalletinfo command 2020-09-25 12:18:06 -04:00
Friedger Müffke
1340513786 Fix link to using bootstrap.dat 2020-09-23 08:53:11 -04:00
Anirudha Bose
ac3f235eb9 rpcclient: implement getaddressinfo command
Fields such as label, and labelspurpose are not included, since they
are deprecated, and will be removed in Bitcoin Core 0.21.
2020-09-21 09:47:58 -04:00
Elliott Minns
6daaf73544
GetBlockTemplate RPC client implementation (#1629)
* GetBlockTemplate RPC client implementation

* Txid added to the getblocktemplate result

* Omitempty for TxID and improved comment for GetBlockTemplate 'rules' field
2020-09-21 09:42:35 -04:00
Anirudha Bose
f4024160f3 btcjson: add test for null params in searchrawtransactions
Closes PR #1476.
2020-09-21 09:42:09 -04:00
Tristyn
e5521de652 sample-btcd.conf: fix typo 2020-09-17 09:03:52 -04:00
Jake Sylvestre
297c6120bb ci: add go 1.15 to tests 2020-09-17 01:38:18 -04:00
Anirudha Bose
c693bd8bc5 rpcclient: add deriveaddresses RPC command 2020-09-14 10:30:47 -04:00
ipriver
42782bba18 removed unnecessary GOMAXPROCS function calls 2020-09-14 09:57:30 -04:00
Anirudha Bose
ff59bbc14a wire: add proper types for flag field and improve docs
Summary of changes:

- Add a new const TxFlagMarker to indicate the flag prefix byte.
- Add a new TxFlag type to enumerate the flags supported by the
  tx parser.

  This allows us to avoid hardcoded magics, and will make it easier
  to support new flags in the future.
- Improve code comments.

Closes #1598.
2020-09-14 09:50:13 -04:00
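
A minimal sketch of what the two additions in the list above might look like; the well-known segwit marker (0x00) and witness flag (0x01) values are standard, but the exact declarations in the wire package may differ:

```
package main

import "fmt"

// TxFlag enumerates the flag byte values the transaction parser understands.
type TxFlag byte

const (
	// TxFlagMarker is the zero byte that, in place of a varint input count,
	// signals that optional flag bytes follow (BIP 144 segwit encoding).
	TxFlagMarker = 0x00

	// WitnessFlag indicates the transaction carries witness data.
	WitnessFlag TxFlag = 0x01
)

func main() {
	fmt.Printf("marker=%#x witness flag=%#x\n", TxFlagMarker, WitnessFlag)
}
```
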
Andrew Tugarinov
5ae1f21cd9 Added ListSinceBlockMinConfWatchOnly method. 2020-09-14 09:48:40 -04:00
Federico Bond
6f49f1f194 btcjson,rpcclient: add support for PSBT commands to rpcclient 2020-09-14 09:37:27 -04:00
Mikael Lindlof
fff96610aa rpc: Add getnodeaddresses JSON-RPC support
Add NodeAddresses function to rpcserverConnManager
interface for fetching known node addresses.
2020-09-14 09:36:05 -04:00
Olaoluwa Osuntokun
9ef973c282
Merge pull request #1625 from yyforyongyu/fix-notfound-message
btcd+netsync: support witness tx and block in notfound msg
2020-09-08 16:58:19 -07:00
Anirudha Bose
2547246f84 GitHub Actions: Enable Go Race detector and code coverage
This modifies the goclean.sh script to run tests with the
race detector enabled. It also enables code coverage, and
uploads the results to coveralls.io.

Running tests with -race and -cover flags was disabled in
6487ba1 and 6788df7 respectively, due to some limits on
time/goroutines being hit on Travis CI. Since we have
migrated to GitHub Actions, it is desirable to bring them
back.
2020-09-08 10:19:55 -04:00
Mikael Lindlof
eb05726dac Nullable optional JSON-RPC parameters
Fix command marshalling dropping params following params with nil value.

#1591 Allow specifying null parameter value from command line.
2020-09-08 10:01:44 -04:00
Anirudha Bose
bdab8dfe81 chaincfg: Add RegisterHDKeyID func to populate HD key ID pairs
Currently, the only way to register HD version bytes is by initializing
chaincfg.Params struct, and registering it during package init.
RegisterHDKeyID provides a way to populate custom HD version bytes,
without having to create new chaincfg.Params instances. This is useful
for library packages that want to use non-standard version bytes for
serializing extended keys, such as the ones documented in SLIP-0132.

This function is complementary to HDPrivateKeyToPublicKeyID, which is
used to lookup previously registered key IDs.
2020-09-08 09:59:33 -04:00
Anirudha Bose
3b926ef77b btcjson: update ListTransactionsResult for Bitcoin 0.20.0
This only adds new fields as optional, in order to make this change
backwards compatible with older versions of Bitcoin Core.
2020-09-08 09:51:20 -04:00
Calvin Kim
95fea6420c blockchain: Remove unnecessary tx hash 2020-09-08 09:46:04 -04:00
Anirudha Bose
ba3fe57507
rpcclient: support listtransactions RPC with watchonly argument
Co-authored-by: Gert-Jaap Glasbergen <gertjaap@decoscrypto.com>
2020-09-08 09:43:02 -04:00
Hanjun Kim
7cbf95675a btcec: add a comment indicating where curve name taken from
Related with #1565
2020-09-08 09:37:33 -04:00
Hanjun Kim
8facfdd04d btcec: set curve name in CurveParams
Set curve name(secp256k1) in KoblitzCurve.CurveParams

Fixes #1564
2020-09-08 09:37:33 -04:00
yyforyongyu
61634447e7
btcd+netsync: support witness tx and block in notfound msg 2020-09-03 18:53:16 +08:00
Christian Lehmann
23d149cbfb Added symlink to index.md for github readme preview. 2020-09-01 03:35:40 -04:00
Christian Lehmann
355472b0f7 Major rework on documentation to make it compatible to readthedocs.org 2020-09-01 00:48:08 -04:00
Federico Bond
35194e2dac btcjson,wire: fix invalid use of string(x) to convert byte value 2020-08-31 16:01:10 -04:00
Federico Bond
d13e907952 btcd: fix conversion of int to string failing in Go 1.15 2020-08-31 16:01:10 -04:00
Christian Lehmann
90a5c7997c Add Dockerfile to build and run btcd on Docker. 2020-08-31 15:42:42 -04:00
Anirudha Bose
fffe4a909b rpcclient: Implement importmulti JSON-RPC client command 2020-08-31 15:28:48 -04:00
Mikael Lindlof
d2c0123bef Implement signmessagewithprivkey JSON-RPC command
Reuse the Bitcoin message signature header const
also in verifymessage.
2020-08-31 10:12:54 -04:00
Mikael Lindlof
b68c50e33c Add getblockfilter JSON-RPC client command
Add type for second getblockfilter param
2020-08-31 10:02:54 -04:00
Anirudha Bose
7145eef75b rpcserver: add parity with bitcoind for validateaddress
Updated the rpcserver handler for validateaddress JSON-RPC command to
have parity with the bitcoind 0.20.0 interface.

The new fields included are - isscript, iswitness, witness_version, and
witness_program. The scriptPubKey field has been left out since it
requires wallet access.

This update has no impact on the rpcclient.ValidateAddress method,
which uses the btcjson.ValidateAddressWalletResult type for modelling
the response from bitcoind.
2020-08-31 09:58:27 -04:00
wakiyamap
36d4ae08e8 Fix monetary unit 2020-08-31 09:56:19 -04:00
Olaoluwa Osuntokun
7d69fb9ba6 peer: prevent last block height going backwards
This modifies the UpdateLastBlockHeight function to ensure the new
height is greater than the existing height before updating it, in order
to prevent the value from going backwards and to properly match the
intent of the function, which is to report the latest known block height
for the peer.

Without this change, the value will properly start out at the latest
known block height reported by the peer during version negotiation,
however, it will be set to lower values when syncing from the peer due
to requesting old blocks and blindly updating the height.

It also adds a test to ensure proper functionality.

This is a backport of https://github.com/decred/dcrd/pull/1747
2020-08-31 09:47:41 -04:00
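
The guard being described is a monotonic update under the peer's lock. A simplified stand-in (field and method names are placeholders, not the real peer.Peer):

```
package main

import (
	"fmt"
	"sync"
)

// peerStats holds the subset of peer state relevant to the fix.
type peerStats struct {
	mtx             sync.Mutex
	lastBlockHeight int32
}

// UpdateLastBlockHeight only moves the recorded height forward, so stale
// heights learned while serving old block requests can't drag it backwards.
func (p *peerStats) UpdateLastBlockHeight(newHeight int32) {
	p.mtx.Lock()
	defer p.mtx.Unlock()
	if newHeight <= p.lastBlockHeight {
		return
	}
	p.lastBlockHeight = newHeight
}

func main() {
	p := &peerStats{lastBlockHeight: 700000} // from version negotiation
	p.UpdateLastBlockHeight(650000)          // old block announcement: ignored
	p.UpdateLastBlockHeight(700001)          // newer tip: accepted
	fmt.Println(p.lastBlockHeight)           // 700001
}
```
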
Anirudha Bose
efae8e9967 Add rpclient implementation of getdescriptorinfo RPC 2020-08-31 09:41:49 -04:00
Rjected
70a0132485 blockchain: remove unknown block version warning 2020-08-31 02:42:30 -04:00
John C. Vernaleo
56cc42fe07 btcd: bump version to v0.21.0-beta 2020-08-28 08:13:36 -04:00
Anirudha Bose
4527c5671f Update CHANGES file for 0.21.0 release
Also updated changes for 0.20.1, and added a small note about changes
since 0.12.0.
2020-08-27 15:26:14 -04:00
Dan Cline
2a0d6fd0e3 release: remove old scripts and update process doc
- remove prep_release.sh and notes.sample
- update license in release.sh
- add notes for maintainers on the release process
- mention CHANGES file modifications
2020-08-27 15:14:38 -04:00
Dan Cline
4255e1ed7b release: update release script path 2020-08-27 15:14:38 -04:00
Olaoluwa Osuntokun
1db1b6f821
Merge pull request #1609 from guggero/disable-windows-service
config+service_windows: add flag to disable win service
2020-07-30 16:23:43 -07:00
Javed Khan
24db7d7c0c netsync: handle notfound messages from peers
backport from https://github.com/decred/dcrd/pull/2253

When a peer sends a notfound message, remove the hash from requested
map.  Also increase notfound ban score and return early if it
disconnects the peer.
2020-07-28 09:23:35 -04:00
qqjettkgjzhxmwj
69773a7b41 Update json_rpc_api.md
Corrections suggested by @onyb https://github.com/btcsuite/btcd/pull/1608#discussion_r458363077
2020-07-22 17:12:47 -04:00
qqjettkgjzhxmwj
3c56a6bd3a updated docs for getblock-verbosity fixes 2020-07-22 17:12:47 -04:00
Oliver Gugger
c7390232d3
config+service_windows: add flag to disable win service
To run integration tests with btcd on Windows in non-interactive
environments (such as the Travis build with Windows machines), we
need to make sure we can still spawn a child process instead of only a
windows background service.
2020-07-22 12:57:09 +02:00
Anirudha Bose
d28c7167a5 btcec: Avoid panic in fieldVal.SetByteSlice for large inputs
The implementation has been adapted from the dcrec module in dcrd. The
bug was initially fixed in decred/dcrd@3d9cda1 while transitioning to a
constant time algorithm. A large set of test vectors were subsequently
added in decred/dcrd@8c6b52d.

The function signature has been preserved for backwards compatibility.
This means that returning whether the value has overflowed, and the
corresponding test vectors have not been backported.

This fixes #1170 and closes a previous attempt to fix the bug in #1178.
2020-07-13 09:43:36 -04:00
Javed Khan
875b51c9fb peer: knownInventory, sentNonces - use generic lru
While here, also rename and generalize limitMap and apply to
other maps which need to be bounded.
2020-07-08 16:44:04 -04:00
Anirudha Bose
e2d9cf4b55 rpcclient: Add GetTransactionWatchOnly method 2020-06-29 10:15:10 -04:00
Federico Bond
7b2ff5d180 Add getbalances RPC client command 2020-06-29 10:14:04 -04:00
Torkel Rogstad
e4f59022a3 Add fundrawtransaction RPC call 2020-06-15 09:44:04 -04:00
Mikael Lindlof
73d69f09d0 Add getchaintxstats JSON-RPC client command 2020-06-15 09:42:37 -04:00
adiabat
a383a71670 Add blockchain.NewUtxoEntry() to directly create entries for UtxoViewpoint
The current methods to add to a UtxoViewpoint don't allow for a situation where
we have only UTXO data but not a whole transaction.  This commit allows
construction of a UtxoEntry without requiring a full MsgTx.

AddTxOut() and AddTxOuts() both require a whole transaction, including the inputs,
which are only used in order to calculate the txid.  In some situations, such as
with use of the utreexo accumulator, we only have the utxo data but not the
transaction which created it.

For reference, utreexo's initial usage of the blockchain.NewUtxoEntry() function is at
https://github.com/mit-dci/utreexo/pull/135/files#diff-3f7b8f9991ea957f1f4ad9f5a95415f0R96
2020-06-15 09:40:26 -04:00
Mikael Lindlof
b11bf582c5 Improve chain state init efficiency
Remove unnecessary slice of all block indexes and
remove DB iteration over all block indexes that
was used to determine the size of the slice.
2020-06-08 12:53:00 -04:00
Torkel Rogstad
714de3f3c7 rpcclient: serialize nil inputs to empty list 2020-06-08 09:52:46 -04:00
JeremyRand
6d521ff8cd rpcclient: Read first line of cookie instead of trimming space 2020-06-08 09:51:09 -04:00
JeremyRand
e6f163e61e rpcclient: Try user+pass auth before cookie auth 2020-06-08 09:51:09 -04:00
JeremyRand
915788b8e6 rpcclient: Refactor cookie caching 2020-06-08 09:51:09 -04:00
JeremyRand
280845a8a4 rpcclient: Add cookie auth
Based on Hugo Landau's cookie auth implementation for Namecoin's ncdns.

Fixes https://github.com/btcsuite/btcd/issues/1054
2020-06-08 09:51:09 -04:00
Olaoluwa Osuntokun
9f0179fd2c
Merge pull request #1577 from wpaulino/getblock-compat
rpcclient: send legacy GetBlock request for backwards compatibility
2020-05-15 16:24:29 -07:00
Olaoluwa Osuntokun
9a88e1dd33
Merge pull request #1575 from dajohi/clean
build: multiple cleanups
2020-05-15 15:47:01 -07:00
Wilmer Paulino
742935e3a9
rpcclient: send legacy GetBlock request for backwards compatibility
Without this, users of this library wouldn't be able to issue GetBlock
requests to nodes which haven't updated to support the latest request
format, namely the use of a single `int` parameter to denote verbosity
instead of two `bool`s.
2020-05-14 18:05:46 -07:00
Henry
d38279ee74
btcjson: change getblock default verbosity to 1
This change makes btcd's getblock command match bitcoind's. Previously
the default verbosity was 0, which caused errors when using the
rpcclient library to connect to a bitcoind node - getblock would
unmarshall incorrectly since it didn't expect a verbosity=1 result when
it did not specify verbosity.
2020-05-14 17:27:59 -07:00
David Hill
f7399e6157 build: clean linter warnings 2020-05-13 08:58:39 -04:00
David Hill
bc8d63bf15 build: update deps 2020-05-13 08:52:05 -04:00
David Hill
a505b99ba3 build: replace travis-ci with github actions.
test go 1.14
use golangci-lint
2020-05-13 08:52:05 -04:00
Dan Cline
b470eee477 btcctl: add regtest mode to btcctl 2020-05-13 08:02:20 -04:00
Antonin Hildebrand
b298415583 Improve error message about non-active segwit on simnet
I started playing with simnet and was confronted with an error message:

```
[ERR] FNDG: Unable to broadcast funding tx for ChannelPoint(<point>:0):
-22: TX rejected: transaction <tx> has witness data, but segwit isn't active yet
```

I wasn't aware of the activation period, so I got quite puzzled.
Google helped, but I think the message could mention the likely cause.

Now it optionally prints something like:

```
(The threshold for segwit activation is 300 blocks on simnet, current best height is 113)
```
2020-05-13 08:00:49 -04:00
tpkeeper
8512affc59 readme: remove duplicate word 2020-05-06 08:32:44 -04:00
Dan Cline
8b54b0b964 config.go: remove extra quotes 2020-04-14 06:40:20 -05:00
Murray Nesbitt
9f15a7e6af Alphabetize --help output; add missing options to doc.go 2020-04-14 05:09:43 -05:00
Torkel Rogstad
57d44d022e Try both TX serialization formats 2020-03-27 16:59:23 -04:00
Iskander Sharipov
08b8751559 cmd/btcctl: use regexp.MustCompile for constant patterns
Found using https://go-critic.github.io/overview#regexpMust-ref
2020-03-26 09:54:27 -04:00
Ivan Kuznetsov
cfcf4fb762 Implement 'getblockstats' JSON-RPC command 2020-03-25 05:51:42 -04:00
Torkel Rogstad
8b1be46463 Add generatetoaddress and estimatesmartfee RPCs 2020-03-17 09:29:41 -04:00
Jake Sylvestre
a8eadd2ce4 update GetMempoolEntryResult to v0.19.0
https://bitcoincore.org/en/doc/0.19.0/rpc/blockchain/getmempoolentry/
2020-03-16 10:26:15 -04:00
Iskander Sharipov
d9ce6b037f btcjson,rpcclient: use proper Deprecated comment format
This makes godoc and other Go tools understand deprecation notice.

Found using https://go-critic.github.io/overview#deprecatedComment-ref
2020-03-10 11:00:24 -04:00
Jake Sylvestre
c4f39996ac
Refactor GetBlockVerboseTx to reflect correct getblock RPC call… (#1529)
Refactor GetBlockVerboseResult into two separate types: one type for getblock "hash" verbosity=1,
and a second type for getblock "hash" verbosity=2. This is necessary due to how getblock returns
a block's transaction data based on the provided verbosity parameter.

If verbosity=1, then getblock.Tx is an array of a block's transaction ids (txids) as strings.
If verbosity=2, then getblock.Tx is an array of raw transaction data.

Due to differences in how getblock returns data based on the provided verbosity parameter, it's necessary
to have two separate return types based on verbosity. This necessitates a separate unmarshalling function
(represented throughout rpcclient/chain.go as Result.Receive()) to ensure that data is correctly unmarshalled
and returned to the user.
2020-03-09 14:47:11 -04:00
Daniel McNally
fd0921b9b4 btcjson: add RPC_IN_WARMUP error code
This adds an error code for the `RPC_IN_WARMUP` error code defined at
https://github.com/bitcoin/bitcoin/blob/master/src/rpc/protocol.h#L49
which is thrown when bitcoind has started but has not yet finished
verifying recent blocks and being ready for rpc calls.
2020-03-09 13:43:01 -04:00
Jin
96f3808dc9 BUG:dynamicbanscore deadlock 2020-03-09 13:41:13 -04:00
Steven Roose
9e94ccbd0e server: Fix incorrect log message format 2020-03-05 17:00:45 -05:00
Tyler Chambers
1d0bfca5b0 fix error message 2020-03-05 16:51:41 -05:00
John C. Vernaleo
e9f15eda7e
rpcclient: Add net params to Client (#1467)
* rpcclient: replace futures mainnet with params

Adds a chaincfg.Params to the Client

rpcclient: parse config to assign params

* rpcclient: change address commands to Address

 * Change address future struct to contain a network field, so futures
   can return the correct type for Receive
2020-03-05 16:46:29 -05:00
jalavosus
a310aa6e74 All tests pass 2020-03-05 06:48:26 -05:00
jalavosus
57cb8e4b11 Refactor FutureGetBlockVerboseResult into two types: FutureGetBlockVerboseResult, and FutureGetBlockVerboseTxResult.
Due to differences in how getblock returns data based on the provided verbosity parameter, it's necessary
to have two separate return types based on verbosity. This necessitates a separate unmarshalling function
(represented throughout rpcclient/chain.go as Result.Receive()) to ensure that data is correctly unmarshalled
and returned to the user.
2020-03-05 06:48:19 -05:00
jalavosus
468154a052 Refactor GetBlockVerboseResult into two separate types: one type for getblock "hash" verbosity=1,
and a second type for getblock "hash" verbosity=2. This is necessary due to how getblock returns
a block's transaction data based on the provided verbosity parameter.

If verbosity=1, then getblock.Tx is an array of a block's transaction ids (txids) as strings.
If verbosity=2, then getblock.Tx is an array of raw transaction data.
2020-03-05 06:47:38 -05:00
jalavosus
160c388285 Refactor GetBlockCmd type and NewGetBlockCmd() function to follow the bitcoin json RPC verbosity format for getblock,
which uses 0, 1, or 2 as parameters rather than a boolean true or false.
2020-03-05 06:47:38 -05:00
Kulpreet Singh
06e5c43499 Add note about using gencerts when listening on specific interfaces 2020-03-04 10:10:03 -05:00
mohanson
e2c08cc80b docs/json_rpc_api: update go examples 2020-03-04 10:06:41 -05:00
qshuai
ef4cecf42b blockchain/indexers: Start a new line for long code 2020-03-04 09:42:00 -05:00
shuai.qi
46461dc84a btcjson, rpclient: Fix typo 2020-03-04 09:23:11 -05:00
Yash Bhutwala
318c89dfed fix comment of database.Tx to match code 2020-03-04 09:21:27 -05:00
Nisen
0c76fbd26f Fix comment error 2020-03-04 08:40:20 -05:00
George Tankersley
8bbbe98be9 peer: fix small typo 2020-03-04 08:38:26 -05:00
Henry Harder
1639d6c070 release: add missing back tick in build docs 2020-03-04 08:16:36 -05:00
Federico Bond
3eb4739b75 Fix minRelayTxFee name in comment 2020-03-03 15:45:34 -05:00
Jake Sylvestre
eed57cdcf1 go fmt 2020-03-03 15:34:38 -05:00
John C. Vernaleo
c01c98159b
Merge pull request #1537 from jakesyl/patch-1
Remove $GOPATH Caching
2020-03-03 10:08:31 -05:00
Jake
6799104157
Remove $GOPATH Caching
$GOPATH caching has led to flaky tests as per #1503 and #1536. The speedup is marginal and while the false negatives are a headache, false positives are potentially dangerous.
2020-02-10 14:16:11 -05:00
437 changed files with 52293 additions and 14406 deletions

32
.github/workflows/basic-check.yml vendored Normal file

@@ -0,0 +1,32 @@
name: Build and Test
on: [push, pull_request]
jobs:
  build:
    # https://github.blog/changelog/2021-04-20-github-actions-control-permissions-for-github_token/
    permissions:
      contents: read
    name: Go CI
    runs-on: ubuntu-latest
    strategy:
      matrix:
        go: [1.19]
    steps:
      - name: Set up Go
        uses: actions/setup-go@v2
        with:
          go-version: ${{ matrix.go }}
      - name: Check out source
        uses: actions/checkout@v2
      - name: Build
        run: go build ./...
      - name: Test
        run: |
          sh ./goclean.sh
      - name: Send coverage
        uses: shogo82148/actions-goveralls@v1
        with:
          path-to-profile: profile.cov

35
.github/workflows/full-sync-part-1.yml vendored Normal file

@@ -0,0 +1,35 @@
name: Full Sync From 0
on:
  workflow_dispatch:
    inputs:
      note:
        description: 'Note'
        required: false
        default: ''
jobs:
  build:
    name: Go CI
    runs-on: self-hosted
    strategy:
      matrix:
        go: [1.19]
    steps:
      - run: |
          echo "Note ${{ github.event.inputs.note }}!"
      - name: Setup Go
        uses: actions/setup-go@v2
        with:
          go-version: ${{ matrix.go }}
      - name: Checkout source
        uses: actions/checkout@v2
      - name: Build lbcd
        run: go build .
      - name: Create datadir
        run: echo "TEMP_DATA_DIR=$(mktemp -d)" >> $GITHUB_ENV
      - name: Run lbcd
        run: ./lbcd --datadir=${{env.TEMP_DATA_DIR}}/data --logdir=${{env.TEMP_DATA_DIR}}/logs --nolisten --norpc
      - name: Remove datadir
        if: always()
        run: rm -rf ${{env.TEMP_DATA_DIR}}

37
.github/workflows/full-sync-part-2.yml vendored Normal file

@@ -0,0 +1,37 @@
name: Full Sync From 814k
on:
  workflow_dispatch:
    inputs:
      note:
        description: 'Note'
        required: false
        default: ''
jobs:
  build:
    name: Go CI
    runs-on: self-hosted
    strategy:
      matrix:
        go: [1.19]
    steps:
      - run: |
          echo "Note ${{ github.event.inputs.note }}!"
      - name: Setup Go
        uses: actions/setup-go@v2
        with:
          go-version: ${{ matrix.go }}
      - name: Checkout source
        uses: actions/checkout@v2
      - name: Build lbcd
        run: go build .
      - name: Create datadir
        run: echo "TEMP_DATA_DIR=$(mktemp -d)" >> $GITHUB_ENV
      - name: Copy initial data
        run: cp -r /home/lbry/lbcd_814k/* ${{env.TEMP_DATA_DIR}}
      - name: Run lbcd
        run: ./lbcd --datadir=${{env.TEMP_DATA_DIR}}/data --logdir=${{env.TEMP_DATA_DIR}}/logs --nolisten --norpc
      - name: Remove datadir
        if: always()
        run: rm -rf ${{env.TEMP_DATA_DIR}}

57
.github/workflows/golangci-lint.yml vendored Normal file

@@ -0,0 +1,57 @@
name: golangci-lint
env:
  # go needs absolute directories, using the $HOME variable doesn't work here.
  GOCACHE: /home/runner/work/go/pkg/build
  GOPATH: /home/runner/work/go
  GO_VERSION: '^1.19'
on:
  push:
    tags:
      - v*
    branches:
      - "*"
  pull_request:
    branches:
      - "*"
jobs:
  golangci:
    name: lint
    runs-on: ubuntu-latest
    steps:
      - name: setup go ${{ env.GO_VERSION }}
        uses: actions/setup-go@v2
        with:
          go-version: '${{ env.GO_VERSION }}'
      - name: checkout source
        uses: actions/checkout@v2
      - name: compile code
        run: go install -v ./...
      - name: golangci-lint
        uses: golangci/golangci-lint-action@v2
        with:
          # Optional: version of golangci-lint to use in form of v1.2 or v1.2.3 or `latest` to use the latest version
          version: latest
          # Optional: working directory, useful for monorepos
          # working-directory: somedir
          # Optional: golangci-lint command line arguments.
          # args: --issues-exit-code=0
          # Optional: show only new issues if it's a pull request. The default value is `false`.
          # only-new-issues: true
          # Optional: if set to true then the action will use pre-installed Go.
          skip-go-installation: true
          # Optional: if set to true then the action don't cache or restore ~/go/pkg.
          # skip-pkg-cache: true
          # Optional: if set to true then the action don't cache or restore ~/.cache/go-build.
          # skip-build-cache: true

57
.github/workflows/release.yml vendored Normal file

@@ -0,0 +1,57 @@
name: goreleaser
on:
  workflow_dispatch:
    inputs:
      note:
        description: 'Note'
        required: false
        default: ''
  pull_request:
  push:
    tags:
      - '*'
permissions:
  contents: write
jobs:
  goreleaser:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
      -
        name: Set up Go
        uses: actions/setup-go@v2
        with:
          go-version: 1.19
      # Login against a Docker registry except on PR
      # https://github.com/docker/login-action
      - name: Log into registry docker.io
        if: github.event_name != 'pull_request'
        uses: docker/login-action@28218f9b04b4f3f62068d7b6ce6ca5b26e35336c
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
      -
        name: Run GoReleaser
        uses: goreleaser/goreleaser-action@v2
        with:
          distribution: goreleaser
          version: latest
          args: release --rm-dist
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      -
        name: Upload artifacts
        uses: actions/upload-artifact@v2
        with:
          name: lbcd-${{ github.sha }}
          path: |
            dist/checksums.txt
            dist/*.tar.gz

21
.gitignore vendored

@@ -3,6 +1,7 @@
# Databases
btcd.db
lbcd.db
*-shm
*-wal
@@ -32,3 +33,23 @@ _cgo_export.*
_testmain.go
*.exe
.DS_Store
# Code coverage files
profile.tmp
profile.cov
# IDE
.idea
.vscode
# Binaries
btcd
btcctl
lbcd
lbcctl
# CI artifacts
dist
debug

152
.golangci.yml Normal file

@@ -0,0 +1,152 @@
linters-settings:
  depguard:
    list-type: blacklist
    packages:
      # logging is allowed only by logutils.Log, logrus
      # is allowed to use only in logutils package
      - github.com/sirupsen/logrus
    packages-with-error-message:
      - github.com/sirupsen/logrus: "logging is allowed only by logutils.Log"
  dupl:
    threshold: 100
  funlen:
    lines: 100
    statements: 50
  gci:
    local-prefixes: github.com/golangci/golangci-lint
  goconst:
    min-len: 2
    min-occurrences: 2
  gocritic:
    enabled-tags:
      - diagnostic
      - experimental
      - opinionated
      - performance
      - style
    disabled-checks:
      - dupImport # https://github.com/go-critic/go-critic/issues/845
      - ifElseChain
      - octalLiteral
      - whyNoLint
      - wrapperFunc
  gocyclo:
    min-complexity: 15
  goimports:
    local-prefixes: github.com/golangci/golangci-lint
  gomnd:
    settings:
      mnd:
        # don't include the "operation" and "assign"
        checks:
          - argument
          - case
          - condition
          - return
  govet:
    check-shadowing: true
    settings:
      printf:
        funcs:
          - (github.com/golangci/golangci-lint/pkg/logutils.Log).Infof
          - (github.com/golangci/golangci-lint/pkg/logutils.Log).Warnf
          - (github.com/golangci/golangci-lint/pkg/logutils.Log).Errorf
          - (github.com/golangci/golangci-lint/pkg/logutils.Log).Fatalf
  lll:
    line-length: 140
  maligned:
    suggest-new: true
  misspell:
    locale: US
  nolintlint:
    allow-leading-space: true # don't require machine-readable nolint directives (i.e. with no leading space)
    allow-unused: false # report any unused nolint directives
    require-explanation: false # don't require an explanation for nolint directives
    require-specific: false # don't require nolint directives to be specific about which linter is being skipped
linters:
  disable-all: true
  enable:
    - asciicheck
    - bodyclose
    # - deadcode
    - depguard
    # - dogsled
    # - dupl
    # - errcheck
    # - exhaustive
    - exportloopref
    # - funlen
    # - gochecknoglobals
    # - gochecknoinits
    # - gocognit
    # - goconst
    # - gocritic
    # - gocyclo
    # - godot
    # - godox
    # - goerr113
    - gofmt
    - goimports
    # - gomnd
    - goprintffuncname
    # - gosec
    # - gosimple
    # - govet
    # - ineffassign
    # - interfacer
    # - lll
    # - maligned
    # - misspell
    - nakedret
    # - nestif
    # - noctx
    # - nolintlint
    # - prealloc
    - rowserrcheck
    # - revive
    # - scopelint
    # - staticcheck
    # - structcheck
    # - stylecheck
    # - testpackage
    # - typecheck
    - unconvert
    # - unparam
    # - unused
    # - varcheck
    # - whitespace
    # - wsl
issues:
  # Excluding configuration per-path, per-linter, per-text and per-source
  exclude-rules:
    - path: _test\.go
      linters:
        - gomnd
    - path: pkg/golinters/errcheck.go
      text: "SA1019: errCfg.Exclude is deprecated: use ExcludeFunctions instead"
    - path: pkg/commands/run.go
      text: "SA1019: lsc.Errcheck.Exclude is deprecated: use ExcludeFunctions instead"
    # TODO must be removed after the release of the next version (v1.41.0)
    - path: pkg/commands/run.go
      linters:
        - gomnd
    # TODO must be removed after the release of the next version (v1.41.0)
    - path: pkg/golinters/nolintlint/nolintlint.go
      linters:
        - gomnd
    # TODO must be removed after the release of the next version (v1.41.0)
    - path: pkg/printers/tab.go
      linters:
        - gomnd
run:
  skip-dirs:
    - test/testdata_etc
    - internal/cache
    - internal/renameio
    - internal/robustio

68
.goreleaser.yml Normal file

@@ -0,0 +1,68 @@
# This is an example .goreleaser.yml file with some sensible defaults.
# Make sure to check the documentation at https://goreleaser.com
before:
  hooks:
    # You may remove this if you don't use go modules.
    - go mod tidy
    # you may remove this if you don't need go generate
    - go generate ./...
builds:
  -
    main: .
    id: "lbcd"
    binary: "lbcd"
    env:
      - CGO_ENABLED=0
    flags:
      - -trimpath
    ldflags:
      - -s -w
      - -buildid=
      - -X github.com/lbryfoundation/lbcd/version.appTag={{ .Tag }}
    targets:
      - linux_amd64
      - linux_arm64
      - darwin_amd64
      - darwin_arm64
      - windows_amd64
    mod_timestamp: '{{ .CommitTimestamp }}'
  -
    main: ./cmd/lbcctl
    id: "lbcctl"
    binary: "lbcctl"
    flags:
      - -trimpath
    ldflags:
      - -s -w
      - -buildid=
      - -X github.com/lbryfoundation/lbcd/version.appTag={{ .Tag }}
    env:
      - CGO_ENABLED=0
    targets:
      - linux_amd64
      - linux_arm64
      - darwin_amd64
      - darwin_arm64
      - windows_amd64
    mod_timestamp: '{{ .CommitTimestamp }}'
checksum:
  name_template: 'checksums.txt'
snapshot:
  name_template: "{{ .Version }}+{{ .Commit }}"
changelog:
  sort: asc
  filters:
    exclude:
      - '^docs:'
      - '^test:'
dockers:
  - use: buildx
    dockerfile: Dockerfile.goreleaser
    image_templates:
      - "docker.io/lbryfoundation/lbcd:{{ .Tag }}"
      - "docker.io/lbryfoundation/lbcd:latest"
release:
  draft: true
  prerelease: auto

.travis.yml

@@ -1,19 +0,0 @@
language: go
cache:
  directories:
    - $GOCACHE
    - $GOPATH
    - $GOPATH/pkg/mod
    - $GOPATH/github.com/golang
    - $GOPATH/gopkg.in/alecthomas
go:
  - "1.13.x"
sudo: false
install:
  - export PATH=$PATH:$PWD/linux-amd64/
  - GO111MODULE=on go install . ./cmd/...
  - GO111MODULE=off go get -u gopkg.in/alecthomas/gometalinter.v2
  - GO111MODULE=off gometalinter.v2 --install
script:
  - export PATH=$PATH:$HOME/gopath/bin
  - ./goclean.sh

955
CHANGES

@@ -1,955 +0,0 @@
============================================================================
User visible changes for btcd
A full-node bitcoin implementation written in Go
============================================================================
Changes in 0.12.0 (Fri Nov 20 2015)
- Protocol and network related changes:
- Add a new checkpoint at block height 382320 (#555)
- Implement BIP0065 which includes support for version 4 blocks, a new
consensus opcode (OP_CHECKLOCKTIMEVERIFY) that enforces transaction
lock times, and a double-threshold switchover mechanism (#535, #459,
#455)
- Implement BIP0111 which provides a new bloom filter service flag and
hence provides support for protocol version 70011 (#499)
- Add a new parameter --nopeerbloomfilters to allow disabling bloom
filter support (#499)
- Reject non-canonically encoded variable length integers (#507)
- Add mainnet peer discovery DNS seed (seed.bitcoin.jonasschnelli.ch)
(#496)
- Correct reconnect handling for persistent peers (#463, #464)
- Ignore requests for block headers if not fully synced (#444)
- Add CLI support for specifying the zone id on IPv6 addresses (#538)
- Fix a couple of issues where the initial block sync could stall (#518,
#229, #486)
- Fix an issue which prevented the --onion option from working as
intended (#446)
- Transaction relay (memory pool) changes:
- Require transactions to only include signatures encoded with the
canonical 'low-s' encoding (#512)
- Add a new parameter --minrelaytxfee to allow the minimum transaction
fee in BTC/kB to be overridden (#520)
- Retain memory pool transactions when they redeem another one that is
removed when a block is accepted (#539)
- Do not send reject messages for a transaction if it is valid but
causes an orphan transaction which depends on it to be determined
as invalid (#546)
- Refrain from attempting to add orphans to the memory pool multiple
times when the transaction they redeem is added (#551)
- Modify minimum transaction fee calculations to scale based on bytes
instead of full kilobyte boundaries (#521, #537)
- Implement signature cache:
- Provides a limited memory cache of validated signatures which is a
huge optimization when verifying blocks for transactions that are
already in the memory pool (#506)
- Add a new parameter '--sigcachemaxsize' which allows the size of the
new cache to be manually changed if desired (#506)
- Mining support changes:
- Notify getblocktemplate long polling clients when a block is pushed
via submitblock (#488)
- Speed up getblocktemplate by making use of the new signature cache
(#506)
- RPC changes:
- Implement getmempoolinfo command (#453)
- Implement getblockheader command (#461)
- Modify createrawtransaction command to accept a new optional parameter
'locktime' (#529)
- Modify listunspent result to include the 'spendable' field (#440)
- Modify getinfo command to include 'errors' field (#511)
- Add timestamps to blockconnected and blockdisconnected notifications
(#450)
- Several modifications to searchrawtransactions command:
- Accept a new optional parameter 'vinextra' which causes the results
to include information about the outputs referenced by a transaction's
inputs (#485, #487)
- Skip entries in the mempool too (#495)
- Accept a new optional parameter 'reverse' to return the results in
reverse order (most recent to oldest) (#497)
- Accept a new optional parameter 'filteraddrs' which causes the
results to only include inputs and outputs which involve the
provided addresses (#516)
- Change the notification order to notify clients about mined
transactions (recvtx, redeemingtx) before the blockconnected
notification (#449)
- Update verifymessage RPC to use the standard algorithm so it is
compatible with other implementations (#515)
- Improve ping statistics by pinging on an interval (#517)
- Websocket changes:
- Implement session command which returns a per-session unique id (#500,
#503)
- btcctl utility changes:
- Add getmempoolinfo command (#453)
- Add getblockheader command (#461)
- Add getwalletinfo command (#471)
- Notable developer-related package changes:
- Introduce a new peer package which acts as a common base for creating and
concurrently managing bitcoin network peers (#445)
- Various cleanup of the new peer package (#528, #531, #524, #534,
#549)
- Block heights now consistently use int32 everywhere (#481)
- The BlockHeader type in the wire package now provides the BtcDecode
and BtcEncode methods (#467)
- Update wire package to recognize BIP0064 (getutxo) service bit (#489)
- Export LockTimeThreshold constant from txscript package (#454)
- Export MaxDataCarrierSize constant from txscript package (#466)
- Provide new IsUnspendable function from the txscript package (#478)
- Export variable length string functions from the wire package (#514)
- Export DNS Seeds for each network from the chaincfg package (#544)
- Preliminary work towards separating the memory pool into a separate
package (#525, #548)
- Misc changes:
- Various documentation updates (#442, #462, #465, #460, #470, #473,
#505, #530, #545)
- Add installation instructions for gentoo (#542)
- Ensure an error is shown if OS limits can't be set at startup (#498)
- Tighten the standardness checks for multisig scripts (#526)
- Test coverage improvement (#468, #494, #527, #543, #550)
- Several optimizations (#457, #474, #475, #476, #508, #509)
- Minor code cleanup and refactoring (#472, #479, #482, #519, #540)
- Contributors (alphabetical order):
- Ben Echols
- Bruno Clermont
- danda
- Daniel Krawisz
- Dario Nieuwenhuis
- Dave Collins
- David Hill
- Javed Khan
- Jonathan Gillham
- Joseph Becher
- Josh Rickmar
- Justus Ranvier
- Mawuli Adzoe
- Olaoluwa Osuntokun
- Rune T. Aune
Changes in 0.11.1 (Wed May 27 2015)
- Protocol and network related changes:
- Use correct sub-command in reject message for rejected transactions
(#436, #437)
- Add a new parameter --torisolation which forces new circuits for each
connection when using tor (#430)
- Transaction relay (memory pool) changes:
- Reduce the default maximum number of allowed orphan transactions
to 1000 (#419)
- Add a new parameter --maxorphantx which allows the maximum number of
orphan transactions stored in the mempool to be specified (#419)
- RPC changes:
- Modify listtransactions result to include the 'involveswatchonly' and
'vout' fields (#427)
- Update getrawtransaction result to omit the 'confirmations' field
when it is 0 (#420, #422)
- Update signrawtransaction result to include errors (#423)
- btcctl utility changes:
- Add gettxoutproof command (#428)
- Add verifytxoutproof command (#428)
- Notable developer-related package changes:
- The btcec package now provides the ability to perform ECDH
encryption and decryption (#375)
- The block and header validation in the blockchain package has been
split to help pave the way toward concurrent downloads (#386)
- Misc changes:
- Minor peer optimization (#433)
- Contributors (alphabetical order):
- Dave Collins
- David Hill
- Federico Bond
- Ishbir Singh
- Josh Rickmar
Changes in 0.11.0 (Wed May 06 2015)
- Protocol and network related changes:
- **IMPORTANT: Update is required due to the following point**
- Correct a few corner cases in script handling which could result in
forking from the network on non-standard transactions (#425)
- Add a new checkpoint at block height 352940 (#418)
- Optimized script execution (#395, #400, #404, #409)
- Fix a case that could lead to stalled syncs (#138, #296)
- Network address manager changes:
- Implement eclipse attack countermeasures as proposed in
http://cs-people.bu.edu/heilman/eclipse (#370, #373)
- Optional address indexing changes:
- Fix an issue where a reorg could cause an orderly shutdown when the
address index is active (#340, #357)
- Transaction relay (memory pool) changes:
- Increase maximum allowed space for nulldata transactions to 80 bytes
(#331)
- Implement support for the following rules specified by BIP0062:
- The S value in ECDSA signature must be at most half the curve order
(rule 5) (#349)
- Script execution must result in a single non-zero value on the stack
(rule 6) (#347)
- NOTE: All 7 rules of BIP0062 are now implemented
- Use network adjusted time in finalized transaction checks to improve
consistency across nodes (#332)
- Process orphan transactions on acceptance of new transactions (#345)
- RPC changes:
- Add support for a limited RPC user which is not allowed admin level
operations on the server (#363)
- Implement node command for more unified control over connected peers
(#79, #341)
- Implement generate command for regtest/simnet to support
deterministically mining a specified number of blocks (#362, #407)
- Update searchrawtransactions to return the matching transactions in
order (#354)
- Correct an issue with searchrawtransactions where it could return
duplicates (#346, #354)
- Increase precision of 'difficulty' field in getblock result to 8
(#414, #415)
- Omit 'nextblockhash' field from getblock result when it is empty
(#416, #417)
- Add 'id' and 'timeoffset' fields to getpeerinfo result (#335)
- Websocket changes:
- Implement new commands stopnotifyspent, stopnotifyreceived,
stopnotifyblocks, and stopnotifynewtransactions to allow clients to
cancel notification registrations (#122, #342)
- btcctl utility changes:
- A single dash can now be used as an argument to cause that argument to
be read from stdin (#348)
- Add generate command
- Notable developer-related package changes:
- The new version 2 btcjson package has now replaced the deprecated
version 1 package (#368)
- The btcec package now performs all signing using RFC6979 deterministic
signatures (#358, #360)
- The txscript package has been significantly cleaned up and had a few
API changes (#387, #388, #389, #390, #391, #392, #393, #395, #396,
#400, #403, #404, #405, #406, #408, #409, #410, #412)
- A new PkScriptLocs function has been added to the wire package MsgTx
type which provides callers that deal with scripts optimization
opportunities (#343)
- Misc changes:
- Minor wire hashing optimizations (#366, #367)
- Other minor internal optimizations
- Contributors (alphabetical order):
- Alex Akselrod
- Arne Brutschy
- Chris Jepson
- Daniel Krawisz
- Dave Collins
- David Hill
- Jimmy Song
- Jonas Nick
- Josh Rickmar
- Olaoluwa Osuntokun
- Oleg Andreev
Changes in 0.10.0 (Sun Mar 01 2015)
- Protocol and network related changes:
- Add a new checkpoint at block height 343185
- Implement BIP066 which includes support for version 3 blocks, a new
consensus rule which prevents non-DER encoded signatures, and a
double-threshold switchover mechanism
- Rather than announcing all known addresses on getaddr requests which
can possibly result in multiple messages, randomize the results and
limit them to the max allowed by a single message (1000 addresses)
- Add more reserved IP spaces to the address manager
- Transaction relay (memory pool) changes:
- Make transactions which contain reserved opcodes nonstandard
- No longer accept or relay free and low-fee transactions that have
insufficient priority to be mined in the next block
- Implement support for the following rules specified by BIP0062:
- ECDSA signature must use strict DER encoding (rule 1)
- The signature script must only contain push operations (rule 2)
- All push operations must use the smallest possible encoding (rule 3)
- All stack values interpreted as a number must be encoded using the
shortest possible form (rule 4)
- NOTE: Rule 1 was already enforced, however the entire script now
evaluates to false rather than only the signature verification as
required by BIP0062
- Allow transactions with nulldata transaction outputs to be treated as
standard
- Mining support changes:
- Modify the getblocktemplate RPC to generate and return block templates
for version 3 blocks which are compatible with BIP0066
- Allow getblocktemplate to serve blocks when the current time is
less than the minimum allowed time for a generated block template
(https://github.com/btcsuite/btcd/issues/209)
- Crypto changes:
- Optimize scalar multiplication by the base point by using a
pre-computed table which results in approximately a 35% speedup
(https://github.com/btcsuite/btcec/issues/2)
- Optimize general scalar multiplication by using the secp256k1
endomorphism which results in approximately a 17-20% speedup
(https://github.com/btcsuite/btcec/issues/1)
- Optimize general scalar multiplication by using non-adjacent form
which results in approximately an additional 8% speedup
(https://github.com/btcsuite/btcec/issues/3)
- Implement optional address indexing:
- Add a new parameter --addrindex which will enable the creation of an
address index which can be queried to determine all transactions which
involve a given address
(https://github.com/btcsuite/btcd/issues/190)
- Add a new logging subsystem for address index related operations
- Support new searchrawtransactions RPC
(https://github.com/btcsuite/btcd/issues/185)
- RPC changes:
- Require TLS version 1.2 as the minimum version for all TLS connections
- Provide support for disabling TLS when only listening on localhost
(https://github.com/btcsuite/btcd/pull/192)
- Modify help output for all commands to provide much more consistent
and detailed information
- Correct case in getrawtransaction which would refuse to serve certain
transactions with invalid scripts
(https://github.com/btcsuite/btcd/issues/210)
- Correct error handling in the getrawtransaction RPC which could lead
to a crash in rare cases
(https://github.com/btcsuite/btcd/issues/196)
- Update getinfo RPC to include the appropriate 'timeoffset' calculated
from the median network time
- Modify listreceivedbyaddress result type to include txids field so it
is compatible
- Add 'iswatchonly' field to validateaddress result
- Add 'startingpriority' and 'currentpriority' fields to getrawmempool
(https://github.com/btcsuite/btcd/issues/178)
- Don't omit the 'confirmations' field from getrawtransaction when it is
zero
- Websocket changes:
- Modify the behavior of the rescan command to automatically register
for notifications about transactions paying to rescanned addresses
or spending outputs from the final rescan utxo set when the rescan
is through the best block in the chain
- btcctl utility changes:
- Make the list of commands available via the -l option rather than
dumping the entire list on usage errors
- Alphabetize and categorize the list of commands by chain and wallet
- Make the help option only show the help options instead of also
dumping all of the commands
- Make the usage syntax much more consistent and correct a few cases of
misnamed fields
(https://github.com/btcsuite/btcd/issues/305)
- Improve usage errors to show the specific parameter number, reason,
and error code
- Only show the usage for a specific command when a valid command
is provided with invalid parameters
- Add support for a SOCK5 proxy
- Modify output for integer fields (such as timestamps) to display
normally instead in scientific notation
- Add invalidateblock command
- Add reconsiderblock command
- Add createnewaccount command
- Add renameaccount command
- Add searchrawtransactions command
- Add importaddress command
- Add importpubkey command
- showblock utility changes:
- Remove utility in favor of the RPC getblock method
- Notable developer-related package changes:
- Many of the core packages have been relocated into the btcd repository
(https://github.com/btcsuite/btcd/issues/214)
- A new version of the btcjson package that has been completely
redesigned from the ground up based upon how the project has
evolved and lessons learned while using it since it was first written
is now available in the btcjson/v2/btcjson directory
- This will ultimately replace the current version so anyone making
use of this package will need to update their code accordingly
- The btcec package now provides better facilities for working directly
with its public and private keys without having to mix elements from
the ecdsa package
- Update the script builder to ensure all rules specified by BIP0062 are
adhered to when creating scripts
- The blockchain package now provides a MedianTimeSource interface and
concrete implementation for providing time samples from remote peers
and using that data to calculate an offset against the local time
- Misc changes:
- Fix a slow memory leak due to tickers not being stopped
(https://github.com/btcsuite/btcd/issues/189)
- Fix an issue where a mix of orphans and SPV clients could trigger a
condition where peers would no longer be served
(https://github.com/btcsuite/btcd/issues/231)
- The RPC username and password can now contain symbols which previously
conflicted with special symbols used in URLs
- Improve handling of obtaining random nonces to prevent cases where it
could error when not enough entropy was available
- Improve handling of home directory creation errors such as in the case
of unmounted symlinks (https://github.com/btcsuite/btcd/issues/193)
- Improve the error reporting for rejected transactions to include the
inputs which are missing and/or being double spent
- Update sample config file with new options and correct a comment
regarding the fact the RPC server only listens on localhost by default
(https://github.com/btcsuite/btcd/issues/218)
- Update the continuous integration builds to run several tools which
help keep code quality high
- Significant amount of internal code cleanup and improvements
- Other minor internal optimizations
- Code Contributors (alphabetical order):
- Beldur
- Ben Holden-Crowther
- Dave Collins
- David Evans
- David Hill
- Guilherme Salgado
- Javed Khan
- Jimmy Song
- John C. Vernaleo
- Jonathan Gillham
- Josh Rickmar
- Michael Ford
- Michail Kargakis
- kac
- Olaoluwa Osuntokun
Changes in 0.9.0 (Sat Sep 20 2014)
- Protocol and network related changes:
- Add a new checkpoint at block height 319400
- Add support for BIP0037 bloom filters
(https://github.com/conformal/btcd/issues/132)
- Implement BIP0061 reject handling and hence support for protocol
version 70002 (https://github.com/conformal/btcd/issues/133)
- Add testnet DNS seeds for peer discovery (testnet-seed.alexykot.me
and testnet-seed.bitcoin.schildbach.de)
- Add mainnet DNS seed for peer discovery (seeds.bitcoin.open-nodes.org)
- Make multisig transactions with non-null dummy data nonstandard
(https://github.com/conformal/btcd/issues/131)
- Make transactions with an excessive number of signature operations
nonstandard
- Perform initial DNS lookups concurrently which allows connections
more quickly
- Improve the address manager to significantly reduce memory usage and
add tests
- Remove orphan transactions when they appear in a mined block
(https://github.com/conformal/btcd/issues/166)
- Apply incremental back off on connection retries for persistent peers
that give invalid replies to mirror the logic used for failed
connections (https://github.com/conformal/btcd/issues/103)
- Correct rate-limiting of free and low-fee transactions
- Mining support changes:
- Implement getblocktemplate RPC with the following support:
(https://github.com/conformal/btcd/issues/124)
- BIP0022 Non-Optional Sections
- BIP0022 Long Polling
- BIP0023 Basic Pool Extensions
- BIP0023 Mutation coinbase/append
- BIP0023 Mutations time, time/increment, and time/decrement
- BIP0023 Mutation transactions/add
- BIP0023 Mutations prevblock, coinbase, and generation
- BIP0023 Block Proposals
- Implement built-in concurrent CPU miner
(https://github.com/conformal/btcd/issues/137)
NOTE: CPU mining on mainnet is pointless. This has been provided
for testing purposes such as for the new simulation test network
- Add --generate flag to enable CPU mining
- Deprecate the --getworkkey flag in favor of --miningaddr which
specifies which addresses generated blocks will choose from to pay
the subsidy to
- RPC changes:
- Implement gettxout command
(https://github.com/conformal/btcd/issues/141)
- Implement validateaddress command
- Implement verifymessage command
- Mark getunconfirmedbalance RPC as wallet-only
- Mark getwalletinfo RPC as wallet-only
- Update getgenerate, setgenerate, gethashespersec, and getmininginfo
to return the appropriate information about new CPU mining status
- Modify getpeerinfo pingtime and pingwait field types to float64 so
they are compatible
- Improve disconnect handling for normal HTTP clients
- Make error code returns for invalid hex more consistent
- Websocket changes:
- Switch to a new more efficient websocket package
(https://github.com/conformal/btcd/issues/134)
- Add rescanfinished notification
- Modify the rescanprogress notification to include block hash as well
as height (https://github.com/conformal/btcd/issues/151)
- btcctl utility changes:
- Accept --simnet flag which automatically selects the appropriate port
and TLS certificates needed to communicate with btcd and btcwallet on
the simulation test network
- Fix createrawtransaction command to send amounts denominated in BTC
- Add estimatefee command
- Add estimatepriority command
- Add getmininginfo command
- Add getnetworkinfo command
- Add gettxout command
- Add lockunspent command
- Add signrawtransaction command
- addblock utility changes:
- Accept --simnet flag which automatically selects the appropriate port
and TLS certificates needed to communicate with btcd and btcwallet on
the simulation test network
- Notable developer-related package changes:
- Provide a new bloom package in btcutil which allows creating and
working with BIP0037 bloom filters
- Provide a new hdkeychain package in btcutil which allows working with
BIP0032 hierarchical deterministic key chains
- Introduce a new btcnet package which houses network parameters
- Provide new simnet network (--simnet) which is useful for private
simulation testing
- Enforce low S values in serialized signatures as detailed in BIP0062
- Return errors from all methods on the btcdb.Db interface
(https://github.com/conformal/btcdb/issues/5)
- Allow behavior flags to alter btcchain.ProcessBlock
(https://github.com/conformal/btcchain/issues/5)
- Provide a new SerializeSize API for blocks
(https://github.com/conformal/btcwire/issues/19)
- Several of the core packages now work with Google App Engine
- Misc changes:
- Correct an issue where the database could corrupt under certain
circumstances which would require a new chain download
- Slightly optimize deserialization
- Use the correct IP block for he.net
- Fix an issue where it was possible the block manager could hang on
shutdown
- Update sample config file so the comments are on a separate line
rather than the end of a line so they are not interpreted as settings
(https://github.com/conformal/btcd/issues/135)
- Correct an issue where getdata requests were not being properly
throttled which could lead to larger than necessary memory usage
- Always show help when given the help flag even when the config file
contains invalid entries
- General code cleanup and minor optimizations
Changes in 0.8.0-beta (Sun May 25 2014)
- Btcd is now Beta (https://github.com/conformal/btcd/issues/130)
- Add a new checkpoint at block height 300255
- Protocol and network related changes:
- Lower the minimum transaction relay fee to 1000 satoshi to match
recent reference client changes
(https://github.com/conformal/btcd/issues/100)
- Raise the maximum signature script size to support standard 15-of-15
multi-signature pay-to-script-hash transactions with compressed pubkeys
to remain compatible with the reference client
(https://github.com/conformal/btcd/issues/128)
- Reduce max bytes allowed for a standard nulldata transaction to 40 for
compatibility with the reference client
- Introduce a new btcnet package which houses all of the network params
for each network (mainnet, testnet3, regtest) to ultimately enable
easier addition and tweaking of networks without needing to change
several packages
- Fix several script discrepancies found by reference client test data
- Add new DNS seed for peer discovery (seed.bitnodes.io)
- Reduce the max known inventory cache from 20000 items to 1000 items
- Fix an issue where unknown inventory types could lead to a hung peer
- Implement inventory rebroadcast handler for sendrawtransaction
(https://github.com/conformal/btcd/issues/99)
- Update user agent to fully support BIP0014
(https://github.com/conformal/btcwire/issues/10)
- Implement initial mining support:
- Add a new logging subsystem for mining related operations
- Implement infrastructure for creating block templates
- Provide options to control block template creation settings
- Support the getwork RPC
- Allow address identifiers to apply to more than one network since both
testnet3 and the regression test network unfortunately use the same
identifier
- RPC changes:
- Set the content type for HTTP POST RPC connections to application/json
(https://github.com/conformal/btcd/issues/121)
- Modified the RPC server startup so it only requires at least one valid
listen interface
- Correct an error path where it was possible certain errors would not
be returned
- Implement getwork command
(https://github.com/conformal/btcd/issues/125)
- Update sendrawtransaction command to reject orphans
- Update sendrawtransaction command to include the reason a transaction
was rejected
- Update getinfo command to populate connection count field
- Update getinfo command to include relay fee field
(https://github.com/conformal/btcd/issues/107)
- Allow transactions submitted with sendrawtransaction to bypass the
rate limiter
- Allow the getcurrentnet and getbestblock extensions to be accessed via
HTTP POST in addition to Websockets
(https://github.com/conformal/btcd/issues/127)
- Websocket changes:
- Rework notifications to ensure they are delivered in the order they
occur
- Rename notifynewtxs command to notifyreceived (funds received)
- Rename notifyallnewtxs command to notifynewtransactions
- Rename alltx notification to txaccepted
- Rename allverbosetx notification to txacceptedverbose
(https://github.com/conformal/btcd/issues/98)
- Add rescan progress notification
- Add recvtx notification
- Add redeemingtx notification
- Modify notifyspent command to accept an array of outpoints
(https://github.com/conformal/btcd/issues/123)
- Significantly optimize the rescan command to yield up to a 60x speed
increase
- btcctl utility changes:
- Add createencryptedwallet command
- Add getblockchaininfo command
- Add importwallet command
- Add addmultisigaddress command
- Add setgenerate command
- Accept --testnet and --wallet flags which automatically select
the appropriate port and TLS certificates needed to communicate
with btcd and btcwallet (https://github.com/conformal/btcd/issues/112)
- Allow path expansion from config file entries
(https://github.com/conformal/btcd/issues/113)
- Minor refactor to simplify handling of options
- addblock utility changes:
- Improve logging by making it consistent with the logging provided by
btcd (https://github.com/conformal/btcd/issues/90)
- Improve several package APIs for developers:
- Add new amount type for consistently handling monetary values
- Add new coin selector API
- Add new WIF (Wallet Import Format) API
- Add new crypto types for private keys and signatures
- Add new API to sign transactions including script merging and hash
types
- Expose function to extract all pushed data from a script
(https://github.com/conformal/btcscript/issues/8)
- Misc changes:
- Optimize address manager shuffling to do 67% less work on average
- Resolve a couple of benign data races found by the race detector
(https://github.com/conformal/btcd/issues/101)
- Add IP address to all peer related errors to clarify which peer is the
cause (https://github.com/conformal/btcd/issues/102)
- Fix a UPNP case issue that prevented the --upnp option from working
with some UPNP servers
- Update documentation in the sample config file regarding debug levels
- Adjust some logging levels to improve debug messages
- Improve the throughput of query messages to the block manager
- Several minor optimizations to reduce GC churn and enhance speed
- Other minor refactoring
- General code cleanup
Changes in 0.7.0 (Thu Feb 20 2014)
- Fix an issue when parsing scripts which contain a multi-signature script
which require zero signatures such as testnet block
000000001881dccfeda317393c261f76d09e399e15e27d280e5368420f442632
(https://github.com/conformal/btcscript/issues/7)
- Add check to ensure all transactions accepted to mempool only contain
canonical data pushes (https://github.com/conformal/btcscript/issues/6)
- Fix an issue causing excessive memory consumption
- Significantly rework and improve the websocket notification system:
- Each client is now independent so slow clients no longer limit the
speed of other connected clients
- Potentially long-running operations such as rescans are now run in
their own handler and rate-limited to one operation at a time without
preventing simultaneous requests from the same client for the faster
requests or notifications
- A couple of scenarios which could cause shutdown to hang have been
resolved
- Update notifynewtx notifications to support all address types instead
of only pay-to-pubkey-hash
- Provide a --rpcmaxwebsockets option to allow limiting the number of
concurrent websocket clients
- Add a new websocket command notifyallnewtxs to request notifications
(https://github.com/conformal/btcd/issues/86) (thanks @flammit)
- Improve btcctl utility in the following ways:
- Add getnetworkhashps command
- Add gettransaction command (wallet-specific)
- Add signmessage command (wallet-specific)
- Update getwork command to accept
- Continue cleanup and work on implementing the RPC API:
- Implement getnettotals command
(https://github.com/conformal/btcd/issues/84)
- Implement networkhashps command
(https://github.com/conformal/btcd/issues/87)
- Update getpeerinfo to always include syncnode field even when false
- Remove help addenda for getpeerinfo now that it supports all fields
- Close standard RPC connections on auth failure
- Provide a --rpcmaxclients option to allow limiting the number of
concurrent RPC clients (https://github.com/conformal/btcd/issues/68)
- Include IP address in RPC auth failure log messages
- Resolve a rather harmless data race found by the race detector
(https://github.com/conformal/btcd/issues/94)
- Increase block priority size and max standard transaction size to 50k
and 100k, respectively (https://github.com/conformal/btcd/issues/71)
- Add rate limiting of free transactions to the memory pool to prevent
penny flooding (https://github.com/conformal/btcd/issues/40)
- Provide a --logdir option (https://github.com/conformal/btcd/issues/95)
- Change the default log file path to include the network
- Add a new ScriptBuilder interface to btcscript to support creation of
custom scripts (https://github.com/conformal/btcscript/issues/5)
- General code cleanup
Changes in 0.6.0 (Tue Feb 04 2014)
- Fix an issue when parsing scripts which contain invalid signatures that
caused a chain fork on block
0000000000000001e4241fd0b3469a713f41c5682605451c05d3033288fb2244
- Correct an issue which could lead to an error in removeBlockNode
(https://github.com/conformal/btcchain/issues/4)
- Improve addblock utility as follows:
- Check imported blocks against all chain rules and checkpoints
- Skip blocks which are already known so you can stop and restart the
import or start the import after you have already downloaded a portion
of the chain
- Correct an issue where the utility did not shutdown cleanly after
processing all blocks
- Add error on attempt to import orphan blocks
- Improve error handling and reporting
- Display statistics after input file has been fully processed
- Rework, optimize, and improve headers-first mode:
- Resuming the chain sync from any point before the final checkpoint
will now use headers-first mode
(https://github.com/conformal/btcd/issues/69)
- Verify all checkpoints as opposed to only the final one
- Reduce and bound memory usage
- Rollback to the last known good point when a header does not match a
checkpoint
- Log information about what is happening with headers
- Improve btcctl utility in the following ways:
- Add getaddednodeinfo command
- Add getnettotals command
- Add getblocktemplate command (wallet-specific)
- Add getwork command (wallet-specific)
- Add getnewaddress command (wallet-specific)
- Add walletpassphrasechange command (wallet-specific)
- Add walletlock command (wallet-specific)
- Add sendfrom command (wallet-specific)
- Add sendmany command (wallet-specific)
- Add settxfee command (wallet-specific)
- Add listsinceblock command (wallet-specific)
- Add listaccounts command (wallet-specific)
- Add keypoolrefill command (wallet-specific)
- Add getreceivedbyaccount command (wallet-specific)
- Add getrawchangeaddress command (wallet-specific)
- Add gettxoutsetinfo command (wallet-specific)
- Add listaddressgroupings command (wallet-specific)
- Add listlockunspent command (wallet-specific)
- Add listlock command (wallet-specific)
- Add listreceivedbyaccount command (wallet-specific)
- Add validateaddress command (wallet-specific)
- Add verifymessage command (wallet-specific)
- Add sendtoaddress command (wallet-specific)
- Continue cleanup and work on implementing the RPC API:
- Implement submitblock command
(https://github.com/conformal/btcd/issues/61)
- Implement help command
- Implement ping command
- Implement getaddednodeinfo command
(https://github.com/conformal/btcd/issues/78)
- Implement getinfo command
- Update getpeerinfo to support bytesrecv and bytessent
(https://github.com/conformal/btcd/issues/83)
- Improve and correct several RPC server and websocket areas:
- Change the connection endpoint for websockets from /wallet to /ws
(https://github.com/conformal/btcd/issues/80)
- Implement an alternative authentication for websockets so clients
such as javascript from browsers that don't support setting HTTP
headers can authenticate (https://github.com/conformal/btcd/issues/77)
- Add an authentication deadline for RPC connections
(https://github.com/conformal/btcd/issues/68)
- Use standard authentication failure responses for RPC connections
- Make automatically generated certificate more standard so it works
from client such as node.js and Firefox
- Correct some minor issues which could prevent the RPC server from
shutting down in an orderly fashion
- Make all websocket notifications require registration
- Change the data sent over websockets to text since it is JSON-RPC
- Allow connections that do not have an Origin header set
- Expose and track the number of bytes read and written per peer
(https://github.com/conformal/btcwire/issues/6)
- Correct an issue with sendrawtransaction when invoked via websockets
which prevented a minedtx notification from being added
- Rescan operations issued from remote wallets are now stopped when
the wallet disconnects mid-operation
(https://github.com/conformal/btcd/issues/66)
- Several optimizations related to fetching block information from the
database
- General code cleanup
Changes in 0.5.0 (Mon Jan 13 2014)
- Optimize initial block download by introducing a new mode which
downloads the block headers first (up to the final checkpoint)
- Improve peer handling to remove the potential for slow peers to cause
sluggishness amongst all peers
(https://github.com/conformal/btcd/issues/63)
- Fix an issue where the initial block sync could stall when the sync peer
disconnects (https://github.com/conformal/btcd/issues/62)
- Correct an issue where --externalip was doing a DNS lookup on the full
host:port instead of just the host portion
(https://github.com/conformal/btcd/issues/38)
- Fix an issue which could lead to a panic on chain switches
(https://github.com/conformal/btcd/issues/70)
- Improve btcctl utility in the following ways:
- Show getdifficulty output as floating point to 6 digits of precision
- Show all JSON object replies formatted as standard JSON
- Allow btcctl getblock to accept optional params
- Add getaccount command (wallet-specific)
- Add getaccountaddress command (wallet-specific)
- Add sendrawtransaction command
- Continue cleanup and work on implementing RPC API calls
- Update getrawmempool to support new optional verbose flag
- Update getrawtransaction to match the reference client
- Update getblock to support new optional verbose flag
- Update raw transactions to fully match the reference client including
support for all transaction types and address types
- Correct getrawmempool fee field to return BTC instead of Satoshi
- Correct getpeerinfo service flag to return 8 digit string so it
matches the reference client
- Correct verifychain to return a boolean
- Implement decoderawtransaction command
- Implement createrawtransaction command
- Implement decodescript command
- Implement gethashespersec command
- Allow RPC handler overrides when invoked via a websocket versus
legacy connection
- Add new DNS seed for peer discovery
- Display user agent on new valid peer log message
(https://github.com/conformal/btcd/issues/64)
- Notify wallet when new transactions that pay to registered addresses
show up in the mempool before being mined into a block
- Support a tor-specific proxy in addition to a normal proxy
(https://github.com/conformal/btcd/issues/47)
- Remove deprecated sqlite3 imports from utilities
- Remove leftover profile write from addblock utility
- Quite a bit of code cleanup and refactoring to improve maintainability
Changes in 0.4.0 (Thu Dec 12 2013)
- Allow listen interfaces to be specified via --listen instead of only the
port (https://github.com/conformal/btcd/issues/33)
- Allow listen interfaces for the RPC server to be specified via
--rpclisten instead of only the port
(https://github.com/conformal/btcd/issues/34)
- Only disable listening when --connect or --proxy are used when no
--listen interface are specified
(https://github.com/conformal/btcd/issues/10)
- Add several new standard transaction checks to transaction memory pool:
- Support nulldata scripts as standard
- Only allow a max of one nulldata output per transaction
- Enforce a maximum of 3 public keys in multi-signature transactions
- The number of signatures in multi-signature transactions must not
exceed the number of public keys
- The number of inputs to a signature script must match the expected
number of inputs for the script type
- The number of inputs pushed onto the stack by a redeeming signature
script must match the number of inputs consumed by the referenced
public key script
- When a block is connected, remove any transactions from the memory pool
which are now double spends as a result of the newly connected
transactions
- Don't relay transactions resurrected during a chain switch since
other peers will also be switching chains and therefore already know
about them
- Cleanup a few cases where rejected transactions showed as an error
rather than as a rejected transaction
- Ignore the default configuration file when --regtest (regression test
mode) is specified
- Implement TLS support for RPC including automatic certificate generation
- Support HTTP authentication headers for web sockets
- Update address manager to recognize and properly work with Tor
addresses (https://github.com/conformal/btcd/issues/36) and
(https://github.com/conformal/btcd/issues/37)
- Improve btcctl utility in the following ways:
- Add the ability to specify a configuration file
- Add a default entry for the RPC cert to point to the location
it will likely be in the btcd home directory
- Implement --version flag
- Provide a --notls option to support non-TLS configurations
- Fix a couple of minor races found by the Go race detector
- Improve logging
- Allow logging level to be specified on a per subsystem basis
(https://github.com/conformal/btcd/issues/48)
- Allow logging levels to be dynamically changed via RPC
(https://github.com/conformal/btcd/issues/15)
- Implement a rolling log file with a max of 10MB per file and a
rotation size of 3 which results in a max logging size of 30 MB
- Correct a minor issue with the rescanning websocket call
(https://github.com/conformal/btcd/issues/54)
- Fix a race with pushing address messages that could lead to a panic
(https://github.com/conformal/btcd/issues/58)
- Improve which external IP address is reported to peers based on which
interface they are connected through
(https://github.com/conformal/btcd/issues/35)
- Add --externalip option to allow an external IP address to be specified
for cases such as tor hidden services or advanced network configurations
(https://github.com/conformal/btcd/issues/38)
- Add --upnp option to support automatic port mapping via UPnP
(https://github.com/conformal/btcd/issues/51)
- Update Ctrl+C interrupt handler to properly sync address manager and
remove the UPnP port mapping (if needed)
- Continue cleanup and work on implementing RPC API calls
- Add importprivkey (import private key) command to btcctl
- Update getrawtransaction to provide addresses properly, support
new verbose param, and match the reference implementation with the
exception of MULTISIG (thanks @flammit)
- Update getblock with new verbose flag (thanks @flammit)
- Add listtransactions command to btcctl
- Add getbalance command to btcctl
- Add basic support for btcd to run as a native Windows service
(https://github.com/conformal/btcd/issues/42)
- Package addblock utility with Windows MSIs
- Add support for TravisCI (continuous build integration)
- Cleanup some documentation and usage
- Several other minor bug fixes and general code cleanup
Changes in 0.3.3 (Wed Nov 13 2013)
- Significantly improve initial block chain download speed
(https://github.com/conformal/btcd/issues/20)
- Add a new checkpoint at block height 267300
- Optimize most recently used inventory handling
(https://github.com/conformal/btcd/issues/21)
- Optimize duplicate transaction input check
(https://github.com/conformal/btcchain/issues/2)
- Optimize transaction hashing
(https://github.com/conformal/btcd/issues/25)
- Rework and optimize wallet listener notifications
(https://github.com/conformal/btcd/issues/22)
- Optimize serialization and deserialization
(https://github.com/conformal/btcd/issues/27)
- Add support for minimum transaction fee to memory pool acceptance
(https://github.com/conformal/btcd/issues/29)
- Improve leveldb database performance by removing explicit GC call
- Fix an issue where Ctrl+C was not always finishing orderly database
shutdown
- Fix an issue in the script handling for OP_CHECKSIG
- Impose max limits on all variable length protocol entries to prevent
abuse from malicious peers
- Enforce DER signatures for transactions allowed into the memory pool
- Separate the debug profile http server from the RPC server
- Rework of the RPC code to improve performance and make the code cleaner
- The getrawtransaction RPC call now properly checks the memory pool
before consulting the db (https://github.com/conformal/btcd/issues/26)
- Add support for the following RPC calls: getpeerinfo, getconnectedcount,
addnode, verifychain
(https://github.com/conformal/btcd/issues/13)
(https://github.com/conformal/btcd/issues/17)
- Implement rescan websocket extension to allow wallet rescans
- Use correct paths for application data storage for all supported
operating systems (https://github.com/conformal/btcd/issues/30)
- Add a default redirect to the http profiling page when accessing the
http profile server
- Add a new --cpuprofile option which can be used to generate CPU
profiling data on platforms that support it
- Several other minor performance optimizations
- Other minor bug fixes and general code cleanup
Changes in 0.3.2 (Tue Oct 22 2013)
- Fix an issue that could cause the download of the block chain to stall
(https://github.com/conformal/btcd/issues/12)
- Remove deprecated sqlite as an available database backend
- Close sqlite compile issue as sqlite has now been removed
(https://github.com/conformal/btcd/issues/11)
- Change default RPC ports to 8334 (mainnet) and 18334 (testnet)
- Continue cleanup and work on implementing RPC API calls
- Add support for the following RPC calls: getrawmempool,
getbestblockhash, decoderawtransaction, getdifficulty,
getconnectioncount, getpeerinfo, and addnode
- Improve the btcctl utility that is used to issue JSON-RPC commands
- Fix an issue preventing btcd from cleanly shutting down with the RPC
stop command
- Add a number of database interface tests to ensure backends implement
the expected interface
- Expose some additional information from btcscript to be used for
identifying "standard"" transactions
- Add support for plan9 - thanks @mischief
(https://github.com/conformal/btcd/pull/19)
- Other minor bug fixes and general code cleanup
Changes in 0.3.1-alpha (Tue Oct 15 2013)
- Change default database to leveldb
NOTE: This does mean you will have to redownload the block chain. Since we
are still in alpha, we didn't feel writing a converter was worth the time as
it would take away from more important issues at this stage
- Add a warning if there are multiple block chain databases of different types
- Fix issue with unexpected EOF in leveldb -- https://github.com/conformal/btcd/issues/18
- Fix issue preventing block 21066 on testnet -- https://github.com/conformal/btcchain/issues/1
- Fix issue preventing block 96464 on testnet -- https://github.com/conformal/btcscript/issues/1
- Optimize transaction lookups
- Correct a few cases of list removal that could result in improper cleanup
of no longer needed orphans
- Add functionality to increase ulimits on non-Windows platforms
- Add support for mempool command which allows remote peers to query the
transaction memory pool via the bitcoin protocol
- Clean up logging a bit
- Add a flag to disable checkpoints for developers
- Add a lot of useful debug logging such as message summaries
- Other minor bug fixes and general code cleanup
Initial Release 0.3.0-alpha (Sat Oct 05 2013):
- Initial release

40
Dockerfile Normal file
View file

@ -0,0 +1,40 @@
# This Dockerfile builds lbcd from source and packages it into a small container image based on a slim Debian base.
#
# Clone this repository and run the following command to build and tag a fresh lbcd amd64 container:
#
# docker build . -t yourregistry/lbcd
#
# You can use the following command to build an arm64v8 container:
#
# docker build . -t yourregistry/lbcd --build-arg ARCH=arm64v8
#
# For more information on how to use this docker image, visit:
# https://github.com/lbryio/lbcd/tree/master/docs
#
# 9246 Mainnet LBRY peer-to-peer port
# 9245 Mainnet RPC port
ARG ARCH=amd64
FROM golang:1.19 AS build-container
ARG ARCH
ADD . /app
WORKDIR /app
RUN set -ex \
&& if [ "${ARCH}" = "amd64" ]; then export GOARCH=amd64; fi \
&& if [ "${ARCH}" = "arm32v7" ]; then export GOARCH=arm; fi \
&& if [ "${ARCH}" = "arm64v8" ]; then export GOARCH=arm64; fi \
&& echo "Compiling for $GOARCH" \
&& go install -v . ./cmd/...
FROM $ARCH/debian:bullseye-20220418-slim
COPY --from=build-container /go/bin /bin
VOLUME ["/root/.lbcd"]
EXPOSE 9245 9246
ENTRYPOINT ["lbcd"]

9
Dockerfile.goreleaser Normal file
View file

@ -0,0 +1,9 @@
FROM debian:bullseye-20220418-slim
COPY lbcd lbcctl /bin/
VOLUME ["/root/.lbcd"]
EXPOSE 9245 9246
ENTRYPOINT ["lbcd"]

View file

@ -1,5 +1,6 @@
ISC License
Copyright (c) 2021 The LBRY developers
Copyright (c) 2013-2017 The btcsuite developers
Copyright (c) 2015-2016 The Decred developers

369
README.md
View file

@ -1,130 +1,335 @@
btcd
====
# lbcd
[![Build Status](https://travis-ci.org/btcsuite/btcd.png?branch=master)](https://travis-ci.org/btcsuite/btcd)
[![ISC License](http://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](http://godoc.org/github.com/btcsuite/btcd)
[![Build Status](https://github.com/lbryio/lbcd/workflows/Build%20and%20Test/badge.svg)](https://github.com/lbryio/lbcd/actions)
[![Coverage Status](https://coveralls.io/repos/github/lbryio/lbcd/badge.svg?branch=master)](https://coveralls.io/github/lbryio/lbcd?branch=master)
[![ISC License](https://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
<!--[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](https://pkg.go.dev/github.com/lbryio/lbcd)-->
btcd is an alternative full node bitcoin implementation written in Go (golang).
**lbcd** is a full node implementation of LBRY's blockchain written in Go (golang).
This project is currently under active development and is in a Beta state. It
is extremely stable and has been in production use since October 2013.
The software stack developed by the LBRY team has been fully migrated to **lbcd**.
It properly downloads, validates, and serves the block chain using the exact
rules (including consensus bugs) for block acceptance as Bitcoin Core. We have
taken great care to avoid btcd causing a fork to the block chain. It includes a
full block validation testing framework which contains all of the 'official'
block acceptance tests (and some additional ones) that is run on every pull
request to help ensure it properly follows consensus. Also, it passes all of
the JSON test data in the Bitcoin Core code.
We're working with exchanges and pool operators to migrate from **lbrycrd** to **lbcd**.
It also properly relays newly mined blocks, maintains a transaction pool, and
relays individual transactions that have not yet made it into a block. It
ensures all individual transactions admitted to the pool follow the rules
required by the block chain and also includes more strict checks which filter
transactions based on miner requirements ("standard" transactions).
If you're integrating with **lbcd+lbcwallet**, please check the Wiki for the current list of [supported RPCs](wiki/RPC-availability).
One key difference between btcd and Bitcoin Core is that btcd does *NOT* include
wallet functionality and this was a very intentional design decision. See the
blog entry [here](https://blog.conformal.com/btcd-not-your-moms-bitcoin-daemon)
for more details. This means you can't actually make or receive payments
directly with btcd. That functionality is provided by the
[btcwallet](https://github.com/btcsuite/btcwallet) and
[Paymetheus](https://github.com/btcsuite/Paymetheus) (Windows-only) projects
which are both under active development.
Note: **lbcd** does *NOT* include wallet functionality. That functionality is provided by the
[lbcwallet](https://github.com/lbryio/lbcwallet) and the [LBRY SDK](https://github.com/lbryio/lbry-sdk).
## Requirements
[Go](http://golang.org) 1.12 or newer.
All common operating systems are supported. lbcd requires at least 8GB of RAM
and at least 100GB of disk storage. Both RAM and disk requirements increase slowly over time.
Using a fast NVMe disk is recommended.
## Installation
#### Windows - MSI Available
Acquire binary files from [releases](https://github.com/lbryio/lbcd/releases)
https://github.com/btcsuite/btcd/releases
For compilation, [Go](http://golang.org) 1.19 or newer is required.
Install Go according to its [installation instructions](http://golang.org/doc/install).
#### Linux/BSD/MacOSX/POSIX - Build from Source
``` sh
# lbcd (full node)
$ go install github.com/lbryio/lbcd@latest
- Install Go according to the installation instructions here:
http://golang.org/doc/install
- Ensure Go was installed properly and is a supported version:
```bash
$ go version
$ go env GOROOT GOPATH
# lbcctl (rpc client utility)
$ go install github.com/lbryio/lbcd/cmd/lbcctl@latest
```
NOTE: The `GOROOT` and `GOPATH` above must not be the same path. It is
recommended that `GOPATH` is set to a directory in your home directory such as
`~/goprojects` to avoid write permission issues. It is also recommended to add
`$GOPATH/bin` to your `PATH` at this point.
## Usage
- Run the following commands to obtain btcd, all dependencies, and install it:
Default application folder `${LBCDDIR}`:
```bash
$ cd $GOPATH/src/github.com/btcsuite/btcd
$ GO111MODULE=on go install -v . ./cmd/...
- Linux: `~/.lbcd/`
- MacOS: `/Users/<username>/Library/Application Support/Lbcd/`
### Start **lbcd**
``` sh
./lbcd
```
- btcd (and utilities) will now be installed in ```$GOPATH/bin```. If you did
not already add the bin directory to your system path during Go installation,
we recommend you do so now.
**lbcd** loads its config file from `"${LBCDDIR}/lbcd.conf"`.
## Updating
If no config file is found, **lbcd** creates a [default one](sample-lbcd.conf), which includes all available options with their default settings, except for the *RPC credentials*, which are randomly generated (see below).
#### Windows
### RPC server
Install a newer MSI
RPC credentials (`rpcuser` and `rpcpass`) are required to enable the RPC server. They can be specified in `"${LBCDDIR}/lbcd.conf"` or using command line options:
#### Linux/BSD/MacOSX/POSIX - Build from Source
``` sh
./lbcd --rpcuser=rpcuser --rpcpass=rpcpass
- Run the following commands to update btcd, all dependencies, and install it:
```bash
$ cd $GOPATH/src/github.com/btcsuite/btcd
$ git pull
$ GO111MODULE=on go install -v . ./cmd/...
2022-07-28 12:28:19.627 [INF] RPCS: RPC server listening on 0.0.0.0:9245
2022-07-28 12:28:19.627 [INF] RPCS: RPC server listening on [::]:9245
```
## Getting Started
### Working with TLS (Default)
btcd has several configuration options available to tweak how it runs, but all
of the basic operations described in the intro section work with zero
configuration.
By default, **lbcd** runs its RPC server with TLS enabled and generates `rpc.cert` and `rpc.key` under `${LBCDDIR}` if they do not already exist.
#### Windows (Installed from MSI)
To interact with the RPC server, a client has to either specify the `rpc.cert` or disable TLS certificate verification.
Launch btcd from your Start menu.
Interact with **lbcd** RPC using `lbcctl`
#### Linux/BSD/POSIX/Source
``` sh
$ ./lbcctl --rpccert "${LBCDDIR}/rpc.cert" getblockcount
```bash
$ ./btcd
# or disable the certificate verification
$ ./lbcctl --skipverify getblockcount
1200062
```
## IRC
Interact with **lbcd** RPC using `curl`
- irc.freenode.net
- channel #btcd
- [webchat](https://webchat.freenode.net/?channels=btcd)
``` sh
$ curl --user rpcuser:rpcpass \
--cacert "${LBCDDIR}/rpc.cert" \
--data-binary '{"jsonrpc": "1.0", "id": "curltest", "method": "getblockcount", "params": []}' \
-H 'content-type: text/plain;' \
https://127.0.0.1:9245/
## Issue Tracker
# or disable the certificate verification
$ curl --user rpcuser:rpcpass \
--insecure \
--data-binary '{"jsonrpc": "1.0", "id": "curltest", "method": "getblockcount", "params": []}' \
-H 'content-type: text/plain;' \
https://127.0.0.1:9245/
```
The [integrated github issue tracker](https://github.com/btcsuite/btcd/issues)
is used for this project.
``` json
{"jsonrpc":"1.0","result":1200062,"error":null,"id":"curltest"}
```
## Documentation
### Working without TLS
The documentation is a work-in-progress. It is located in the [docs](https://github.com/btcsuite/btcd/tree/master/docs) folder.
TLS can be disabled using the `--notls` option:
## Release Verification
``` sh
$ ./lbcd --notls
```
``` sh
$ ./lbcctl --notls getblockcount
1200062
```
``` sh
$ curl --user rpcuser:rpcpass \
--data-binary '{"jsonrpc": "1.0", "id": "curltest", "method": "getblockcount", "params": []}' \
-H 'content-type: text/plain;' \
http://127.0.0.1:9245/
```
``` json
{"jsonrpc":"1.0","result":1200062,"error":null,"id":"curltest"}
```
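The same request can be issued programmatically. The following is a minimal Go sketch using only the standard library; it assumes the `--notls` setup above and reuses the placeholder credentials `rpcuser`/`rpcpass` from the examples:

``` go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Same JSON-RPC payload as the curl examples above.
	payload := []byte(`{"jsonrpc": "1.0", "id": "gotest", "method": "getblockcount", "params": []}`)

	req, err := http.NewRequest("POST", "http://127.0.0.1:9245/", bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	// Placeholder credentials; use the values from your lbcd.conf.
	req.SetBasicAuth("rpcuser", "rpcpass")
	req.Header.Set("Content-Type", "text/plain;")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	// Prints the raw JSON-RPC response, e.g. {"jsonrpc":"1.0","result":...,"error":null,"id":"gotest"}
	fmt.Println(string(body))
}
```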
## Using Snapshots (optional)
[Snapshots](https://snapshots.lbry.com/blockchain/) are created bi-weekly to help new users catch up to the current block height.
The snapshots are archived and compressed in [zstd](https://facebook.github.io/zstd/) format for its compression ratio and speed.
Download the snapshot, and uncompress it:
``` sh
time curl -O https://snapshots.lbry.com/blockchain/lbcd_snapshot_1199527_v0.22.105_2022-07-27.tar.zst
zstd -d --stdout lbcd_snapshot_1199527_v0.22.105_2022-07-27.tar.zst | tar xf - -C "${LBCDDIR}"
```
If preferred, a user can download and uncompress the snapshot on the fly;
by the time the download finishes, the snapshot should already be almost fully uncompressed:
``` sh
mkdir -p "${LBCDDIR}"
time curl https://snapshots.lbry.com/blockchain/lbcd_snapshot_1199527_v0.22.105_2022-07-27.tar.zst | zstd -d --stdout | tar xf - -C "${LBCDDIR}"
# % Total % Received % Xferd Average Speed Time Time Time Current
# Dload Upload Total Spent Left Speed
# 100 64.9G 100 64.9G 0 0 37.0M 0 0:29:49 0:29:49 --:--:-- 33.0M
#
# real 29m49.962s
# user 6m53.710s
# sys 8m56.545s
```
## Working with RPCs
Use `lbcctl -l` to list available RPCs:
``` sh
$ lbcctl -l
Chain Server Commands:
addnode "addr" "add|remove|onetry"
createrawtransaction [{"txid":"value","vout":n},...] {"address":amount,...} (locktime)
debuglevel "levelspec"
decoderawtransaction "hextx"
decodescript "hexscript"
deriveaddresses "descriptor" ({"value":value})
fundrawtransaction "hextx" {"changeaddress":changeaddress,"changeposition":changeposition,"changetype":changetype,"includewatching":includewatching,"lockunspents":lockunspents,"feerate":feerate,"subtractfeefromoutputs":[subtractfeefromoutput,...],"replaceable":replaceable,"conftarget":conftarget,"estimatemode":estimatemode} (iswitness)
generate numblocks
[skipped]
Wallet Server Commands (--wallet):
addmultisigaddress nrequired ["key",...] ("account")
addwitnessaddress "address"
backupwallet "destination"
createmultisig nrequired ["key",...]
createnewaccount "account"
createwallet "walletname" (disableprivatekeys=false blank=false passphrase="" avoidreuse=false)
dumpprivkey "address"
dumpwallet "filename"
encryptwallet "passphrase"
estimatefee numblocks
estimatepriority numblocks
estimatesmartfee conftarget (estimatemode="CONSERVATIVE")
getaccount "address"
getaccountaddress "account"
getaddressesbyaccount "account"
[skipped]
```
Use `lbcctl help rpcname` to show the RPC spec:
``` sh
$ lbcctl help getblock
getblock "hash" (verbosity=1)
Returns information about a block given its hash.
Arguments:
1. hash (string, required) The hash of the block
2. verbosity (numeric, optional, default=1) Specifies whether the block data should be returned as a hex-encoded string (0), as parsed data with a slice of TXIDs (1), or as parsed data with parsed transaction data (2)
Result (verbosity=0):
"value" (string) Hex-encoded bytes of the serialized block
Result (verbosity=1):
{
"getblockverboseresultbase": { (object)
"hash": "value", (string) The hash of the block (same as provided)
"confirmations": n, (numeric) The number of confirmations
"strippedsize": n, (numeric) The size of the block without witness data
"size": n, (numeric) The size of the block
"weight": n, (numeric) The weight of the block
"height": n, (numeric) The height of the block in the block chain
"version": n, (numeric) The block version
"versionHex": "value", (string) The block version in hexadecimal
"merkleroot": "value", (string) Root hash of the merkle tree
"time": n, (numeric) The block time in seconds since 1 Jan 1970 GMT
"mediantime": n, (numeric) The median block time in seconds since 1 Jan 1970 GMT
"nonce": n, (numeric) The block nonce
"bits": "value", (string) The bits which represent the block difficulty
"difficulty": n.nnn, (numeric) The proof-of-work difficulty as a multiple of the minimum difficulty
"chainwork": "value", (string) Expected number of hashes required to produce the chain up to this block (in hex)
"previousblockhash": "value", (string) The hash of the previous block
"nextblockhash": "value", (string) The hash of the next block (only if there is one)
"nameclaimroot": "value", (string) Root hash of the claim trie
"nTx": n, (numeric) The number of transactions (aka, count of TX)
},
"tx": ["value",...], (array of string) The transaction hashes (only when verbosity=1)
}
```
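The same RPCs can be called from Go through the `rpcclient` package in this repository. Below is a minimal sketch rather than a drop-in program: the certificate path, host, and credentials are placeholders for whatever your `lbcd.conf` and `${LBCDDIR}` actually contain, and the field names follow the upstream btcd `rpcclient` API that lbcd inherits:

``` go
package main

import (
	"log"
	"os"
	"path/filepath"

	"github.com/lbryio/lbcd/rpcclient"
)

func main() {
	// Certificate generated by lbcd under ${LBCDDIR}; adjust the path as needed.
	certs, err := os.ReadFile(filepath.Join(os.Getenv("HOME"), ".lbcd", "rpc.cert"))
	if err != nil {
		log.Fatal(err)
	}

	client, err := rpcclient.New(&rpcclient.ConnConfig{
		Host:         "127.0.0.1:9245",
		User:         "rpcuser", // placeholder credentials from lbcd.conf
		Pass:         "rpcpass",
		HTTPPostMode: true, // plain HTTP POST requests instead of websockets
		Certificates: certs,
	}, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Shutdown()

	// Roughly equivalent to `lbcctl getbestblockhash` followed by
	// `lbcctl getblock <hash> 1`.
	hash, err := client.GetBestBlockHash()
	if err != nil {
		log.Fatal(err)
	}
	block, err := client.GetBlockVerbose(hash)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("block %s at height %d contains %d transactions",
		block.Hash, block.Height, len(block.Tx))
}
```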
## **lbcd** & **lbcwallet**
*Wallet*-related functionalities and RPCs are provided by a separate program: [**lbcwallet**](https://github.com/lbryio/lbcwallet).
Once set up, lbcwallet can serve wallet-related RPCs as well as proxy lbcd RPCs to an associated lbcd node.
It is sufficient for a user to connect to just **lbcwallet** instead of both; a minimal Go client sketch follows the diagram below.
``` mermaid
sequenceDiagram
actor C as lbcctl
participant W as lbcwallet (port: 9244)
participant D as lbcd (port: 9245)
rect rgb(200,200,200)
Note over C,D: lbcctl getblockcount
C ->>+ D: getblockcount
D -->>- C: response
end
rect rgb(200,200,200)
Note over C,W: lbcctl --wallet balance
C ->>+ W: getbalance
W -->>- C: response
end
rect rgb(200,200,200)
Note over C,D: lbcctl --wallet getblockcount (lbcd RPC service proxied by lbcwallet)
C ->>+ W: getblockcount
W ->>+ D: getblockcount
D -->>- W: response
W -->>- C: response
end
```
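As a complement to the diagram above, the sketch below points the same `rpcclient` package at **lbcwallet** (port 9244) instead of **lbcd**: the wallet RPC is answered by lbcwallet itself, while the node RPC is proxied to the associated lbcd. The certificate path, credentials, and account name are assumptions for illustration only:

``` go
package main

import (
	"log"
	"os"
	"path/filepath"

	"github.com/lbryio/lbcd/rpcclient"
)

func main() {
	// lbcwallet generates its own TLS certificate; this path is an assumption.
	certs, err := os.ReadFile(filepath.Join(os.Getenv("HOME"), ".lbcwallet", "rpc.cert"))
	if err != nil {
		log.Fatal(err)
	}

	// Connect to lbcwallet (9244) rather than lbcd (9245).
	client, err := rpcclient.New(&rpcclient.ConnConfig{
		Host:         "127.0.0.1:9244",
		User:         "rpcuser", // placeholder credentials
		Pass:         "rpcpass",
		HTTPPostMode: true,
		Certificates: certs,
	}, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Shutdown()

	// Wallet RPC served by lbcwallet directly (account name is illustrative).
	balance, err := client.GetBalance("default")
	if err != nil {
		log.Fatal(err)
	}

	// Node RPC proxied through lbcwallet to the associated lbcd.
	count, err := client.GetBlockCount()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("balance: %v, block count: %d", balance, count)
}
```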
While **lbcd** can run standalone as a full node, **lbcwallet** requires an associated **lbcd** instance for scanning and syncing block data.
``` mermaid
sequenceDiagram
participant W as lbcwallet (RPC port: 9244)
participant D as lbcd (RPC port: 9245, P2P port: 9246)
participant D2 as other lbcd node(s) (P2P port: 9246)
rect rgb(200,200,200)
Note over W,D: Asynchronous websocket notifications
W ->> D: subscribe to notifications
D -->> W: notification
D -->> W: notification
end
rect rgb(200,200,200)
Note over W,D: lbcd RPCs
W ->>+ D: getblockheader
D ->>- W: response
end
rect rgb(200,200,200)
Note over D,D2: P2P messages over port 9246
D -->> D2: P2P message
D2 -->> D: P2P message
end
```
## Data integrity
**lbcd** is not immune to data loss. It expects a clean shutdown via SIGINT or
SIGTERM. SIGKILL, immediate VM kills, and sudden power loss can cause data
corruption, thus requiring chain resynchronization for recovery.
## Security
We take security seriously. Please contact [security](mailto:security@lbry.com) regarding any security issues.
Our PGP key is [here](https://lbry.com/faq/pgp-key) if you need it.
We maintain a mailing list for notifications of upgrades, security issues,
and soft/hard forks. To join, visit the [fork list](https://lbry.com/forklist).
## Contributing
Contributions to this project are welcome, encouraged, and compensated.
The [integrated github issue tracker](https://github.com/lbryio/lbcd/issues)
is used for this project. All pull requests will be considered.
<!-- ## Release Verification
Please see our [documentation on the current build/verification
process](https://github.com/btcsuite/btcd/tree/master/release) for all our
process](https://github.com/lbryio/lbcd/tree/master/release) for all our
releases for information on how to verify the integrity of published releases
using our reproducible build system.
-->
## License
btcd is licensed under the [copyfree](http://copyfree.org) ISC License.
lbcd is licensed under the [copyfree](http://copyfree.org) ISC License.

View file

@ -23,14 +23,14 @@ import (
"sync/atomic"
"time"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/wire"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/wire"
)
// AddrManager provides a concurrency safe address manager for caching potential
// peers on the bitcoin network.
type AddrManager struct {
mtx sync.Mutex
mtx sync.RWMutex
peersFile string
lookupFunc func(string) ([]net.IP, error)
rand *rand.Rand
@ -45,7 +45,7 @@ type AddrManager struct {
nTried int
nNew int
lamtx sync.Mutex
localAddresses map[string]*localAddress
localAddresses map[string]*LocalAddress
version int
}
@ -69,9 +69,9 @@ type serializedAddrManager struct {
TriedBuckets [triedBucketCount][]string
}
type localAddress struct {
na *wire.NetAddress
score AddressPriority
type LocalAddress struct {
NA *wire.NetAddress
Score AddressPriority
}
// AddressPriority type is used to describe the hierarchy of local address
@ -176,9 +176,9 @@ func (a *AddrManager) updateAddress(netAddr, srcAddr *wire.NetAddress) {
// TODO: only update addresses periodically.
// Update the last seen time and services.
// note that to prevent causing excess garbage on getaddr
// messages the netaddresses in addrmaanger are *immutable*,
// messages the netaddresses in addrmanager are *immutable*,
// if we need to change them then we replace the pointer with a
// new copy so that we don't have to copy every na for getaddr.
// new copy so that we don't have to copy every NA for getaddr.
if netAddr.Timestamp.After(ka.na.Timestamp) ||
(ka.na.Services&netAddr.Services) !=
netAddr.Services {
@ -186,7 +186,9 @@ func (a *AddrManager) updateAddress(netAddr, srcAddr *wire.NetAddress) {
naCopy := *ka.na
naCopy.Timestamp = netAddr.Timestamp
naCopy.AddService(netAddr.Services)
ka.mtx.Lock()
ka.na = &naCopy
ka.mtx.Unlock()
}
// If already in tried, we have nothing to do here.
@ -645,8 +647,8 @@ func (a *AddrManager) numAddresses() int {
// NumAddresses returns the number of addresses known to the address manager.
func (a *AddrManager) NumAddresses() int {
a.mtx.Lock()
defer a.mtx.Unlock()
a.mtx.RLock()
defer a.mtx.RUnlock()
return a.numAddresses()
}
@ -654,8 +656,8 @@ func (a *AddrManager) NumAddresses() int {
// NeedMoreAddresses returns whether or not the address manager needs more
// addresses.
func (a *AddrManager) NeedMoreAddresses() bool {
a.mtx.Lock()
defer a.mtx.Unlock()
a.mtx.RLock()
defer a.mtx.RUnlock()
return a.numAddresses() < needAddressThreshold
}
@ -685,8 +687,8 @@ func (a *AddrManager) AddressCache() []*wire.NetAddress {
// getAddresses returns all of the addresses currently found within the
// manager's address cache.
func (a *AddrManager) getAddresses() []*wire.NetAddress {
a.mtx.Lock()
defer a.mtx.Unlock()
a.mtx.RLock()
defer a.mtx.RUnlock()
addrIndexLen := len(a.addrIndex)
if addrIndexLen == 0 {
@ -753,7 +755,7 @@ func (a *AddrManager) HostToNetAddress(host string, port uint16, services wire.S
// the relevant .onion address.
func ipString(na *wire.NetAddress) string {
if IsOnionCatTor(na) {
// We know now that na.IP is long enough.
// We know now that NA.IP is long enough.
base32 := base32.StdEncoding.EncodeToString(na.IP[6:])
return strings.ToLower(base32) + ".onion"
}
@ -857,8 +859,11 @@ func (a *AddrManager) Attempt(addr *wire.NetAddress) {
return
}
// set last tried time to now
now := time.Now()
ka.mtx.Lock()
ka.attempts++
ka.lastattempt = time.Now()
ka.lastattempt = now
ka.mtx.Unlock()
}
// Connected Marks the given address as currently connected and working at the
@ -877,10 +882,12 @@ func (a *AddrManager) Connected(addr *wire.NetAddress) {
// so.
now := time.Now()
if now.After(ka.na.Timestamp.Add(time.Minute * 20)) {
// ka.na is immutable, so replace it.
// ka.NA is immutable, so replace it.
naCopy := *ka.na
naCopy.Timestamp = time.Now()
ka.mtx.Lock()
ka.na = &naCopy
ka.mtx.Unlock()
}
}
@ -899,11 +906,13 @@ func (a *AddrManager) Good(addr *wire.NetAddress) {
// ka.Timestamp is not updated here to avoid leaking information
// about currently connected peers.
now := time.Now()
ka.mtx.Lock()
ka.lastsuccess = now
ka.lastattempt = now
ka.attempts = 0
ka.mtx.Unlock() // tried and refs synchronized via a.mtx
// move to tried set, optionally evicting other addresses if neeed.
// move to tried set, optionally evicting other addresses if needed.
if ka.tried {
return
}
@ -985,14 +994,16 @@ func (a *AddrManager) SetServices(addr *wire.NetAddress, services wire.ServiceFl
// Update the services if needed.
if ka.na.Services != services {
// ka.na is immutable, so replace it.
// ka.NA is immutable, so replace it.
naCopy := *ka.na
naCopy.Services = services
ka.mtx.Lock()
ka.na = &naCopy
ka.mtx.Unlock()
}
}
// AddLocalAddress adds na to the list of known local addresses to advertise
// AddLocalAddress adds NA to the list of known local addresses to advertise
// with the given priority.
func (a *AddrManager) AddLocalAddress(na *wire.NetAddress, priority AddressPriority) error {
if !IsRoutable(na) {
@ -1004,13 +1015,13 @@ func (a *AddrManager) AddLocalAddress(na *wire.NetAddress, priority AddressPrior
key := NetAddressKey(na)
la, ok := a.localAddresses[key]
if !ok || la.score < priority {
if !ok || la.Score < priority {
if ok {
la.score = priority + 1
la.Score = priority + 1
} else {
a.localAddresses[key] = &localAddress{
na: na,
score: priority,
a.localAddresses[key] = &LocalAddress{
NA: na,
Score: priority,
}
}
}
@ -1106,12 +1117,12 @@ func (a *AddrManager) GetBestLocalAddress(remoteAddr *wire.NetAddress) *wire.Net
var bestscore AddressPriority
var bestAddress *wire.NetAddress
for _, la := range a.localAddresses {
reach := getReachabilityFrom(la.na, remoteAddr)
reach := getReachabilityFrom(la.NA, remoteAddr)
if reach > bestreach ||
(reach == bestreach && la.score > bestscore) {
(reach == bestreach && la.Score > bestscore) {
bestreach = reach
bestscore = la.score
bestAddress = la.na
bestscore = la.Score
bestAddress = la.NA
}
}
if bestAddress != nil {
@ -1135,6 +1146,15 @@ func (a *AddrManager) GetBestLocalAddress(remoteAddr *wire.NetAddress) *wire.Net
return bestAddress
}
// LocalAddresses returns the list of local addresses for our node.
func (a *AddrManager) LocalAddresses() []*LocalAddress {
var addrs []*LocalAddress
for _, addr := range a.localAddresses {
addrs = append(addrs, addr)
}
return addrs
}
// New returns a new bitcoin address manager.
// Use Start to begin processing asynchronous address updates.
func New(dataDir string, lookupFunc func(string) ([]net.IP, error)) *AddrManager {
@ -1143,7 +1163,7 @@ func New(dataDir string, lookupFunc func(string) ([]net.IP, error)) *AddrManager
lookupFunc: lookupFunc,
rand: rand.New(rand.NewSource(time.Now().UnixNano())),
quit: make(chan struct{}),
localAddresses: make(map[string]*localAddress),
localAddresses: make(map[string]*LocalAddress),
version: serialisationVersion,
}
am.reset()

View file

@ -7,7 +7,7 @@ import (
"os"
"testing"
"github.com/btcsuite/btcd/wire"
"github.com/lbryio/lbcd/wire"
)
// randAddr generates a *wire.NetAddress backed by a random IPv4/IPv6 address.

View file

@ -12,8 +12,8 @@ import (
"testing"
"time"
"github.com/btcsuite/btcd/addrmgr"
"github.com/btcsuite/btcd/wire"
"github.com/lbryio/lbcd/addrmgr"
"github.com/lbryio/lbcd/wire"
)
// naTest is used to describe a test to be performed against the NetAddressKey
@ -34,61 +34,61 @@ var someIP = "173.194.115.66"
func addNaTests() {
// IPv4
// Localhost
addNaTest("127.0.0.1", 8333, "127.0.0.1:8333")
addNaTest("127.0.0.1", 8334, "127.0.0.1:8334")
addNaTest("127.0.0.1", 9244, "127.0.0.1:9244")
addNaTest("127.0.0.1", 9245, "127.0.0.1:9245")
// Class A
addNaTest("1.0.0.1", 8333, "1.0.0.1:8333")
addNaTest("2.2.2.2", 8334, "2.2.2.2:8334")
addNaTest("27.253.252.251", 8335, "27.253.252.251:8335")
addNaTest("123.3.2.1", 8336, "123.3.2.1:8336")
addNaTest("1.0.0.1", 9244, "1.0.0.1:9244")
addNaTest("2.2.2.2", 9245, "2.2.2.2:9245")
addNaTest("27.253.252.251", 9246, "27.253.252.251:9246")
addNaTest("123.3.2.1", 9247, "123.3.2.1:9247")
// Private Class A
addNaTest("10.0.0.1", 8333, "10.0.0.1:8333")
addNaTest("10.1.1.1", 8334, "10.1.1.1:8334")
addNaTest("10.2.2.2", 8335, "10.2.2.2:8335")
addNaTest("10.10.10.10", 8336, "10.10.10.10:8336")
addNaTest("10.0.0.1", 9244, "10.0.0.1:9244")
addNaTest("10.1.1.1", 9245, "10.1.1.1:9245")
addNaTest("10.2.2.2", 9246, "10.2.2.2:9246")
addNaTest("10.10.10.10", 9247, "10.10.10.10:9247")
// Class B
addNaTest("128.0.0.1", 8333, "128.0.0.1:8333")
addNaTest("129.1.1.1", 8334, "129.1.1.1:8334")
addNaTest("180.2.2.2", 8335, "180.2.2.2:8335")
addNaTest("191.10.10.10", 8336, "191.10.10.10:8336")
addNaTest("128.0.0.1", 9244, "128.0.0.1:9244")
addNaTest("129.1.1.1", 9245, "129.1.1.1:9245")
addNaTest("180.2.2.2", 9246, "180.2.2.2:9246")
addNaTest("191.10.10.10", 9247, "191.10.10.10:9247")
// Private Class B
addNaTest("172.16.0.1", 8333, "172.16.0.1:8333")
addNaTest("172.16.1.1", 8334, "172.16.1.1:8334")
addNaTest("172.16.2.2", 8335, "172.16.2.2:8335")
addNaTest("172.16.172.172", 8336, "172.16.172.172:8336")
addNaTest("172.16.0.1", 9244, "172.16.0.1:9244")
addNaTest("172.16.1.1", 9245, "172.16.1.1:9245")
addNaTest("172.16.2.2", 9246, "172.16.2.2:9246")
addNaTest("172.16.172.172", 9247, "172.16.172.172:9247")
// Class C
addNaTest("193.0.0.1", 8333, "193.0.0.1:8333")
addNaTest("200.1.1.1", 8334, "200.1.1.1:8334")
addNaTest("205.2.2.2", 8335, "205.2.2.2:8335")
addNaTest("223.10.10.10", 8336, "223.10.10.10:8336")
addNaTest("193.0.0.1", 9244, "193.0.0.1:9244")
addNaTest("200.1.1.1", 9245, "200.1.1.1:9245")
addNaTest("205.2.2.2", 9246, "205.2.2.2:9246")
addNaTest("223.10.10.10", 9247, "223.10.10.10:9247")
// Private Class C
addNaTest("192.168.0.1", 8333, "192.168.0.1:8333")
addNaTest("192.168.1.1", 8334, "192.168.1.1:8334")
addNaTest("192.168.2.2", 8335, "192.168.2.2:8335")
addNaTest("192.168.192.192", 8336, "192.168.192.192:8336")
addNaTest("192.168.0.1", 9244, "192.168.0.1:9244")
addNaTest("192.168.1.1", 9245, "192.168.1.1:9245")
addNaTest("192.168.2.2", 9246, "192.168.2.2:9246")
addNaTest("192.168.192.192", 9247, "192.168.192.192:9247")
// IPv6
// Localhost
addNaTest("::1", 8333, "[::1]:8333")
addNaTest("fe80::1", 8334, "[fe80::1]:8334")
addNaTest("::1", 9244, "[::1]:9244")
addNaTest("fe80::1", 9245, "[fe80::1]:9245")
// Link-local
addNaTest("fe80::1:1", 8333, "[fe80::1:1]:8333")
addNaTest("fe91::2:2", 8334, "[fe91::2:2]:8334")
addNaTest("fea2::3:3", 8335, "[fea2::3:3]:8335")
addNaTest("feb3::4:4", 8336, "[feb3::4:4]:8336")
addNaTest("fe80::1:1", 9244, "[fe80::1:1]:9244")
addNaTest("fe91::2:2", 9245, "[fe91::2:2]:9245")
addNaTest("fea2::3:3", 9246, "[fea2::3:3]:9246")
addNaTest("feb3::4:4", 9247, "[feb3::4:4]:9247")
// Site-local
addNaTest("fec0::1:1", 8333, "[fec0::1:1]:8333")
addNaTest("fed1::2:2", 8334, "[fed1::2:2]:8334")
addNaTest("fee2::3:3", 8335, "[fee2::3:3]:8335")
addNaTest("fef3::4:4", 8336, "[fef3::4:4]:8336")
addNaTest("fec0::1:1", 9244, "[fec0::1:1]:9244")
addNaTest("fed1::2:2", 9245, "[fed1::2:2]:9245")
addNaTest("fee2::3:3", 9246, "[fee2::3:3]:9246")
addNaTest("fef3::4:4", 9247, "[fef3::4:4]:9247")
}
func addNaTest(ip string, port uint16, want string) {
@ -119,7 +119,7 @@ func TestAddAddressByIP(t *testing.T) {
err error
}{
{
someIP + ":8333",
someIP + ":9244",
nil,
},
{
@ -127,7 +127,7 @@ func TestAddAddressByIP(t *testing.T) {
addrErr,
},
{
someIP[:12] + ":8333",
someIP[:12] + ":9244",
fmtErr,
},
{
@ -212,7 +212,7 @@ func TestAttempt(t *testing.T) {
n := addrmgr.New("testattempt", lookupFunc)
// Add a new address and get it
err := n.AddAddressByIP(someIP + ":8333")
err := n.AddAddressByIP(someIP + ":9244")
if err != nil {
t.Fatalf("Adding address failed: %v", err)
}
@ -234,7 +234,7 @@ func TestConnected(t *testing.T) {
n := addrmgr.New("testconnected", lookupFunc)
// Add a new address and get it
err := n.AddAddressByIP(someIP + ":8333")
err := n.AddAddressByIP(someIP + ":9244")
if err != nil {
t.Fatalf("Adding address failed: %v", err)
}
@ -261,14 +261,14 @@ func TestNeedMoreAddresses(t *testing.T) {
var err error
for i := 0; i < addrsToAdd; i++ {
s := fmt.Sprintf("%d.%d.173.147:8333", i/128+60, i%128+60)
s := fmt.Sprintf("%d.%d.173.147:9244", i/128+60, i%128+60)
addrs[i], err = n.DeserializeNetAddress(s, wire.SFNodeNetwork)
if err != nil {
t.Errorf("Failed to turn %s into an address: %v", s, err)
}
}
srcAddr := wire.NewNetAddressIPPort(net.IPv4(173, 144, 173, 111), 8333, 0)
srcAddr := wire.NewNetAddressIPPort(net.IPv4(173, 144, 173, 111), 9244, 0)
n.AddAddresses(addrs, srcAddr)
numAddrs := n.NumAddresses()
@ -289,14 +289,14 @@ func TestGood(t *testing.T) {
var err error
for i := 0; i < addrsToAdd; i++ {
s := fmt.Sprintf("%d.173.147.%d:8333", i/64+60, i%64+60)
s := fmt.Sprintf("%d.173.147.%d:9244", i/64+60, i%64+60)
addrs[i], err = n.DeserializeNetAddress(s, wire.SFNodeNetwork)
if err != nil {
t.Errorf("Failed to turn %s into an address: %v", s, err)
}
}
srcAddr := wire.NewNetAddressIPPort(net.IPv4(173, 144, 173, 111), 8333, 0)
srcAddr := wire.NewNetAddressIPPort(net.IPv4(173, 144, 173, 111), 9244, 0)
n.AddAddresses(addrs, srcAddr)
for _, addr := range addrs {
@ -323,7 +323,7 @@ func TestGetAddress(t *testing.T) {
}
// Add a new address and get it
err := n.AddAddressByIP(someIP + ":8333")
err := n.AddAddressByIP(someIP + ":9244")
if err != nil {
t.Fatalf("Adding address failed: %v", err)
}

View file

@ -5,7 +5,7 @@
/*
Package addrmgr implements concurrency safe Bitcoin address manager.
Address Manager Overview
# Address Manager Overview
In order to maintain the peer-to-peer Bitcoin network, there needs to be a source
of addresses to connect to as nodes come and go. The Bitcoin protocol provides

View file

@ -7,7 +7,7 @@ package addrmgr
import (
"time"
"github.com/btcsuite/btcd/wire"
"github.com/lbryio/lbcd/wire"
)
func TstKnownAddressIsBad(ka *KnownAddress) bool {

View file

@ -5,14 +5,16 @@
package addrmgr
import (
"sync"
"time"
"github.com/btcsuite/btcd/wire"
"github.com/lbryio/lbcd/wire"
)
// KnownAddress tracks information about a known network address that is used
// to determine how viable an address is.
type KnownAddress struct {
mtx sync.RWMutex // na and lastattempt
na *wire.NetAddress
srcAddr *wire.NetAddress
attempts int
@ -25,19 +27,28 @@ type KnownAddress struct {
// NetAddress returns the underlying wire.NetAddress associated with the
// known address.
func (ka *KnownAddress) NetAddress() *wire.NetAddress {
ka.mtx.RLock()
defer ka.mtx.RUnlock()
return ka.na
}
// LastAttempt returns the last time the known address was attempted.
func (ka *KnownAddress) LastAttempt() time.Time {
ka.mtx.RLock()
defer ka.mtx.RUnlock()
return ka.lastattempt
}
// Services returns the services supported by the peer with the known address.
func (ka *KnownAddress) Services() wire.ServiceFlag {
ka.mtx.RLock()
defer ka.mtx.RUnlock()
return ka.na.Services
}
// The unexported methods, chance and isBad, are used from within AddrManager
// where KnownAddress field access is synchronized via its own Mutex.
// chance returns the selection probability for a known address. The priority
// depends upon how recently the address has been seen, how recently it was last
// attempted and how often attempts to connect to it have failed.

View file

@ -9,8 +9,8 @@ import (
"testing"
"time"
"github.com/btcsuite/btcd/addrmgr"
"github.com/btcsuite/btcd/wire"
"github.com/lbryio/lbcd/addrmgr"
"github.com/lbryio/lbcd/wire"
)
func TestChance(t *testing.T) {

View file

@ -8,7 +8,7 @@ import (
"fmt"
"net"
"github.com/btcsuite/btcd/wire"
"github.com/lbryio/lbcd/wire"
)
var (

View file

@ -8,8 +8,8 @@ import (
"net"
"testing"
"github.com/btcsuite/btcd/addrmgr"
"github.com/btcsuite/btcd/wire"
"github.com/lbryio/lbcd/addrmgr"
"github.com/lbryio/lbcd/wire"
)
// TestIPTypes ensures the various functions which determine the type of an IP
@ -39,7 +39,7 @@ func TestIPTypes(t *testing.T) {
rfc4193, rfc4380, rfc4843, rfc4862, rfc5737, rfc6052, rfc6145, rfc6598,
local, valid, routable bool) ipTest {
nip := net.ParseIP(ip)
na := *wire.NewNetAddressIPPort(nip, 8333, wire.SFNodeNetwork)
na := *wire.NewNetAddressIPPort(nip, 9246, wire.SFNodeNetwork)
test := ipTest{na, rfc1918, rfc2544, rfc3849, rfc3927, rfc3964, rfc4193, rfc4380,
rfc4843, rfc4862, rfc5737, rfc6052, rfc6145, rfc6598, local, valid, routable}
return test
@ -192,7 +192,7 @@ func TestGroupKey(t *testing.T) {
for i, test := range tests {
nip := net.ParseIP(test.ip)
na := *wire.NewNetAddressIPPort(nip, 8333, wire.SFNodeNetwork)
na := *wire.NewNetAddressIPPort(nip, 9246, wire.SFNodeNetwork)
if key := addrmgr.GroupKey(&na); key != test.expected {
t.Errorf("TestGroupKey #%d (%s): unexpected group key "+
"- got '%s', want '%s'", i, test.name,

View file

@ -1,30 +1,9 @@
blockchain
==========
[![Build Status](http://img.shields.io/travis/btcsuite/btcd.svg)](https://travis-ci.org/btcsuite/btcd)
[![ISC License](http://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](http://godoc.org/github.com/btcsuite/btcd/blockchain)
Package blockchain implements bitcoin block handling and chain selection rules.
The test coverage is currently only around 60%, but will be increasing over
time. See `test_coverage.txt` for the gocov coverage report. Alternatively, if
you are running a POSIX OS, you can run the `cov_report.sh` script for a
real-time report. Package blockchain is licensed under the liberal ISC license.
There is an associated blog post about the release of this package
[here](https://blog.conformal.com/btcchain-the-bitcoin-chain-package-from-bctd/).
This package has intentionally been designed so it can be used as a standalone
package for any projects needing to handle processing of blocks into the bitcoin
block chain.
## Installation and Updating
```bash
$ go get -u github.com/btcsuite/btcd/blockchain
```
## Bitcoin Chain Processing Overview
### Bitcoin Chain Processing Overview
Before a block is allowed into the block chain, it must go through an intensive
series of validation rules. The following list serves as a general outline of
@ -58,46 +37,3 @@ is by no means exhaustive:
- Run the transaction scripts to verify the spender is allowed to spend the
coins
- Insert the block into the block database
## Examples
* [ProcessBlock Example](http://godoc.org/github.com/btcsuite/btcd/blockchain#example-BlockChain-ProcessBlock)
Demonstrates how to create a new chain instance and use ProcessBlock to
attempt to add a block to the chain. This example intentionally
attempts to insert a duplicate genesis block to illustrate how an invalid
block is handled.
* [CompactToBig Example](http://godoc.org/github.com/btcsuite/btcd/blockchain#example-CompactToBig)
Demonstrates how to convert the compact "bits" in a block header which
represent the target difficulty to a big integer and display it using the
typical hex notation.
* [BigToCompact Example](http://godoc.org/github.com/btcsuite/btcd/blockchain#example-BigToCompact)
Demonstrates how to convert a target difficulty into the
compact "bits" in a block header which represent that target difficulty.
## GPG Verification Key
All official release tags are signed by Conformal so users can ensure the code
has not been tampered with and is coming from the btcsuite developers. To
verify the signature perform the following:
- Download the public key from the Conformal website at
https://opensource.conformal.com/GIT-GPG-KEY-conformal.txt
- Import the public key into your GPG keyring:
```bash
gpg --import GIT-GPG-KEY-conformal.txt
```
- Verify the release tag with the following command where `TAG_NAME` is a
placeholder for the specific tag:
```bash
git tag -v TAG_NAME
```
## License
Package blockchain is licensed under the [copyfree](http://copyfree.org) ISC
License.

View file

@ -7,8 +7,8 @@ package blockchain
import (
"fmt"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/database"
btcutil "github.com/lbryio/lbcutil"
)
// maybeAcceptBlock potentially accepts a block into the block chain and, if
@ -84,9 +84,11 @@ func (b *BlockChain) maybeAcceptBlock(block *btcutil.Block, flags BehaviorFlags)
// Notify the caller that the new block was accepted into the block
// chain. The caller would typically want to react by relaying the
// inventory to other peers.
b.notificationSendLock.Lock()
defer b.notificationSendLock.Unlock()
b.chainLock.Unlock()
defer b.chainLock.Lock()
b.sendNotification(NTBlockAccepted, block)
b.chainLock.Lock()
return isMainChain, nil
}

View file

@ -6,14 +6,12 @@ package blockchain
import (
"testing"
"github.com/btcsuite/btcutil"
)
// BenchmarkIsCoinBase performs a simple benchmark against the IsCoinBase
// function.
func BenchmarkIsCoinBase(b *testing.B) {
tx, _ := btcutil.NewBlock(&Block100000).Tx(1)
tx, _ := GetBlock100000().Tx(1)
b.ResetTimer()
for i := 0; i < b.N; i++ {
IsCoinBase(tx)
@ -23,9 +21,9 @@ func BenchmarkIsCoinBase(b *testing.B) {
// BenchmarkIsCoinBaseTx performs a simple benchmark against the IsCoinBaseTx
// function.
func BenchmarkIsCoinBaseTx(b *testing.B) {
tx := Block100000.Transactions[1]
tx, _ := GetBlock100000().Tx(1)
b.ResetTimer()
for i := 0; i < b.N; i++ {
IsCoinBaseTx(tx)
IsCoinBaseTx(tx.MsgTx())
}
}

View file

@ -10,10 +10,10 @@ import (
"sync"
"time"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcd/wire"
"github.com/lbryio/lbcd/chaincfg"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/database"
"github.com/lbryio/lbcd/wire"
)
// blockStatus is a bit field representing the validation state of the block.
@ -93,6 +93,7 @@ type blockNode struct {
nonce uint32
timestamp int64
merkleRoot chainhash.Hash
claimTrie chainhash.Hash
// status is a bitfield representing the validation state of the block. The
// status field, unlike the other fields, may be written to and so should
@ -114,6 +115,7 @@ func initBlockNode(node *blockNode, blockHeader *wire.BlockHeader, parent *block
nonce: blockHeader.Nonce,
timestamp: blockHeader.Timestamp.Unix(),
merkleRoot: blockHeader.MerkleRoot,
claimTrie: blockHeader.ClaimTrie,
}
if parent != nil {
node.parent = parent
@ -144,6 +146,7 @@ func (node *blockNode) Header() wire.BlockHeader {
Version: node.version,
PrevBlock: *prevHash,
MerkleRoot: node.merkleRoot,
ClaimTrie: node.claimTrie,
Timestamp: time.Unix(node.timestamp, 0),
Bits: node.bits,
Nonce: node.nonce,

View file

@ -8,15 +8,18 @@ package blockchain
import (
"container/list"
"fmt"
"math/big"
"sync"
"time"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/chaincfg"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/database"
"github.com/lbryio/lbcd/txscript"
"github.com/lbryio/lbcd/wire"
btcutil "github.com/lbryio/lbcutil"
"github.com/lbryio/lbcd/claimtrie"
)
const (
@ -34,8 +37,9 @@ const (
// from the block being located.
//
// For example, assume a block chain with a side chain as depicted below:
// genesis -> 1 -> 2 -> ... -> 15 -> 16 -> 17 -> 18
// \-> 16a -> 17a
//
// genesis -> 1 -> 2 -> ... -> 15 -> 16 -> 17 -> 18
// \-> 16a -> 17a
//
// The block locator for block 17a would be the hashes of blocks:
// [17a 16a 15 14 13 12 11 10 9 8 7 6 4 genesis]
@ -114,6 +118,12 @@ type BlockChain struct {
// fields in this struct below this point.
chainLock sync.RWMutex
// notificationSendLock helps us only process one block at a time.
// It's definitely a hack. DCRD has much better structure in this regard.
// Without this you will get an error if you invalidate a block and then generate more right after.
// Taken from https://github.com/gcash/bchd/pull/308
notificationSendLock sync.Mutex
// These fields are related to the memory block index. They both have
// their own locks, however they are often also protected by the chain
// lock to help prevent logic races when blocks are being processed.
@ -174,16 +184,14 @@ type BlockChain struct {
//
// unknownRulesWarned refers to warnings due to unknown rules being
// activated.
//
// unknownVersionsWarned refers to warnings due to unknown versions
// being mined.
unknownRulesWarned bool
unknownVersionsWarned bool
unknownRulesWarned bool
// The notifications field stores a slice of callbacks to be executed on
// certain blockchain events.
notificationsLock sync.RWMutex
notifications []NotificationCallback
claimTrie *claimtrie.ClaimTrie
}
// HaveBlock returns whether or not the chain instance has the block represented
@ -199,6 +207,15 @@ func (b *BlockChain) HaveBlock(hash *chainhash.Hash) (bool, error) {
return exists || b.IsKnownOrphan(hash), nil
}
// GetWarnings returns a bool for whether unknownRules
// has been warned.
func (b *BlockChain) GetWarnings() bool {
b.chainLock.RLock()
defer b.chainLock.RUnlock()
return b.unknownRulesWarned
}
// IsKnownOrphan returns whether the passed hash is currently a known orphan.
// Keep in mind that only a limited number of orphans are held onto for a
// limited amount of time, so this function must not be used as an absolute
@ -472,7 +489,7 @@ func (b *BlockChain) calcSequenceLock(node *blockNode, tx *btcutil.Tx, utxoView
// LockTimeToSequence converts the passed relative locktime to a sequence
// number in accordance to BIP-68.
// See: https://github.com/bitcoin/bips/blob/master/bip-0068.mediawiki
// * (Compatibility)
// - (Compatibility)
func LockTimeToSequence(isSeconds bool, locktime uint32) uint32 {
// If we're expressing the relative lock time in blocks, then the
// corresponding sequence number is simply the desired input age.
@ -574,19 +591,21 @@ func (b *BlockChain) connectBlock(node *blockNode, block *btcutil.Block,
"spent transaction out information")
}
// No warnings about unknown rules or versions until the chain is
// current.
if b.isCurrent() {
// No warnings about unknown rules until the chain is current.
current := b.isCurrent()
if current {
// Warn if any unknown new rules are either about to activate or
// have already been activated.
if err := b.warnUnknownRuleActivations(node); err != nil {
return err
}
}
// Warn if a high enough percentage of the last blocks have
// unexpected versions.
if err := b.warnUnknownVersions(node); err != nil {
return err
// Handle LBRY Claim Scripts
if b.claimTrie != nil {
shouldFlush := current && b.chainParams.Net != wire.TestNet
if err := b.ParseClaimScripts(block, node, view, shouldFlush); err != nil {
return ruleError(ErrBadClaimTrie, err.Error())
}
}
@ -672,9 +691,11 @@ func (b *BlockChain) connectBlock(node *blockNode, block *btcutil.Block,
// Notify the caller that the block was connected to the main chain.
// The caller would typically want to react with actions such as
// updating wallets.
b.notificationSendLock.Lock()
defer b.notificationSendLock.Unlock()
b.chainLock.Unlock()
defer b.chainLock.Lock()
b.sendNotification(NTBlockConnected, block)
b.chainLock.Lock()
return nil
}
@ -772,6 +793,12 @@ func (b *BlockChain) disconnectBlock(node *blockNode, block *btcutil.Block, view
return err
}
if b.claimTrie != nil {
if err = b.claimTrie.ResetHeight(node.parent.height); err != nil {
return err
}
}
// Prune fully spent entries and mark all entries in the view unmodified
// now that the modifications have been committed to the database.
view.commit()
@ -791,9 +818,11 @@ func (b *BlockChain) disconnectBlock(node *blockNode, block *btcutil.Block, view
// Notify the caller that the block was disconnected from the main
// chain. The caller would typically want to react with actions such as
// updating wallets.
b.notificationSendLock.Lock()
defer b.notificationSendLock.Unlock()
b.chainLock.Unlock()
defer b.chainLock.Lock()
b.sendNotification(NTBlockDisconnected, block)
b.chainLock.Lock()
return nil
}
@ -977,6 +1006,7 @@ func (b *BlockChain) reorganizeChain(detachNodes, attachNodes *list.List) error
err = b.checkConnectBlock(n, block, view, nil)
if err != nil {
if _, ok := err.(RuleError); ok {
b.index.UnsetStatusFlags(n, statusValid)
b.index.SetStatusFlags(n, statusValidateFailed)
for de := e.Next(); de != nil; de = de.Next() {
dn := de.Value.(*blockNode)
@ -1078,8 +1108,8 @@ func (b *BlockChain) reorganizeChain(detachNodes, attachNodes *list.List) error
// a reorganization to become the main chain).
//
// The flags modify the behavior of this function as follows:
// - BFFastAdd: Avoids several expensive transaction validation operations.
// This is useful when using checkpoints.
// - BFFastAdd: Avoids several expensive transaction validation operations.
// This is useful when using checkpoints.
//
// This function MUST be called with the chain state lock held (for writes).
func (b *BlockChain) connectBestChain(node *blockNode, block *btcutil.Block, flags BehaviorFlags) (bool, error) {
@ -1114,6 +1144,7 @@ func (b *BlockChain) connectBestChain(node *blockNode, block *btcutil.Block, fla
if err == nil {
b.index.SetStatusFlags(node, statusValid)
} else if _, ok := err.(RuleError); ok {
b.index.UnsetStatusFlags(node, statusValid)
b.index.SetStatusFlags(node, statusValidateFailed)
} else {
return false, err
@ -1148,6 +1179,7 @@ func (b *BlockChain) connectBestChain(node *blockNode, block *btcutil.Block, fla
// that status of the block as invalid and flush the
// index state to disk before returning with the error.
if _, ok := err.(RuleError); ok {
b.index.UnsetStatusFlags(node, statusValid)
b.index.SetStatusFlags(
node, statusValidateFailed,
)
@ -1218,8 +1250,8 @@ func (b *BlockChain) connectBestChain(node *blockNode, block *btcutil.Block, fla
// isCurrent returns whether or not the chain believes it is current. Several
// factors are used to guess, but the key factors that allow the chain to
// believe it is current are:
// - Latest block height is after the latest checkpoint (if enabled)
// - Latest block has a timestamp newer than 24 hours ago
// - Latest block height is after the latest checkpoint (if enabled)
// - Latest block has a timestamp newer than ~6 hours ago (as LBRY block time is one fourth of bitcoin)
//
// This function MUST be called with the chain state lock held (for reads).
func (b *BlockChain) isCurrent() bool {
@ -1230,20 +1262,20 @@ func (b *BlockChain) isCurrent() bool {
return false
}
// Not current if the latest best block has a timestamp before 24 hours
// Not current if the latest best block has a timestamp before 7 hours
// ago.
//
// The chain appears to be current if none of the checks reported
// otherwise.
minus24Hours := b.timeSource.AdjustedTime().Add(-24 * time.Hour).Unix()
return b.bestChain.Tip().timestamp >= minus24Hours
hours := b.timeSource.AdjustedTime().Add(-7 * time.Hour).Unix()
return b.bestChain.Tip().timestamp >= hours
}
// IsCurrent returns whether or not the chain believes it is current. Several
// factors are used to guess, but the key factors that allow the chain to
// believe it is current are:
// - Latest block height is after the latest checkpoint (if enabled)
// - Latest block has a timestamp newer than 24 hours ago
// - Latest block height is after the latest checkpoint (if enabled)
// - Latest block has a timestamp newer than 24 hours ago
//
// This function is safe for concurrent access.
func (b *BlockChain) IsCurrent() bool {
@ -1343,6 +1375,57 @@ func (b *BlockChain) BlockHashByHeight(blockHeight int32) (*chainhash.Hash, erro
return &node.hash, nil
}
// BlockAttributes describes a Block in relation to others on the main chain.
type BlockAttributes struct {
Height int32
Confirmations int32
MedianTime time.Time
ChainWork *big.Int
PrevHash *chainhash.Hash
NextHash *chainhash.Hash
}
// BlockAttributesByHash returns BlockAttributes for the block with the given hash
// relative to other blocks in the main chain. A BestState snapshot describing
// the main chain is also returned for convenience.
//
// This function is safe for concurrent access.
func (b *BlockChain) BlockAttributesByHash(hash *chainhash.Hash, prevHash *chainhash.Hash) (
attrs *BlockAttributes, best *BestState, err error) {
best = b.BestSnapshot()
node := b.index.LookupNode(hash)
if node == nil {
str := fmt.Sprintf("block %s not found", hash)
return nil, best, errNotInMainChain(str)
}
attrs = &BlockAttributes{
Height: node.height,
Confirmations: 1 + best.Height - node.height,
MedianTime: node.CalcPastMedianTime(),
ChainWork: node.workSum,
}
if !b.bestChain.Contains(node) {
attrs.Confirmations = -1
}
// Populate prev block hash if there is one.
if node.height > 0 {
attrs.PrevHash = prevHash
}
// Populate next block hash if there is one.
if node.height < best.Height {
nextHash, err := b.BlockHashByHeight(node.height + 1)
if err != nil {
return nil, best, err
}
attrs.NextHash = nextHash
}
return attrs, best, nil
}
// HeightRange returns a range of block hashes for the given start and end
// heights. It is inclusive of the start height and exclusive of the end
// height. The end height will be limited to the current main chain height.
@ -1478,11 +1561,11 @@ func (b *BlockChain) IntervalBlockHashes(endHash *chainhash.Hash, interval int,
//
// In addition, there are two special cases:
//
// - When no locators are provided, the stop hash is treated as a request for
// that block, so it will either return the node associated with the stop hash
// if it is known, or nil if it is unknown
// - When locators are provided, but none of them are known, nodes starting
// after the genesis block will be returned
// - When no locators are provided, the stop hash is treated as a request for
// that block, so it will either return the node associated with the stop hash
// if it is known, or nil if it is unknown
// - When locators are provided, but none of them are known, nodes starting
// after the genesis block will be returned
//
// This is primarily a helper function for the locateBlocks and locateHeaders
// functions.
@ -1566,11 +1649,11 @@ func (b *BlockChain) locateBlocks(locator BlockLocator, hashStop *chainhash.Hash
//
// In addition, there are two special cases:
//
// - When no locators are provided, the stop hash is treated as a request for
// that block, so it will either return the stop hash itself if it is known,
// or nil if it is unknown
// - When locators are provided, but none of them are known, hashes starting
// after the genesis block will be returned
// - When no locators are provided, the stop hash is treated as a request for
// that block, so it will either return the stop hash itself if it is known,
// or nil if it is unknown
// - When locators are provided, but none of them are known, hashes starting
// after the genesis block will be returned
//
// This function is safe for concurrent access.
func (b *BlockChain) LocateBlocks(locator BlockLocator, hashStop *chainhash.Hash, maxHashes uint32) []chainhash.Hash {
@ -1611,11 +1694,11 @@ func (b *BlockChain) locateHeaders(locator BlockLocator, hashStop *chainhash.Has
//
// In addition, there are two special cases:
//
// - When no locators are provided, the stop hash is treated as a request for
// that header, so it will either return the header for the stop hash itself
// if it is known, or nil if it is unknown
// - When locators are provided, but none of them are known, headers starting
// after the genesis block will be returned
// - When no locators are provided, the stop hash is treated as a request for
// that header, so it will either return the header for the stop hash itself
// if it is known, or nil if it is unknown
// - When locators are provided, but none of them are known, headers starting
// after the genesis block will be returned
//
// This function is safe for concurrent access.
func (b *BlockChain) LocateHeaders(locator BlockLocator, hashStop *chainhash.Hash) []wire.BlockHeader {
@ -1625,6 +1708,121 @@ func (b *BlockChain) LocateHeaders(locator BlockLocator, hashStop *chainhash.Has
return headers
}
// InvalidateBlock takes a block hash and invalidates it.
//
// This function is safe for concurrent access.
func (b *BlockChain) InvalidateBlock(hash *chainhash.Hash) error {
b.chainLock.Lock()
defer b.chainLock.Unlock()
return b.invalidateBlock(hash)
}
// invalidateBlock takes a block hash and invalidates it.
func (b *BlockChain) invalidateBlock(hash *chainhash.Hash) error {
node := b.index.LookupNode(hash)
if node == nil {
err := fmt.Errorf("block %s is not known", hash)
return err
}
// No need to invalidate if it's already invalid.
if node.status.KnownInvalid() {
err := fmt.Errorf("block %s is already invalid", hash)
return err
}
if node.parent == nil {
err := fmt.Errorf("block %s has no parent", hash)
return err
}
b.index.SetStatusFlags(node, statusValidateFailed)
b.index.UnsetStatusFlags(node, statusValid)
detachNodes, attachNodes := b.getReorganizeNodes(node.parent)
err := b.reorganizeChain(detachNodes, attachNodes)
if err != nil {
return err
}
for i, e := 0, detachNodes.Front(); e != nil; i, e = i+1, e.Next() {
n := e.Value.(*blockNode)
b.index.SetStatusFlags(n, statusInvalidAncestor)
b.index.UnsetStatusFlags(n, statusValid)
}
if writeErr := b.index.flushToDB(); writeErr != nil {
log.Warnf("Error flushing block index changes to disk: %v", writeErr)
}
return nil
}
// ReconsiderBlock takes a block hash and allows it to be revalidated.
//
// This function is safe for concurrent access.
func (b *BlockChain) ReconsiderBlock(hash *chainhash.Hash) error {
return b.reconsiderBlock(hash)
}
// reconsiderBlock takes a block hash and allows it to be revalidated.
func (b *BlockChain) reconsiderBlock(hash *chainhash.Hash) error {
node := b.index.LookupNode(hash)
if node == nil {
err := fmt.Errorf("block %s is not known", hash)
return err
}
// No need to reconsider, it is already valid.
if node.status.KnownValid() && !node.status.KnownInvalid() { // second clause works around old bug
err := fmt.Errorf("block %s is already valid", hash)
return err
}
// Keep a reference to the first node in the chain of invalid
// blocks so we can reprocess after status flags are updated.
firstNode := node
// Find previous node to the point where the blocks are valid again.
for n := node; n.status.KnownInvalid(); n = n.parent {
b.index.UnsetStatusFlags(n, statusInvalidAncestor)
b.index.UnsetStatusFlags(n, statusValidateFailed)
firstNode = n
}
// do we need an rlock on chainstate for this section?
var blk *btcutil.Block
err := b.db.View(func(dbTx database.Tx) error {
var err error
blk, err = dbFetchBlockByNode(dbTx, firstNode)
return err
})
if err != nil {
return err
}
// Process it all again. This will take care of the
// orphans as well.
_, _, err = b.ProcessBlock(blk, BFNoDupBlockCheck)
if err != nil {
return err
}
if writeErr := b.index.flushToDB(); writeErr != nil {
log.Warnf("Error flushing block index changes to disk: %v", writeErr)
}
return nil
}
// ClaimTrie returns the claimTrie associated with the chain.
func (b *BlockChain) ClaimTrie() *claimtrie.ClaimTrie {
return b.claimTrie
}
// IndexManager provides a generic interface that the is called when blocks are
// connected and disconnected to and from the tip of the main chain for the
// purpose of supporting optional indexes.
@ -1711,6 +1909,8 @@ type Config struct {
// This field can be nil if the caller is not interested in using a
// signature cache.
HashCache *txscript.HashCache
ClaimTrie *claimtrie.ClaimTrie
}
// New returns a BlockChain instance using the provided configuration details.
@ -1747,7 +1947,6 @@ func New(config *Config) (*BlockChain, error) {
params := config.ChainParams
targetTimespan := int64(params.TargetTimespan / time.Second)
targetTimePerBlock := int64(params.TargetTimePerBlock / time.Second)
adjustmentFactor := params.RetargetAdjustmentFactor
b := BlockChain{
checkpoints: config.Checkpoints,
checkpointsByHeight: checkpointsByHeight,
@ -1756,8 +1955,8 @@ func New(config *Config) (*BlockChain, error) {
timeSource: config.TimeSource,
sigCache: config.SigCache,
indexManager: config.IndexManager,
minRetargetTimespan: targetTimespan / adjustmentFactor,
maxRetargetTimespan: targetTimespan * adjustmentFactor,
minRetargetTimespan: targetTimespan - (targetTimespan / 8),
maxRetargetTimespan: targetTimespan + (targetTimespan / 2),
blocksPerRetarget: int32(targetTimespan / targetTimePerBlock),
index: newBlockIndex(config.DB, params),
hashCache: config.HashCache,
@ -1766,6 +1965,7 @@ func New(config *Config) (*BlockChain, error) {
prevOrphans: make(map[chainhash.Hash][]*orphanBlock),
warningCaches: newThresholdCaches(vbNumBits),
deploymentCaches: newThresholdCaches(chaincfg.DefinedDeployments),
claimTrie: config.ClaimTrie,
}
// Initialize the chain state from the passed database. When the db
@ -1775,6 +1975,20 @@ func New(config *Config) (*BlockChain, error) {
return nil, err
}
// Helper function to insert the output in genesis block in to the
// transaction database.
fn := func(dbTx database.Tx) error {
genesisBlock := btcutil.NewBlock(b.chainParams.GenesisBlock)
view := NewUtxoViewpoint()
if err := view.connectTransactions(genesisBlock, nil); err != nil {
return err
}
return dbPutUtxoView(dbTx, view)
}
if err := b.db.Update(fn); err != nil {
return nil, err
}
// Perform any upgrades to the various chain-specific buckets as needed.
if err := b.maybeUpgradeDbBuckets(config.Interrupt); err != nil {
return nil, err
@ -1794,6 +2008,14 @@ func New(config *Config) (*BlockChain, error) {
return nil, err
}
if b.claimTrie != nil {
err := rebuildMissingClaimTrieData(&b, config.Interrupt)
if err != nil {
b.claimTrie.Close()
return nil, err
}
}
bestNode := b.bestChain.Tip()
log.Infof("Chain state (height %d, hash %v, totaltx %d, work %v)",
bestNode.height, bestNode.hash, b.stateSnapshot.TotalTxns,
@ -1801,3 +2023,63 @@ func New(config *Config) (*BlockChain, error) {
return &b, nil
}
func rebuildMissingClaimTrieData(b *BlockChain, done <-chan struct{}) error {
target := b.bestChain.Height()
if b.claimTrie.Height() == target {
return nil
}
if b.claimTrie.Height() > target {
return b.claimTrie.ResetHeight(target)
}
start := time.Now()
lastReport := time.Now()
// TODO: move this view inside the loop (or recreate it every 5 sec.)
// as accumulating all inputs has the potential to use a huge amount of RAM
// but we need to get the spent inputs working for that to be possible
view := NewUtxoViewpoint()
for h := int32(0); h < target; h++ {
select {
case <-done:
return fmt.Errorf("rebuild unfinished at height %d", b.claimTrie.Height())
default:
}
n := b.bestChain.NodeByHeight(h + 1)
var block *btcutil.Block
err := b.db.View(func(dbTx database.Tx) error {
var err error
block, err = dbFetchBlockByNode(dbTx, n)
return err
})
if err != nil {
return err
}
err = view.fetchInputUtxos(b.db, block)
if err != nil {
return err
}
err = view.connectTransactions(block, nil)
if err != nil {
return err
}
if h >= b.claimTrie.Height() {
err = b.ParseClaimScripts(block, n, view, false)
if err != nil {
return err
}
}
if time.Since(lastReport) > time.Second*5 {
lastReport = time.Now()
log.Infof("Rebuilding claim trie data to %d. At: %d", target, h)
}
}
log.Infof("Completed rebuilding claim trie data to %d. Took %s ",
b.claimTrie.Height(), time.Since(start))
return nil
}


@ -9,108 +9,12 @@ import (
"testing"
"time"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/chaincfg"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/wire"
btcutil "github.com/lbryio/lbcutil"
)
// TestHaveBlock tests the HaveBlock API to ensure proper functionality.
func TestHaveBlock(t *testing.T) {
// Load up blocks such that there is a side chain.
// (genesis block) -> 1 -> 2 -> 3 -> 4
// \-> 3a
testFiles := []string{
"blk_0_to_4.dat.bz2",
"blk_3A.dat.bz2",
}
var blocks []*btcutil.Block
for _, file := range testFiles {
blockTmp, err := loadBlocks(file)
if err != nil {
t.Errorf("Error loading file: %v\n", err)
return
}
blocks = append(blocks, blockTmp...)
}
// Create a new database and chain instance to run tests against.
chain, teardownFunc, err := chainSetup("haveblock",
&chaincfg.MainNetParams)
if err != nil {
t.Errorf("Failed to setup chain instance: %v", err)
return
}
defer teardownFunc()
// Since we're not dealing with the real block chain, set the coinbase
// maturity to 1.
chain.TstSetCoinbaseMaturity(1)
for i := 1; i < len(blocks); i++ {
_, isOrphan, err := chain.ProcessBlock(blocks[i], BFNone)
if err != nil {
t.Errorf("ProcessBlock fail on block %v: %v\n", i, err)
return
}
if isOrphan {
t.Errorf("ProcessBlock incorrectly returned block %v "+
"is an orphan\n", i)
return
}
}
// Insert an orphan block.
_, isOrphan, err := chain.ProcessBlock(btcutil.NewBlock(&Block100000),
BFNone)
if err != nil {
t.Errorf("Unable to process block: %v", err)
return
}
if !isOrphan {
t.Errorf("ProcessBlock indicated block is an not orphan when " +
"it should be\n")
return
}
tests := []struct {
hash string
want bool
}{
// Genesis block should be present (in the main chain).
{hash: chaincfg.MainNetParams.GenesisHash.String(), want: true},
// Block 3a should be present (on a side chain).
{hash: "00000000474284d20067a4d33f6a02284e6ef70764a3a26d6a5b9df52ef663dd", want: true},
// Block 100000 should be present (as an orphan).
{hash: "000000000003ba27aa200b1cecaad478d2b00432346c3f1f3986da1afd33e506", want: true},
// Random hashes should not be available.
{hash: "123", want: false},
}
for i, test := range tests {
hash, err := chainhash.NewHashFromStr(test.hash)
if err != nil {
t.Errorf("NewHashFromStr: %v", err)
continue
}
result, err := chain.HaveBlock(hash)
if err != nil {
t.Errorf("HaveBlock #%d unexpected error: %v", i, err)
return
}
if result != test.want {
t.Errorf("HaveBlock #%d got %v want %v", i, result,
test.want)
continue
}
}
}
// TestCalcSequenceLock tests the LockTimeToSequence function, and the
// CalcSequenceLock method of a Chain instance. The tests exercise several
// combinations of inputs to the CalcSequenceLock function in order to ensure


@ -12,10 +12,10 @@ import (
"sync"
"time"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/database"
"github.com/lbryio/lbcd/wire"
btcutil "github.com/lbryio/lbcutil"
)
const (
@ -120,7 +120,7 @@ func dbFetchVersion(dbTx database.Tx, key []byte) uint32 {
return 0
}
return byteOrder.Uint32(serialized[:])
return byteOrder.Uint32(serialized)
}
// dbPutVersion uses an existing database transaction to update the provided
@ -943,7 +943,7 @@ func serializeBestChainState(state bestChainState) []byte {
byteOrder.PutUint32(serializedData[offset:], workSumBytesLen)
offset += 4
copy(serializedData[offset:], workSumBytes)
return serializedData[:]
return serializedData
}
// deserializeBestChainState deserializes the passed serialized best chain
@ -1149,18 +1149,9 @@ func (b *BlockChain) initChainState() error {
blockIndexBucket := dbTx.Metadata().Bucket(blockIndexBucketName)
// Determine how many blocks will be loaded into the index so we can
// allocate the right amount.
var blockCount int32
cursor := blockIndexBucket.Cursor()
for ok := cursor.First(); ok; ok = cursor.Next() {
blockCount++
}
blockNodes := make([]blockNode, blockCount)
var i int32
var lastNode *blockNode
cursor = blockIndexBucket.Cursor()
cursor := blockIndexBucket.Cursor()
for ok := cursor.First(); ok; ok = cursor.Next() {
header, status, err := deserializeBlockRow(cursor.Value())
if err != nil {
@ -1193,7 +1184,7 @@ func (b *BlockChain) initChainState() error {
// Initialize the block node for the block, connect it,
// and add it to the block index.
node := &blockNodes[i]
node := new(blockNode)
initBlockNode(node, header, parent)
node.status = status
b.index.addNode(node)


@ -11,8 +11,8 @@ import (
"reflect"
"testing"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcd/wire"
"github.com/lbryio/lbcd/database"
"github.com/lbryio/lbcd/wire"
)
// TestErrNotInMainChain ensures the functions related to errNotInMainChain work

blockchain/chainquery.go (new file, 123 lines)

@ -0,0 +1,123 @@
package blockchain
import (
"sort"
"strings"
btcutil "github.com/lbryio/lbcutil"
)
type ChainTip struct { // duplicate of btcjson.GetChainTipsResult to avoid circular reference
Height int64
Hash string
BranchLen int64
Status string
}
// nodeHeightSorter implements sort.Interface to allow a slice of nodes to
// be sorted by height in ascending order.
type nodeHeightSorter []ChainTip
// Len returns the number of nodes in the slice. It is part of the
// sort.Interface implementation.
func (s nodeHeightSorter) Len() int {
return len(s)
}
// Swap swaps the nodes at the passed indices. It is part of the
// sort.Interface implementation.
func (s nodeHeightSorter) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
// Less returns whether the node with index i should sort before the node with
// index j. It is part of the sort.Interface implementation.
func (s nodeHeightSorter) Less(i, j int) bool {
// To ensure stable order when the heights are the same, fall back to
// sorting based on hash.
if s[i].Height == s[j].Height {
return strings.Compare(s[i].Hash, s[j].Hash) < 0
}
return s[i].Height < s[j].Height
}
// ChainTips returns information, in JSON-RPC format, about all the currently
// known chain tips in the block index.
func (b *BlockChain) ChainTips() []ChainTip {
// The result includes the current best chain tip plus every orphan block
// that is itself a tip, i.e. no other orphan builds on top of it.
var results []ChainTip
tip := b.bestChain.Tip()
results = append(results, ChainTip{
Height: int64(tip.height),
Hash: tip.hash.String(),
BranchLen: 0,
Status: "active",
})
b.orphanLock.RLock()
defer b.orphanLock.RUnlock()
notInBestChain := func(block *btcutil.Block) bool {
node := b.bestChain.NodeByHeight(block.Height())
if node == nil {
return false
}
return node.hash.IsEqual(block.Hash())
}
for hash, orphan := range b.orphans {
if len(b.prevOrphans[hash]) > 0 {
continue
}
fork := orphan.block
for fork != nil && notInBestChain(fork) {
fork = b.orphans[*fork.Hash()].block
}
result := ChainTip{
Height: int64(orphan.block.Height()),
Hash: hash.String(),
BranchLen: int64(orphan.block.Height() - fork.Height()),
}
// Determine the status of the chain tip.
//
// active:
// The current best chain tip.
//
// invalid:
// The block or one of its ancestors is invalid.
//
// headers-only:
// The block or one of its ancestors does not have the full block data
// available which also means the block can't be validated or
// connected.
//
// valid-fork:
// The block is fully validated which implies it was probably part of
// main chain at one point and was reorganized.
//
// valid-headers:
// The full block data is available and the header is valid, but the
// block was never validated which implies it was probably never part
// of the main chain.
tipStatus := b.index.LookupNode(&hash).status
if tipStatus.KnownInvalid() {
result.Status = "invalid"
} else if !tipStatus.HaveData() {
result.Status = "headers-only"
} else if tipStatus.KnownValid() {
result.Status = "valid-fork"
} else {
result.Status = "valid-headers"
}
results = append(results, result)
}
// Generate the results sorted by descending height.
sort.Sort(sort.Reverse(nodeHeightSorter(results)))
return results
}
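
As a quick illustration of the sorter defined above (a standalone sketch; the sample tips and hashes are made up), ChainTips fills a slice and then applies sort.Reverse to get descending height:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Local copies of the types above, trimmed to what the ordering needs.
type ChainTip struct {
	Height int64
	Hash   string
	Status string
}

type nodeHeightSorter []ChainTip

func (s nodeHeightSorter) Len() int      { return len(s) }
func (s nodeHeightSorter) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s nodeHeightSorter) Less(i, j int) bool {
	if s[i].Height == s[j].Height {
		return strings.Compare(s[i].Hash, s[j].Hash) < 0 // stable tie-break on hash
	}
	return s[i].Height < s[j].Height
}

func main() {
	tips := []ChainTip{
		{Height: 1200, Hash: "bbbb", Status: "valid-fork"},
		{Height: 1500, Hash: "aaaa", Status: "active"},
		{Height: 1200, Hash: "aaaa", Status: "invalid"},
	}
	// Same call ChainTips makes: descending height, hash as tie-break.
	sort.Sort(sort.Reverse(nodeHeightSorter(tips)))
	for _, t := range tips {
		fmt.Println(t.Height, t.Hash, t.Status)
	}
	// Output order: 1500 aaaa, then 1200 bbbb, then 1200 aaaa.
}
```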


@ -36,11 +36,13 @@ func fastLog2Floor(n uint32) uint8 {
// for comparing chains.
//
// For example, assume a block chain with a side chain as depicted below:
// genesis -> 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8
// \-> 4a -> 5a -> 6a
//
// genesis -> 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8
// \-> 4a -> 5a -> 6a
//
// The chain view for the branch ending in 6a consists of:
// genesis -> 1 -> 2 -> 3 -> 4a -> 5a -> 6a
//
// genesis -> 1 -> 2 -> 3 -> 4a -> 5a -> 6a
type chainView struct {
mtx sync.Mutex
nodes []*blockNode
@ -258,12 +260,14 @@ func (c *chainView) next(node *blockNode) *blockNode {
// view.
//
// For example, assume a block chain with a side chain as depicted below:
// genesis -> 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8
// \-> 4a -> 5a -> 6a
//
// genesis -> 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8
// \-> 4a -> 5a -> 6a
//
// Further, assume the view is for the longer chain depicted above. That is to
// say it consists of:
// genesis -> 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8
//
// genesis -> 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8
//
// Invoking this function with block node 5 would return block node 6 while
// invoking it with block node 5a would return nil since that node is not part
@ -321,12 +325,14 @@ func (c *chainView) findFork(node *blockNode) *blockNode {
// the chain view. It will return nil if there is no common block.
//
// For example, assume a block chain with a side chain as depicted below:
// genesis -> 1 -> 2 -> ... -> 5 -> 6 -> 7 -> 8
// \-> 6a -> 7a
//
// genesis -> 1 -> 2 -> ... -> 5 -> 6 -> 7 -> 8
// \-> 6a -> 7a
//
// Further, assume the view is for the longer chain depicted above. That is to
// say it consists of:
// genesis -> 1 -> 2 -> ... -> 5 -> 6 -> 7 -> 8.
//
// genesis -> 1 -> 2 -> ... -> 5 -> 6 -> 7 -> 8.
//
// Invoking this function with block node 7a would return block node 5 while
// invoking it with block node 7 would return itself since it is already part of


@ -10,7 +10,7 @@ import (
"reflect"
"testing"
"github.com/btcsuite/btcd/wire"
"github.com/lbryio/lbcd/wire"
)
// testNoncePrng provides a deterministic prng for the nonce in generated fake


@ -8,10 +8,10 @@ import (
"fmt"
"time"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/chaincfg"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/txscript"
btcutil "github.com/lbryio/lbcutil"
)
// CheckpointConfirmations is the number of blocks before the end of the current
@ -172,7 +172,8 @@ func (b *BlockChain) findPreviousCheckpoint() (*blockNode, error) {
func isNonstandardTransaction(tx *btcutil.Tx) bool {
// Check all of the output public key scripts for non-standard scripts.
for _, txOut := range tx.MsgTx().TxOut {
scriptClass := txscript.GetScriptClass(txOut.PkScript)
stripped := txscript.StripClaimScriptPrefix(txOut.PkScript)
scriptClass := txscript.GetScriptClass(stripped)
if scriptClass == txscript.NonStandardTy {
return true
}
@ -184,14 +185,14 @@ func isNonstandardTransaction(tx *btcutil.Tx) bool {
// checkpoint candidate.
//
// The factors used to determine a good checkpoint are:
// - The block must be in the main chain
// - The block must be at least 'CheckpointConfirmations' blocks prior to the
// current end of the main chain
// - The timestamps for the blocks before and after the checkpoint must have
// timestamps which are also before and after the checkpoint, respectively
// (due to the median time allowance this is not always the case)
// - The block must not contain any strange transaction such as those with
// nonstandard scripts
// - The block must be in the main chain
// - The block must be at least 'CheckpointConfirmations' blocks prior to the
// current end of the main chain
// - The timestamps for the blocks before and after the checkpoint must have
// timestamps which are also before and after the checkpoint, respectively
// (due to the median time allowance this is not always the case)
// - The block must not contain any strange transaction such as those with
// nonstandard scripts
//
// The intent is that candidates are reviewed by a developer to make the final
// decision and then manually added to the list of checkpoints for a network.

blockchain/claimtrie.go (new file, 183 lines)

@ -0,0 +1,183 @@
package blockchain
import (
"bytes"
"fmt"
"github.com/pkg/errors"
"github.com/lbryio/lbcd/txscript"
"github.com/lbryio/lbcd/wire"
btcutil "github.com/lbryio/lbcutil"
"github.com/lbryio/lbcd/claimtrie"
"github.com/lbryio/lbcd/claimtrie/change"
"github.com/lbryio/lbcd/claimtrie/node"
"github.com/lbryio/lbcd/claimtrie/normalization"
)
func (b *BlockChain) SetClaimtrieHeader(block *btcutil.Block, view *UtxoViewpoint) error {
b.chainLock.Lock()
defer b.chainLock.Unlock()
err := b.ParseClaimScripts(block, nil, view, false)
if err != nil {
return errors.Wrapf(err, "in parse claim scripts")
}
block.MsgBlock().Header.ClaimTrie = *b.claimTrie.MerkleHash()
err = b.claimTrie.ResetHeight(b.claimTrie.Height() - 1)
return errors.Wrapf(err, "in reset height")
}
func (b *BlockChain) ParseClaimScripts(block *btcutil.Block, bn *blockNode, view *UtxoViewpoint, shouldFlush bool) error {
ht := block.Height()
for _, tx := range block.Transactions() {
h := handler{ht, tx, view, map[string][]byte{}}
if err := h.handleTxIns(b.claimTrie); err != nil {
return err
}
if err := h.handleTxOuts(b.claimTrie); err != nil {
return err
}
}
err := b.claimTrie.AppendBlock(bn == nil)
if err != nil {
return errors.Wrapf(err, "in append block")
}
if shouldFlush {
b.claimTrie.FlushToDisk()
}
hash := b.claimTrie.MerkleHash()
if bn != nil && bn.claimTrie != *hash {
// undo our AppendBlock call as we've decided that our interpretation of the block data is incorrect,
// or that the person who made the block assembled the pieces incorrectly.
_ = b.claimTrie.ResetHeight(b.claimTrie.Height() - 1)
return errors.Errorf("height: %d, computed hash: %s != header's ClaimTrie: %s", ht, *hash, bn.claimTrie)
}
return nil
}
type handler struct {
ht int32
tx *btcutil.Tx
view *UtxoViewpoint
spent map[string][]byte
}
func (h *handler) handleTxIns(ct *claimtrie.ClaimTrie) error {
if IsCoinBase(h.tx) {
return nil
}
for _, txIn := range h.tx.MsgTx().TxIn {
op := txIn.PreviousOutPoint
e := h.view.LookupEntry(op)
if e == nil {
return errors.Errorf("missing input in view for %s", op.String())
}
cs, err := txscript.ExtractClaimScript(e.pkScript)
if txscript.IsErrorCode(err, txscript.ErrNotClaimScript) {
continue
}
if err != nil {
return err
}
var id change.ClaimID
name := cs.Name // name of the previous one (that we're now spending)
switch cs.Opcode {
case txscript.OP_CLAIMNAME: // OP code from previous transaction
id = change.NewClaimID(op) // claimID of the previous item now being spent
h.spent[id.Key()] = normalization.NormalizeIfNecessary(name, ct.Height())
err = ct.SpendClaim(name, op, id)
case txscript.OP_UPDATECLAIM:
copy(id[:], cs.ClaimID)
h.spent[id.Key()] = normalization.NormalizeIfNecessary(name, ct.Height())
err = ct.SpendClaim(name, op, id)
case txscript.OP_SUPPORTCLAIM:
copy(id[:], cs.ClaimID)
err = ct.SpendSupport(name, op, id)
}
if err != nil {
return errors.Wrapf(err, "handleTxIns")
}
}
return nil
}
func (h *handler) handleTxOuts(ct *claimtrie.ClaimTrie) error {
for i, txOut := range h.tx.MsgTx().TxOut {
op := *wire.NewOutPoint(h.tx.Hash(), uint32(i))
cs, err := txscript.ExtractClaimScript(txOut.PkScript)
if txscript.IsErrorCode(err, txscript.ErrNotClaimScript) {
continue
}
if err != nil {
return err
}
var id change.ClaimID
name := cs.Name
amt := txOut.Value
switch cs.Opcode {
case txscript.OP_CLAIMNAME:
id = change.NewClaimID(op)
err = ct.AddClaim(name, op, id, amt)
case txscript.OP_SUPPORTCLAIM:
copy(id[:], cs.ClaimID)
err = ct.AddSupport(name, op, amt, id)
case txscript.OP_UPDATECLAIM:
// old code wouldn't run the update if name or claimID didn't match existing data
// that was a safety feature, but it should have rejected the transaction instead
// TODO: reject transactions with invalid update commands
copy(id[:], cs.ClaimID)
normName := normalization.NormalizeIfNecessary(name, ct.Height())
if !bytes.Equal(h.spent[id.Key()], normName) {
node.LogOnce(fmt.Sprintf("Invalid update operation: name or ID mismatch at %d for: %s, %s",
ct.Height(), normName, id.String()))
continue
}
delete(h.spent, id.Key())
err = ct.UpdateClaim(name, op, amt, id)
}
if err != nil {
return errors.Wrapf(err, "handleTxOuts")
}
}
return nil
}
func (b *BlockChain) GetNamesChangedInBlock(height int32) ([]string, error) {
b.chainLock.RLock()
defer b.chainLock.RUnlock()
return b.claimTrie.NamesChangedInBlock(height)
}
func (b *BlockChain) GetClaimsForName(height int32, name string) (string, *node.Node, error) {
normalizedName := normalization.NormalizeIfNecessary([]byte(name), height)
b.chainLock.RLock()
defer b.chainLock.RUnlock()
n, err := b.claimTrie.NodeAt(height, normalizedName)
if err != nil {
return string(normalizedName), nil, err
}
if n == nil {
return string(normalizedName), nil, fmt.Errorf("name does not exist at height %d: %s", height, name)
}
n.SortClaimsByBid()
return string(normalizedName), n, nil
}
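
A compile-only usage sketch for the two query helpers above; the chain value, the height, and the node's Claims field are assumptions made for illustration (the exact node.Node fields live in the claimtrie/node package, which is not part of this hunk):

```go
package example

import (
	"fmt"
	"log"

	"github.com/lbryio/lbcd/blockchain"
)

// printClaims walks the names touched at a given height and prints how many
// claims each one has. It assumes a fully configured *blockchain.BlockChain.
func printClaims(chain *blockchain.BlockChain, height int32) {
	names, err := chain.GetNamesChangedInBlock(height)
	if err != nil {
		log.Printf("names changed in block %d: %v", height, err)
		return
	}
	for _, name := range names {
		normalized, n, err := chain.GetClaimsForName(height, name)
		if err != nil {
			log.Printf("claims for %q: %v", name, err)
			continue
		}
		// Claims is assumed to be the node's bid-sorted claim list
		// (GetClaimsForName calls SortClaimsByBid before returning).
		fmt.Printf("%s: %d claims\n", normalized, len(n.Claims))
	}
}
```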


@ -5,6 +5,7 @@
package blockchain
import (
"bytes"
"compress/bzip2"
"encoding/binary"
"fmt"
@ -14,13 +15,13 @@ import (
"strings"
"time"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/database"
_ "github.com/btcsuite/btcd/database/ffldb"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/chaincfg"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/database"
_ "github.com/lbryio/lbcd/database/ffldb"
"github.com/lbryio/lbcd/txscript"
"github.com/lbryio/lbcd/wire"
btcutil "github.com/lbryio/lbcutil"
)
const (
@ -63,13 +64,13 @@ func isSupportedDbType(dbType string) bool {
func loadBlocks(filename string) (blocks []*btcutil.Block, err error) {
filename = filepath.Join("testdata/", filename)
var network = wire.MainNet
var network = 0xd9b4bef9 // bitcoin's network ID
var dr io.Reader
var fi io.ReadCloser
fi, err = os.Open(filename)
if err != nil {
return
return blocks, err
}
if strings.HasSuffix(filename, ".bz2") {
@ -95,7 +96,7 @@ func loadBlocks(filename string) (blocks []*btcutil.Block, err error) {
break
}
if rintbuf != uint32(network) {
break
continue
}
err = binary.Read(dr, binary.LittleEndian, &rintbuf)
blocklen := rintbuf
@ -105,14 +106,20 @@ func loadBlocks(filename string) (blocks []*btcutil.Block, err error) {
// read block
dr.Read(rbytes)
// inject claimtrie: splice a placeholder ClaimTrie hash into the header
// right after the 68-byte version/prev-block/merkle-root prefix, since
// these bitcoin-format test blocks do not carry that field
tail := make([]byte, len(rbytes)-68)
copy(tail, rbytes[68:])
rbytes = append(rbytes[:68], bytes.Repeat([]byte{23}, chainhash.HashSize)...)
rbytes = append(rbytes, tail...)
block, err = btcutil.NewBlockFromBytes(rbytes)
if err != nil {
return
return blocks, err
}
blocks = append(blocks, block)
}
return
return blocks, err
}
// chainSetup is used to create a new db and chain instance with the genesis


@ -5,8 +5,8 @@
package blockchain
import (
"github.com/btcsuite/btcd/btcec"
"github.com/btcsuite/btcd/txscript"
"github.com/lbryio/lbcd/btcec"
"github.com/lbryio/lbcd/txscript"
)
// -----------------------------------------------------------------------------


@ -8,7 +8,7 @@ import (
"math/big"
"time"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/chaincfg/chainhash"
)
var (
@ -42,18 +42,21 @@ func HashToBig(hash *chainhash.Hash) *big.Int {
// Like IEEE754 floating point, there are three basic components: the sign,
// the exponent, and the mantissa. They are broken out as follows:
//
// * the most significant 8 bits represent the unsigned base 256 exponent
// * bit 23 (the 24th bit) represents the sign bit
// * the least significant 23 bits represent the mantissa
// - the most significant 8 bits represent the unsigned base 256 exponent
//
// -------------------------------------------------
// | Exponent | Sign | Mantissa |
// -------------------------------------------------
// | 8 bits [31-24] | 1 bit [23] | 23 bits [22-00] |
// -------------------------------------------------
// - bit 23 (the 24th bit) represents the sign bit
//
// - the least significant 23 bits represent the mantissa
//
// -------------------------------------------------
// | Exponent | Sign | Mantissa |
// -------------------------------------------------
// | 8 bits [31-24] | 1 bit [23] | 23 bits [22-00] |
// -------------------------------------------------
//
// The formula to calculate N is:
// N = (-1^sign) * mantissa * 256^(exponent-3)
//
// N = (-1^sign) * mantissa * 256^(exponent-3)
//
// This compact form is only used in bitcoin to encode unsigned 256-bit numbers
// which represent difficulty targets, thus there really is not a need for a
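
A standalone sketch of the decoding rules described in that comment (it mirrors the idea behind the package's compact-bits handling rather than quoting it): for the regression-test value 0x207fffff used elsewhere in this diff, the exponent is 0x20, the sign bit is clear, and the mantissa 0x7fffff is shifted left by 8*(0x20-3) bits, yielding 0x7fffff followed by 29 zero bytes.

```go
package main

import (
	"fmt"
	"math/big"
)

// compactToTarget decodes the compact form: an 8-bit base-256 exponent,
// a sign bit at bit 23, and a 23-bit mantissa.
func compactToTarget(compact uint32) *big.Int {
	mantissa := compact & 0x007fffff
	negative := compact&0x00800000 != 0
	exponent := uint(compact >> 24)

	var n *big.Int
	if exponent <= 3 {
		// Small exponents shift the mantissa right instead.
		n = big.NewInt(int64(mantissa >> (8 * (3 - exponent))))
	} else {
		n = new(big.Int).Lsh(big.NewInt(int64(mantissa)), 8*(exponent-3))
	}
	if negative {
		n.Neg(n)
	}
	return n
}

func main() {
	fmt.Printf("%x\n", compactToTarget(0x207fffff))
	// 7fffff0000000000000000000000000000000000000000000000000000000000
}
```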
@ -159,7 +162,6 @@ func CalcWork(bits uint32) *big.Int {
func (b *BlockChain) calcEasiestDifficulty(bits uint32, duration time.Duration) uint32 {
// Convert types used in the calculations below.
durationVal := int64(duration / time.Second)
adjustmentFactor := big.NewInt(b.chainParams.RetargetAdjustmentFactor)
// The test network rules allow minimum difficulty blocks after more
// than twice the desired amount of time needed to generate a block has
@ -178,7 +180,8 @@ func (b *BlockChain) calcEasiestDifficulty(bits uint32, duration time.Duration)
// multiplied by the max adjustment factor.
newTarget := CompactToBig(bits)
for durationVal > 0 && newTarget.Cmp(b.chainParams.PowLimit) < 0 {
newTarget.Mul(newTarget, adjustmentFactor)
adj := new(big.Int).Div(newTarget, big.NewInt(2))
newTarget.Add(newTarget, adj)
durationVal -= b.maxRetargetTimespan
}
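
A minimal sketch of the loosening rule above, with a made-up starting target: every elapsed maxRetargetTimespan lets the easiest allowed target grow by half (x -> 1.5x), capped at the proof-of-work limit, replacing the old multiply-by-adjustment-factor step.

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	// Hypothetical starting target and proof-of-work limit, for illustration only.
	oldTarget := big.NewInt(1 << 40)
	powLimit := new(big.Int).Lsh(big.NewInt(1), 255)

	newTarget := new(big.Int).Set(oldTarget)
	for k := 0; k < 3 && newTarget.Cmp(powLimit) < 0; k++ {
		adj := new(big.Int).Div(newTarget, big.NewInt(2))
		newTarget.Add(newTarget, adj) // grow by half: x -> 1.5x
	}

	ratio := new(big.Float).Quo(new(big.Float).SetInt(newTarget), new(big.Float).SetInt(oldTarget))
	fmt.Println(ratio) // 3.375, i.e. 1.5^3 easier after three intervals
}
```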
@ -224,47 +227,45 @@ func (b *BlockChain) calcNextRequiredDifficulty(lastNode *blockNode, newBlockTim
return b.chainParams.PowLimitBits, nil
}
// Return the previous block's difficulty requirements if this block
// is not at a difficulty retarget interval.
if (lastNode.height+1)%b.blocksPerRetarget != 0 {
// For networks that support it, allow special reduction of the
// required difficulty once too much time has elapsed without
// mining a block.
if b.chainParams.ReduceMinDifficulty {
// Return minimum difficulty when more than the desired
// amount of time has elapsed without mining a block.
reductionTime := int64(b.chainParams.MinDiffReductionTime /
time.Second)
allowMinTime := lastNode.timestamp + reductionTime
if newBlockTime.Unix() > allowMinTime {
return b.chainParams.PowLimitBits, nil
}
// The block was mined within the desired timeframe, so
// return the difficulty for the last block which did
// not have the special minimum difficulty rule applied.
return b.findPrevTestNetDifficulty(lastNode), nil
// For networks that support it, allow special reduction of the
// required difficulty once too much time has elapsed without
// mining a block.
if b.chainParams.ReduceMinDifficulty {
// Return minimum difficulty when more than the desired
// amount of time has elapsed without mining a block.
reductionTime := int64(b.chainParams.MinDiffReductionTime /
time.Second)
allowMinTime := lastNode.timestamp + reductionTime
if newBlockTime.Unix() > allowMinTime {
return b.chainParams.PowLimitBits, nil
}
// For the main network (or any unrecognized networks), simply
// return the previous block's difficulty requirements.
return lastNode.bits, nil
// The block was mined within the desired timeframe, so
// return the difficulty for the last block which did
// not have the special minimum difficulty rule applied.
return b.findPrevTestNetDifficulty(lastNode), nil
}
// Get the block node at the previous retarget (targetTimespan days
// worth of blocks).
firstNode := lastNode.RelativeAncestor(b.blocksPerRetarget - 1)
blocksBack := b.blocksPerRetarget
if blocksBack > lastNode.height {
blocksBack = lastNode.height
}
firstNode := lastNode.RelativeAncestor(blocksBack)
if firstNode == nil {
return 0, AssertError("unable to obtain previous retarget block")
}
targetTimeSpan := int64(b.chainParams.TargetTimespan / time.Second)
// Limit the amount of adjustment that can occur to the previous
// difficulty.
actualTimespan := lastNode.timestamp - firstNode.timestamp
adjustedTimespan := actualTimespan
if actualTimespan < b.minRetargetTimespan {
adjustedTimespan := targetTimeSpan + (actualTimespan-targetTimeSpan)/8
if adjustedTimespan < b.minRetargetTimespan {
adjustedTimespan = b.minRetargetTimespan
} else if actualTimespan > b.maxRetargetTimespan {
} else if adjustedTimespan > b.maxRetargetTimespan {
adjustedTimespan = b.maxRetargetTimespan
}
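
To make the damping above concrete, here is a self-contained sketch with a hypothetical 150-second target timespan (the real chain parameters are not part of this hunk): the adjusted timespan moves only 1/8 of the way from the target toward the actual timespan, and is then clamped to the min/max retarget timespans assigned in New above (target - target/8 and target + target/2).

```go
package main

import "fmt"

func main() {
	// Hypothetical target timespan, used only to illustrate the damping and
	// clamping performed by calcNextRequiredDifficulty above.
	const targetTimespan = int64(150) // seconds

	// These mirror the assignments in New():
	//   minRetargetTimespan = targetTimespan - targetTimespan/8
	//   maxRetargetTimespan = targetTimespan + targetTimespan/2
	minRetarget := targetTimespan - targetTimespan/8 // 132
	maxRetarget := targetTimespan + targetTimespan/2 // 225

	for _, actual := range []int64{60, 150, 300, 3000} {
		// Move only 1/8 of the way from the target toward the actual timespan.
		adjusted := targetTimespan + (actual-targetTimespan)/8
		if adjusted < minRetarget {
			adjusted = minRetarget
		} else if adjusted > maxRetarget {
			adjusted = maxRetarget
		}
		fmt.Printf("actual=%4ds -> adjusted=%3ds\n", actual, adjusted)
		// 60s -> 139s, 150s -> 150s, 300s -> 168s, 3000s -> 225s (clamped)
	}
}
```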
@ -275,7 +276,6 @@ func (b *BlockChain) calcNextRequiredDifficulty(lastNode *blockNode, newBlockTim
// result.
oldTarget := CompactToBig(lastNode.bits)
newTarget := new(big.Int).Mul(oldTarget, big.NewInt(adjustedTimespan))
targetTimeSpan := int64(b.chainParams.TargetTimespan / time.Second)
newTarget.Div(newTarget, big.NewInt(targetTimeSpan))
// Limit new value to the proof of work limit.


@ -63,7 +63,7 @@ func TestCalcWork(t *testing.T) {
}
for x, test := range tests {
bits := uint32(test.in)
bits := test.in
r := CalcWork(bits)
if r.Int64() != test.out {


@ -26,42 +26,42 @@ caller a high level of flexibility in how they want to react to certain events
such as orphan blocks which need their parents requested and newly connected
main chain blocks which might result in wallet updates.
Bitcoin Chain Processing Overview
# Bitcoin Chain Processing Overview
Before a block is allowed into the block chain, it must go through an intensive
series of validation rules. The following list serves as a general outline of
those rules to provide some intuition into what is going on under the hood, but
is by no means exhaustive:
- Reject duplicate blocks
- Perform a series of sanity checks on the block and its transactions such as
verifying proof of work, timestamps, number and character of transactions,
transaction amounts, script complexity, and merkle root calculations
- Compare the block against predetermined checkpoints for expected timestamps
and difficulty based on elapsed time since the checkpoint
- Save the most recent orphan blocks for a limited time in case their parent
blocks become available
- Stop processing if the block is an orphan as the rest of the processing
depends on the block's position within the block chain
- Perform a series of more thorough checks that depend on the block's position
within the block chain such as verifying block difficulties adhere to
difficulty retarget rules, timestamps are after the median of the last
several blocks, all transactions are finalized, checkpoint blocks match, and
block versions are in line with the previous blocks
- Determine how the block fits into the chain and perform different actions
accordingly in order to ensure any side chains which have higher difficulty
than the main chain become the new main chain
- When a block is being connected to the main chain (either through
reorganization of a side chain to the main chain or just extending the
main chain), perform further checks on the block's transactions such as
verifying transaction duplicates, script complexity for the combination of
connected scripts, coinbase maturity, double spends, and connected
transaction values
- Run the transaction scripts to verify the spender is allowed to spend the
coins
- Insert the block into the block database
- Reject duplicate blocks
- Perform a series of sanity checks on the block and its transactions such as
verifying proof of work, timestamps, number and character of transactions,
transaction amounts, script complexity, and merkle root calculations
- Compare the block against predetermined checkpoints for expected timestamps
and difficulty based on elapsed time since the checkpoint
- Save the most recent orphan blocks for a limited time in case their parent
blocks become available
- Stop processing if the block is an orphan as the rest of the processing
depends on the block's position within the block chain
- Perform a series of more thorough checks that depend on the block's position
within the block chain such as verifying block difficulties adhere to
difficulty retarget rules, timestamps are after the median of the last
several blocks, all transactions are finalized, checkpoint blocks match, and
block versions are in line with the previous blocks
- Determine how the block fits into the chain and perform different actions
accordingly in order to ensure any side chains which have higher difficulty
than the main chain become the new main chain
- When a block is being connected to the main chain (either through
reorganization of a side chain to the main chain or just extending the
main chain), perform further checks on the block's transactions such as
verifying transaction duplicates, script complexity for the combination of
connected scripts, coinbase maturity, double spends, and connected
transaction values
- Run the transaction scripts to verify the spender is allowed to spend the
coins
- Insert the block into the block database
Errors
# Errors
Errors returned by this package are either the raw errors provided by underlying
calls or of type blockchain.RuleError. This allows the caller to differentiate
@ -70,12 +70,12 @@ violations through type assertions. In addition, callers can programmatically
determine the specific rule violation by examining the ErrorCode field of the
type asserted blockchain.RuleError.
Bitcoin Improvement Proposals
# Bitcoin Improvement Proposals
This package includes spec changes outlined by the following BIPs:
BIP0016 (https://en.bitcoin.it/wiki/BIP_0016)
BIP0030 (https://en.bitcoin.it/wiki/BIP_0030)
BIP0034 (https://en.bitcoin.it/wiki/BIP_0034)
BIP0016 (https://en.bitcoin.it/wiki/BIP_0016)
BIP0030 (https://en.bitcoin.it/wiki/BIP_0030)
BIP0034 (https://en.bitcoin.it/wiki/BIP_0034)
*/
package blockchain


@ -220,6 +220,10 @@ const (
// current chain tip. This is not a block validation rule, but is required
// for block proposals submitted via getblocktemplate RPC.
ErrPrevBlockNotBest
// ErrBadClaimTrie indicates the calculated ClaimTrie root does not match
// the expected value.
ErrBadClaimTrie
)
// Map of ErrorCode values back to their constant names for pretty printing.
@ -267,6 +271,7 @@ var errorCodeStrings = map[ErrorCode]string{
ErrPreviousBlockUnknown: "ErrPreviousBlockUnknown",
ErrInvalidAncestorBlock: "ErrInvalidAncestorBlock",
ErrPrevBlockNotBest: "ErrPrevBlockNotBest",
ErrBadClaimTrie: "ErrBadClaimTrie",
}
// String returns the ErrorCode as a human-readable name.


@ -10,11 +10,11 @@ import (
"os"
"path/filepath"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/database"
_ "github.com/btcsuite/btcd/database/ffldb"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/blockchain"
"github.com/lbryio/lbcd/chaincfg"
"github.com/lbryio/lbcd/database"
_ "github.com/lbryio/lbcd/database/ffldb"
btcutil "github.com/lbryio/lbcutil"
)
// This example demonstrates how to create a new chain instance and use
@ -69,7 +69,7 @@ func ExampleBlockChain_ProcessBlock() {
fmt.Printf("Block accepted. Is it an orphan?: %v", isOrphan)
// Output:
// Failed to process block: already have block 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f
// Failed to process block: already have block 9c89283ba0f3227f6c03b70216b9f665f0118d5e0fa729cedf4fb34d6a34f463
}
// This example demonstrates how to convert the compact "bits" in a block header


@ -12,15 +12,15 @@ import (
"path/filepath"
"testing"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/blockchain/fullblocktests"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/database"
_ "github.com/btcsuite/btcd/database/ffldb"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/blockchain"
"github.com/lbryio/lbcd/blockchain/fullblocktests"
"github.com/lbryio/lbcd/chaincfg"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/database"
_ "github.com/lbryio/lbcd/database/ffldb"
"github.com/lbryio/lbcd/txscript"
"github.com/lbryio/lbcd/wire"
btcutil "github.com/lbryio/lbcutil"
)
const (
@ -139,7 +139,7 @@ func TestFullBlocks(t *testing.T) {
// Create a new database and chain instance to run tests against.
chain, teardownFunc, err := chainSetup("fullblocktest",
&chaincfg.RegressionNetParams)
fullblocktests.FbRegressionNetParams)
if err != nil {
t.Errorf("Failed to setup chain instance: %v", err)
return


@ -1,9 +1,9 @@
fullblocktests
==============
[![Build Status](http://img.shields.io/travis/btcsuite/btcd.svg)](https://travis-ci.org/btcsuite/btcd)
[![Build Status](https://github.com/lbryio/lbcd/workflows/Build%20and%20Test/badge.svg)](https://github.com/lbryio/lbcd/actions)
[![ISC License](http://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](http://godoc.org/github.com/btcsuite/btcd/blockchain/fullblocktests)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](https://pkg.go.dev/github.com/lbryio/lbcd/blockchain/fullblocktests)
Package fullblocktests provides a set of full block tests to be used for testing
the consensus validation rules. The tests are intended to be flexible enough to
@ -20,7 +20,7 @@ of blocks that exercise the consensus validation rules.
## Installation and Updating
```bash
$ go get -u github.com/btcsuite/btcd/blockchain/fullblocktests
$ go get -u github.com/lbryio/lbcd/blockchain/fullblocktests
```
## License


@ -18,24 +18,24 @@ import (
"runtime"
"time"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/btcec"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/blockchain"
"github.com/lbryio/lbcd/btcec"
"github.com/lbryio/lbcd/chaincfg"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/txscript"
"github.com/lbryio/lbcd/wire"
btcutil "github.com/lbryio/lbcutil"
)
const (
// Intentionally defined here rather than using constants from codebase
// to ensure consensus changes are detected.
maxBlockSigOps = 20000
maxBlockSize = 1000000
maxBlockSize = 8000000
minCoinbaseScriptLen = 2
maxCoinbaseScriptLen = 100
medianTimeBlocks = 11
maxScriptElementSize = 520
maxScriptElementSize = 20000
// numLargeReorgBlocks is the number of blocks to use in the large block
// reorg test (when enabled). This is the equivalent of 1 week's worth
@ -342,10 +342,8 @@ func solveBlock(header *wire.BlockHeader) bool {
return
default:
hdr.Nonce = i
hash := hdr.BlockHash()
if blockchain.HashToBig(&hash).Cmp(
targetDifficulty) <= 0 {
hash := hdr.BlockPoWHash()
if blockchain.HashToBig(&hash).Cmp(targetDifficulty) <= 0 {
results <- sbResult{true, i}
return
}
@ -466,9 +464,9 @@ func createSpendTxForTx(tx *wire.MsgTx, fee btcutil.Amount) *wire.MsgTx {
// - A coinbase that pays the required subsidy to an OP_TRUE script
// - When a spendable output is provided:
// - A transaction that spends from the provided output the following outputs:
// - One that pays the inputs amount minus 1 atom to an OP_TRUE script
// - One that contains an OP_RETURN output with a random uint64 in order to
// ensure the transaction has a unique hash
// - One that pays the inputs amount minus 1 atom to an OP_TRUE script
// - One that contains an OP_RETURN output with a random uint64 in order to
// ensure the transaction has a unique hash
//
// Additionally, if one or more munge functions are specified, they will be
// invoked with the block prior to solving it. This provides callers with the
@ -811,7 +809,7 @@ func Generate(includeLargeReorg bool) (tests [][]TestInstance, err error) {
// Create a test generator instance initialized with the genesis block
// as the tip.
g, err := makeTestGenerator(regressionNetParams)
g, err := makeTestGenerator(FbRegressionNetParams)
if err != nil {
return nil, err
}
@ -1444,7 +1442,7 @@ func Generate(includeLargeReorg bool) (tests [][]TestInstance, err error) {
// Keep incrementing the nonce until the hash treated as
// a uint256 is higher than the limit.
b46.Header.Nonce++
blockHash := b46.BlockHash()
blockHash := b46.Header.BlockPoWHash()
hashNum := blockchain.HashToBig(&blockHash)
if hashNum.Cmp(g.params.PowLimit) >= 0 {
break
@ -1875,7 +1873,7 @@ func Generate(includeLargeReorg bool) (tests [][]TestInstance, err error) {
//
// Comment assumptions:
// maxBlockSigOps = 20000
// maxScriptElementSize = 520
// maxScriptElementSize = 20000
//
// [0-19999] : OP_CHECKSIG
// [20000] : OP_PUSHDATA4


@ -9,9 +9,9 @@ import (
"math/big"
"time"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/wire"
"github.com/lbryio/lbcd/chaincfg"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/wire"
)
// newHashFromStr converts the passed big-endian hex string into a
@ -54,6 +54,7 @@ var (
Version: 1,
PrevBlock: *newHashFromStr("0000000000000000000000000000000000000000000000000000000000000000"),
MerkleRoot: *newHashFromStr("4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b"),
ClaimTrie: chainhash.Hash{1}, // EmptyTrieHash
Timestamp: time.Unix(1296688602, 0), // 2011-02-02 23:16:42 +0000 UTC
Bits: 0x207fffff, // 545259519 [7fffff0000000000000000000000000000000000000000000000000000000000]
Nonce: 2,
@ -83,23 +84,25 @@ var (
LockTime: 0,
}},
}
regTestGenesisBlockHash = regTestGenesisBlock.BlockHash()
)
// regressionNetParams defines the network parameters for the regression test
// FbRegressionNetParams defines the network parameters for the regression test
// network.
//
// NOTE: The test generator intentionally does not use the existing definitions
// in the chaincfg package since the intent is to be able to generate known
// good tests which exercise that code. Using the chaincfg parameters would
// allow them to change out from under the tests potentially invalidating them.
var regressionNetParams = &chaincfg.Params{
var FbRegressionNetParams = &chaincfg.Params{
Name: "regtest",
Net: wire.TestNet,
DefaultPort: "18444",
// Chain parameters
GenesisBlock: &regTestGenesisBlock,
GenesisHash: newHashFromStr("5bec7567af40504e0994db3b573c186fffcc4edefe096ff2e58d00523bd7e8a6"),
GenesisHash: &regTestGenesisBlockHash,
PowLimit: regressionPowLimit,
PowLimitBits: 0x207fffff,
CoinbaseMaturity: 100,
@ -113,6 +116,7 @@ var regressionNetParams = &chaincfg.Params{
ReduceMinDifficulty: true,
MinDiffReductionTime: time.Minute * 20, // TargetTimePerBlock * 2
GenerateSupported: true,
MinerConfirmationWindow: 1,
// Checkpoints ordered from oldest to newest.
Checkpoints: nil,


@ -1,9 +1,9 @@
indexers
========
[![Build Status](https://travis-ci.org/btcsuite/btcd.png?branch=master)](https://travis-ci.org/btcsuite/btcd)
[![Build Status](https://github.com/lbryio/lbcd/workflows/Build%20and%20Test/badge.svg)](https://github.com/lbryio/lbcd/actions)
[![ISC License](http://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://godoc.org/github.com/btcsuite/btcd/blockchain/indexers?status.png)](http://godoc.org/github.com/btcsuite/btcd/blockchain/indexers)
[![GoDoc](https://pkg.go.dev/github.com/lbryio/lbcd/blockchain/indexers?status.png)](https://pkg.go.dev/github.com/lbryio/lbcd/blockchain/indexers)
Package indexers implements optional block chain indexes.
@ -23,7 +23,7 @@ via an RPC interface.
## Installation
```bash
$ go get -u github.com/btcsuite/btcd/blockchain/indexers
$ go get -u github.com/lbryio/lbcd/blockchain/indexers
```
## License


@ -9,13 +9,13 @@ import (
"fmt"
"sync"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/blockchain"
"github.com/lbryio/lbcd/chaincfg"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/database"
"github.com/lbryio/lbcd/txscript"
"github.com/lbryio/lbcd/wire"
btcutil "github.com/lbryio/lbcutil"
)
const (
@ -155,7 +155,9 @@ func serializeAddrIndexEntry(blockID uint32, txLoc wire.TxLoc) []byte {
// provided region struct according to the format described in detail above and
// uses the passed block hash fetching function in order to convert the block ID
// to the associated block hash.
func deserializeAddrIndexEntry(serialized []byte, region *database.BlockRegion, fetchBlockHash fetchBlockHashFunc) error {
func deserializeAddrIndexEntry(serialized []byte, region *database.BlockRegion,
fetchBlockHash fetchBlockHashFunc) error {
// Ensure there are enough bytes to decode.
if len(serialized) < txEntrySize {
return errDeserialize("unexpected end of data")
@ -182,7 +184,9 @@ func keyForLevel(addrKey [addrKeySize]byte, level uint8) [levelKeySize]byte {
// dbPutAddrIndexEntry updates the address index to include the provided entry
// according to the level-based scheme described in detail above.
func dbPutAddrIndexEntry(bucket internalBucket, addrKey [addrKeySize]byte, blockID uint32, txLoc wire.TxLoc) error {
func dbPutAddrIndexEntry(bucket internalBucket, addrKey [addrKeySize]byte,
blockID uint32, txLoc wire.TxLoc) error {
// Start with level 0 and its initial max number of entries.
curLevel := uint8(0)
maxLevelBytes := level0MaxEntries * txEntrySize
@ -253,7 +257,10 @@ func dbPutAddrIndexEntry(bucket internalBucket, addrKey [addrKeySize]byte, block
// the given address key and the number of entries skipped since it could have
// been less in the case where there are less total entries than the requested
// number of entries to skip.
func dbFetchAddrIndexEntries(bucket internalBucket, addrKey [addrKeySize]byte, numToSkip, numRequested uint32, reverse bool, fetchBlockHash fetchBlockHashFunc) ([]database.BlockRegion, uint32, error) {
func dbFetchAddrIndexEntries(bucket internalBucket, addrKey [addrKeySize]byte,
numToSkip, numRequested uint32, reverse bool,
fetchBlockHash fetchBlockHashFunc) ([]database.BlockRegion, uint32, error) {
// When the reverse flag is not set, all levels need to be fetched
// because numToSkip and numRequested are counted from the oldest
// transactions (highest level) and thus the total count is needed.
@ -356,7 +363,9 @@ func maxEntriesForLevel(level uint8) int {
// dbRemoveAddrIndexEntries removes the specified number of entries from
// the address index for the provided key. An assertion error will be returned
// if the count exceeds the total number of entries in the index.
func dbRemoveAddrIndexEntries(bucket internalBucket, addrKey [addrKeySize]byte, count int) error {
func dbRemoveAddrIndexEntries(bucket internalBucket, addrKey [addrKeySize]byte,
count int) error {
// Nothing to do if no entries are being deleted.
if count <= 0 {
return nil
@ -796,7 +805,9 @@ func (idx *AddrIndex) DisconnectBlock(dbTx database.Tx, block *btcutil.Block,
// that involve a given address.
//
// This function is safe for concurrent access.
func (idx *AddrIndex) TxRegionsForAddress(dbTx database.Tx, addr btcutil.Address, numToSkip, numRequested uint32, reverse bool) ([]database.BlockRegion, uint32, error) {
func (idx *AddrIndex) TxRegionsForAddress(dbTx database.Tx, addr btcutil.Address,
numToSkip, numRequested uint32, reverse bool) ([]database.BlockRegion, uint32, error) {
addrKey, err := addrToKey(addr)
if err != nil {
return nil, 0, err


@ -9,7 +9,7 @@ import (
"fmt"
"testing"
"github.com/btcsuite/btcd/wire"
"github.com/lbryio/lbcd/wire"
)
// addrIndexBucket provides a mock address index database bucket by implementing
@ -68,7 +68,7 @@ func (b *addrIndexBucket) printLevels(addrKey [addrKeySize]byte) string {
if !bytes.Equal(k[:levelOffset], addrKey[:]) {
continue
}
level := uint8(k[levelOffset])
level := k[levelOffset]
if level > highestLevel {
highestLevel = level
}
@ -105,7 +105,7 @@ func (b *addrIndexBucket) sanityCheck(addrKey [addrKeySize]byte, expectedTotal i
if !bytes.Equal(k[:levelOffset], addrKey[:]) {
continue
}
level := uint8(k[levelOffset])
level := k[levelOffset]
if level > highestLevel {
highestLevel = level
}


@ -9,7 +9,7 @@ import (
"time"
"github.com/btcsuite/btclog"
"github.com/btcsuite/btcutil"
btcutil "github.com/lbryio/lbcutil"
)
// blockProgressLogger provides periodic logging for other services in order
@ -27,8 +27,9 @@ type blockProgressLogger struct {
// newBlockProgressLogger returns a new block progress logger.
// The progress message is templated as follows:
// {progressAction} {numProcessed} {blocks|block} in the last {timePeriod}
// ({numTxs}, height {lastBlockHeight}, {lastBlockTimeStamp})
//
// {progressAction} {numProcessed} {blocks|block} in the last {timePeriod}
// ({numTxs}, height {lastBlockHeight}, {lastBlockTimeStamp})
func newBlockProgressLogger(progressMessage string, logger btclog.Logger) *blockProgressLogger {
return &blockProgressLogger{
lastBlockLogTime: time.Now(),


@ -7,14 +7,14 @@ package indexers
import (
"errors"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/btcsuite/btcutil/gcs"
"github.com/btcsuite/btcutil/gcs/builder"
"github.com/lbryio/lbcd/blockchain"
"github.com/lbryio/lbcd/chaincfg"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/database"
"github.com/lbryio/lbcd/wire"
btcutil "github.com/lbryio/lbcutil"
"github.com/lbryio/lbcutil/gcs"
"github.com/lbryio/lbcutil/gcs/builder"
)
const (


@ -11,9 +11,9 @@ import (
"encoding/binary"
"errors"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/blockchain"
"github.com/lbryio/lbcd/database"
btcutil "github.com/lbryio/lbcutil"
)
var (


@ -8,11 +8,11 @@ import (
"bytes"
"fmt"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/blockchain"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/database"
"github.com/lbryio/lbcd/wire"
btcutil "github.com/lbryio/lbcutil"
)
var (


@ -8,11 +8,11 @@ import (
"errors"
"fmt"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/blockchain"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/database"
"github.com/lbryio/lbcd/wire"
btcutil "github.com/lbryio/lbcutil"
)
const (


@ -183,7 +183,7 @@ func (m *medianTime) AddTimeSample(sourceID string, timeVal time.Time) {
// Warn if none of the time samples are close.
if !remoteHasCloseTime {
log.Warnf("Please check your date and time " +
"are correct! btcd will not work " +
"are correct! lbcd will not work " +
"properly with an invalid time")
}
}


@ -9,9 +9,10 @@ import (
"fmt"
"math"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/txscript"
"github.com/lbryio/lbcd/wire"
btcutil "github.com/lbryio/lbcutil"
)
const (
@ -86,7 +87,7 @@ func HashMerkleBranches(left *chainhash.Hash, right *chainhash.Hash) *chainhash.
//
// The above stored as a linear array is as follows:
//
// [h1 h2 h3 h4 h12 h34 root]
// [h1 h2 h3 h4 h12 h34 root]
//
// As the above shows, the merkle root is always the last element in the array.
//
@ -227,11 +228,25 @@ func ValidateWitnessCommitment(blk *btcutil.Block) error {
// coinbase transaction MUST have exactly one witness element within
// its witness data and that element must be exactly
// CoinbaseWitnessDataLen bytes.
//
// Some popular pool software, for example yiimp, uses the pre-BIP0141
// coinbase structure. In this case, we not only accept it but also
// convert it into the post-BIP0141 format.
if len(coinbaseTx.MsgTx().TxIn[0].Witness) == 0 {
log.Infof("pre-BIP0141 coinbase transaction detected. Height: %d", blk.Height())
var witnessNonce [CoinbaseWitnessDataLen]byte
coinbaseTx.MsgTx().TxIn[0].Witness = wire.TxWitness{witnessNonce[:]}
blk.MsgBlock().Transactions[0].TxIn[0].Witness = wire.TxWitness{witnessNonce[:]}
// Clear cached serialized block.
blk.SetBytes(nil)
}
coinbaseWitness := coinbaseTx.MsgTx().TxIn[0].Witness
if len(coinbaseWitness) != 1 {
str := fmt.Sprintf("the coinbase transaction has %d items in "+
"its witness stack when only one is allowed",
len(coinbaseWitness))
"its witness stack when only one is allowed. Height: %d",
len(coinbaseWitness), blk.Height())
return ruleError(ErrInvalidWitnessCommitment, str)
}
witnessNonce := coinbaseWitness[0]


@ -6,16 +6,14 @@ package blockchain
import (
"testing"
"github.com/btcsuite/btcutil"
)
// TestMerkle tests the BuildMerkleTreeStore API.
func TestMerkle(t *testing.T) {
block := btcutil.NewBlock(&Block100000)
block := GetBlock100000()
merkles := BuildMerkleTreeStore(block.Transactions(), false)
calculatedMerkleRoot := merkles[len(merkles)-1]
wantMerkle := &Block100000.Header.MerkleRoot
wantMerkle := block.MsgBlock().Header.MerkleRoot
if !wantMerkle.IsEqual(calculatedMerkleRoot) {
t.Errorf("BuildMerkleTreeStore: merkle root mismatch - "+
"got %v, want %v", calculatedMerkleRoot, wantMerkle)


@ -50,9 +50,9 @@ func (n NotificationType) String() string {
// Notification defines notification that is sent to the caller via the callback
// function provided during the call to New and consists of a notification type
// as well as associated data that depends on the type as follows:
// - NTBlockAccepted: *btcutil.Block
// - NTBlockConnected: *btcutil.Block
// - NTBlockDisconnected: *btcutil.Block
// - NTBlockAccepted: *btcutil.Block
// - NTBlockConnected: *btcutil.Block
// - NTBlockDisconnected: *btcutil.Block
type Notification struct {
Type NotificationType
Data interface{}


@ -1,51 +0,0 @@
// Copyright (c) 2017 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"testing"
"github.com/btcsuite/btcd/chaincfg"
)
// TestNotifications ensures that notification callbacks are fired on events.
func TestNotifications(t *testing.T) {
blocks, err := loadBlocks("blk_0_to_4.dat.bz2")
if err != nil {
t.Fatalf("Error loading file: %v\n", err)
}
// Create a new database and chain instance to run tests against.
chain, teardownFunc, err := chainSetup("notifications",
&chaincfg.MainNetParams)
if err != nil {
t.Fatalf("Failed to setup chain instance: %v", err)
}
defer teardownFunc()
notificationCount := 0
callback := func(notification *Notification) {
if notification.Type == NTBlockAccepted {
notificationCount++
}
}
// Register callback multiple times then assert it is called that many
// times.
const numSubscribers = 3
for i := 0; i < numSubscribers; i++ {
chain.Subscribe(callback)
}
_, _, err = chain.ProcessBlock(blocks[1], BFNone)
if err != nil {
t.Fatalf("ProcessBlock fail on block 1: %v\n", err)
}
if notificationCount != numSubscribers {
t.Fatalf("Expected notification callback to be executed %d "+
"times, found %d", numSubscribers, notificationCount)
}
}


@ -8,9 +8,9 @@ import (
"fmt"
"time"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/database"
btcutil "github.com/lbryio/lbcutil"
)
// BehaviorFlags is a bitmask defining tweaks to the normal behavior when
@ -29,6 +29,10 @@ const (
// not be performed.
BFNoPoWCheck
// BFNoDupBlockCheck signals that the duplicate block existence checks
// should be skipped.
BFNoDupBlockCheck
// BFNone is a convenience value to specifically indicate no flags.
BFNone BehaviorFlags = 0
)
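
Since BehaviorFlags is a bitmask, callers combine flags with bitwise OR and test them the same way ProcessBlock does below; a small standalone sketch (the concrete flag values here are illustrative stand-ins, the real ones come from the package's iota-based const block):

```go
package main

import "fmt"

type BehaviorFlags uint32

const (
	// Illustrative stand-ins for the package's flags.
	BFNoPoWCheck BehaviorFlags = 1 << iota
	BFNoDupBlockCheck
	BFNone BehaviorFlags = 0
)

func main() {
	flags := BFNoPoWCheck | BFNoDupBlockCheck

	// Same pattern ProcessBlock uses before the duplicate-block checks.
	if flags&BFNoDupBlockCheck == BFNoDupBlockCheck {
		fmt.Println("skipping duplicate-block existence checks")
	}
	if flags&BFNoPoWCheck == BFNoPoWCheck {
		fmt.Println("skipping proof-of-work check")
	}
}
```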
@ -148,24 +152,26 @@ func (b *BlockChain) ProcessBlock(block *btcutil.Block, flags BehaviorFlags) (bo
blockHash := block.Hash()
log.Tracef("Processing block %v", blockHash)
// The block must not already exist in the main chain or side chains.
exists, err := b.blockExists(blockHash)
if err != nil {
return false, false, err
}
if exists {
str := fmt.Sprintf("already have block %v", blockHash)
return false, false, ruleError(ErrDuplicateBlock, str)
}
if flags&BFNoDupBlockCheck != BFNoDupBlockCheck {
// The block must not already exist in the main chain or side chains.
exists, err := b.blockExists(blockHash)
if err != nil {
return false, false, err
}
if exists {
str := fmt.Sprintf("already have block %v", blockHash)
return false, false, ruleError(ErrDuplicateBlock, str)
}
// The block must not already exist as an orphan.
if _, exists := b.orphans[*blockHash]; exists {
str := fmt.Sprintf("already have block (orphan) %v", blockHash)
return false, false, ruleError(ErrDuplicateBlock, str)
// The block must not already exist as an orphan.
if _, exists := b.orphans[*blockHash]; exists {
str := fmt.Sprintf("already have block (orphan) %v", blockHash)
return false, false, ruleError(ErrDuplicateBlock, str)
}
}
// Perform preliminary sanity checks on the block and its transactions.
err = checkBlockSanity(block, b.chainParams.PowLimit, b.timeSource, flags)
err := checkBlockSanity(block, b.chainParams.PowLimit, b.timeSource, flags)
if err != nil {
return false, false, err
}


@ -10,9 +10,9 @@ import (
"runtime"
"time"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/txscript"
"github.com/lbryio/lbcd/wire"
btcutil "github.com/lbryio/lbcutil"
)
// txValidateItem holds a transaction along with which input to validate.

View file

@ -6,17 +6,14 @@ package blockchain
import (
"fmt"
"runtime"
"testing"
"github.com/btcsuite/btcd/txscript"
"github.com/lbryio/lbcd/txscript"
)
// TestCheckBlockScripts ensures that validating all of the scripts in a
// known-good block doesn't return an error.
func TestCheckBlockScripts(t *testing.T) {
runtime.GOMAXPROCS(runtime.NumCPU())
testBlockNum := 277647
blockDataFile := fmt.Sprintf("%d.dat.bz2", testBlockNum)
blocks, err := loadBlocks(blockDataFile)

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

View file

@ -1,180 +0,0 @@
File path: reorgTest/blk_0_to_4.dat
Block 0:
f9beb4d9
1d010000
01000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 3ba3edfd 7a7b12b2 7ac72c3e 67768f61 7fc81bc3 888a5132 3a9fb8aa
4b1e5e4a 29ab5f49 ffff001d 1dac2b7c
01
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00ffffff ff4d04ff ff001d01 04455468 65205469 6d657320 30332f4a
616e2f32 30303920 4368616e 63656c6c 6f72206f 6e206272 696e6b20 6f662073
65636f6e 64206261 696c6f75 7420666f 72206261 6e6b73ff ffffff01 00f2052a
01000000 43410467 8afdb0fe 55482719 67f1a671 30b7105c d6a828e0 3909a679
62e0ea1f 61deb649 f6bc3f4c ef38c4f3 5504e51e c112de5c 384df7ba 0b8d578a
4c702b6b f11d5fac 00000000
Block 1:
f9beb4d9
d4000000
01000000 6fe28c0a b6f1b372 c1a6a246 ae63f74f 931e8365 e15a089c 68d61900
00000000 3bbd67ad e98fbbb7 0718cd80 f9e9acf9 3b5fae91 7bb2b41d 4c3bb82c
77725ca5 81ad5f49 ffff001d 44e69904
01
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00ffffff ff04722f 2e2bffff ffff0100 f2052a01 00000043 41046868
0737c76d abb801cb 2204f57d be4e4579 e4f710cd 67dc1b42 27592c81 e9b5cf02
b5ac9e8b 4c9f49be 5251056b 6a6d011e 4c37f6b6 d17ede6b 55faa235 19e2ac00
000000
Block 2:
f9beb4d9
95010000
01000000 13ca7940 4c11c63e ca906bbd f190b751 2872b857 1b5143ae e8cb5737
00000000 fc07c983 d7391736 0aeda657 29d0d4d3 2533eb84 76ee9d64 aa27538f
9b4fc00a d9af5f49 ffff001d 630bea22
02
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00ffffff ff04eb96 14e5ffff ffff0100 f2052a01 00000043 41046868
0737c76d abb801cb 2204f57d be4e4579 e4f710cd 67dc1b42 27592c81 e9b5cf02
b5ac9e8b 4c9f49be 5251056b 6a6d011e 4c37f6b6 d17ede6b 55faa235 19e2ac00
000000
01000000 0163451d 1002611c 1388d5ba 4ddfdf99 196a86b5 990fb5b0 dc786207
4fdcb8ee d2000000 004a4930 46022100 3dde52c6 5e339f45 7fe1015e 70eed208
872eb71e dd484c07 206b190e cb2ec3f8 02210011 c78dcfd0 3d43fa63 61242a33
6291ba2a 8c1ef5bc d5472126 2468f2bf 8dee4d01 ffffffff 0200ca9a 3b000000
001976a9 14cb2abd e8bccacc 32e893df 3a054b9e f7f227a4 ce88ac00 286bee00
00000019 76a914ee 26c56fc1 d942be8d 7a24b2a1 001dd894 69398088 ac000000
00
Block 3:
f9beb4d9
96020000
01000000 7d338254 0506faab 0d4cf179 45dda023 49db51f9 6233f24c 28002258
00000000 4806fe80 bf85931b 882ea645 77ca5a03 22bb8af2 3f277b20 55f160cd
972c8e8b 31b25f49 ffff001d e8f0c653
03
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00ffffff ff044abd 8159ffff ffff0100 f2052a01 00000043 4104b95c
249d84f4 17e3e395 a1274254 28b54067 1cc15881 eb828c17 b722a53f c599e21c
a5e56c90 f340988d 3933acc7 6beb832f d64cab07 8ddf3ce7 32923031 d1a8ac00
000000
01000000 01f287b5 e067e1cf 80f7da8a f89917b5 505094db d82412d9 35b665eb
bad253d3 77010000 008c4930 46022100 96ee0d02 b35fd61e 4960b44f f396f67e
01fe17f9 de4e0c17 b6a963bd ab2b50a6 02210034 920d4daa 7e9f8abe 5675c931
495809f9 0b9c1189 d05fbaf1 dd6696a5 b0d8f301 41046868 0737c76d abb801cb
2204f57d be4e4579 e4f710cd 67dc1b42 27592c81 e9b5cf02 b5ac9e8b 4c9f49be
5251056b 6a6d011e 4c37f6b6 d17ede6b 55faa235 19e2ffff ffff0100 286bee00
00000019 76a914c5 22664fb0 e55cdc5c 0cea73b4 aad97ec8 34323288 ac000000
00
01000000 01f287b5 e067e1cf 80f7da8a f89917b5 505094db d82412d9 35b665eb
bad253d3 77000000 008c4930 46022100 b08b922a c4bde411 1c229f92 9fe6eb6a
50161f98 1f4cf47e a9214d35 bf74d380 022100d2 f6640327 e677a1e1 cc474991
b9a48ba5 bd1e0c94 d1c8df49 f7b0193b 7ea4fa01 4104b95c 249d84f4 17e3e395
a1274254 28b54067 1cc15881 eb828c17 b722a53f c599e21c a5e56c90 f340988d
3933acc7 6beb832f d64cab07 8ddf3ce7 32923031 d1a8ffff ffff0100 ca9a3b00
00000019 76a914c5 22664fb0 e55cdc5c 0cea73b4 aad97ec8 34323288 ac000000
00
Block 4:
f9beb4d9
73010000
01000000 5da36499 06f35e09 9be42a1d 87b6dd42 11bc1400 6c220694 0807eaae
00000000 48eeeaed 2d9d8522 e6201173 743823fd 4b87cd8a ca8e6408 ec75ca38
302c2ff0 89b45f49 ffff001d 00530839
02
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00ffffff ff04d41d 2213ffff ffff0100 f2052a01 00000043 4104678a
fdb0fe55 48271967 f1a67130 b7105cd6 a828e039 09a67962 e0ea1f61 deb649f6
bc3f4cef 38c4f355 04e51ec1 12de5c38 4df7ba0b 8d578a4c 702b6bf1 1d5fac00
000000
01000000 0163451d 1002611c 1388d5ba 4ddfdf99 196a86b5 990fb5b0 dc786207
4fdcb8ee d2000000 004a4930 46022100 8c8fd57b 48762135 8d8f3e69 19f33e08
804736ff 83db47aa 248512e2 6df9b8ba 022100b0 c59e5ee7 bfcbfcd1 a4d83da9
55fb260e fda7f42a 25522625 a3d6f2d9 1174a701 ffffffff 0100f205 2a010000
001976a9 14c52266 4fb0e55c dc5c0cea 73b4aad9 7ec83432 3288ac00 000000
File path: reorgTest/blk_3A.dat
Block 3A:
f9beb4d9
96020000
01000000 7d338254 0506faab 0d4cf179 45dda023 49db51f9 6233f24c 28002258
00000000 5a15f573 1177a353 bdca7aab 20e16624 dfe90adc 70accadc 68016732
302c20a7 31b25f49 ffff001d 6a901440
03
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00ffffff ff04ad1b e7d5ffff ffff0100 f2052a01 00000043 4104ed83
704c95d8 29046f1a c2780621 1132102c 34e9ac7f fa1b7111 0658e5b9 d1bdedc4
16f5cefc 1db0625c d0c75de8 192d2b59 2d7e3b00 bcfb4a0e 860d880f d1fcac00
000000
01000000 01f287b5 e067e1cf 80f7da8a f89917b5 505094db d82412d9 35b665eb
bad253d3 77010000 008c4930 46022100 96ee0d02 b35fd61e 4960b44f f396f67e
01fe17f9 de4e0c17 b6a963bd ab2b50a6 02210034 920d4daa 7e9f8abe 5675c931
495809f9 0b9c1189 d05fbaf1 dd6696a5 b0d8f301 41046868 0737c76d abb801cb
2204f57d be4e4579 e4f710cd 67dc1b42 27592c81 e9b5cf02 b5ac9e8b 4c9f49be
5251056b 6a6d011e 4c37f6b6 d17ede6b 55faa235 19e2ffff ffff0100 286bee00
00000019 76a914c5 22664fb0 e55cdc5c 0cea73b4 aad97ec8 34323288 ac000000
00
01000000 01f287b5 e067e1cf 80f7da8a f89917b5 505094db d82412d9 35b665eb
bad253d3 77000000 008c4930 46022100 9cc67ddd aa6f592a 6b2babd4 d6ff954f
25a784cf 4fe4bb13 afb9f49b 08955119 022100a2 d99545b7 94080757 fcf2b563
f2e91287 86332f46 0ec6b90f f085fb28 41a69701 4104b95c 249d84f4 17e3e395
a1274254 28b54067 1cc15881 eb828c17 b722a53f c599e21c a5e56c90 f340988d
3933acc7 6beb832f d64cab07 8ddf3ce7 32923031 d1a8ffff ffff0100 ca9a3b00
00000019 76a914ee 26c56fc1 d942be8d 7a24b2a1 001dd894 69398088 ac000000
00
File path: reorgTest/blk_4A.dat
Block 4A:
f9beb4d9
d4000000
01000000 aae77468 2205667d 4f413a58 47cc8fe8 9795f1d5 645d5b24 1daf3c92
00000000 361c9cde a09637a0 d0c05c3b 4e7a5d91 9edb184a 0a4c7633 d92e2ddd
f04cb854 89b45f49 ffff001d 9e9aa1e8
01
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00ffffff ff0401b8 f3eaffff ffff0100 f2052a01 00000043 4104678a
fdb0fe55 48271967 f1a67130 b7105cd6 a828e039 09a67962 e0ea1f61 deb649f6
bc3f4cef 38c4f355 04e51ec1 12de5c38 4df7ba0b 8d578a4c 702b6bf1 1d5fac00
000000
File path: reorgTest/blk_5A.dat
Block 5A:
f9beb4d9
73010000
01000000 ebc7d0de 9c31a71b 7f41d275 2c080ba4 11e1854b d45cb2cf 8c1e4624
00000000 a607774b 79b8eb50 b52a5a32 c1754281 ec67f626 9561df28 57d1fe6a
ea82c696 e1b65f49 ffff001d 4a263577
02
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00ffffff ff049971 0c7dffff ffff0100 f2052a01 00000043 4104678a
fdb0fe55 48271967 f1a67130 b7105cd6 a828e039 09a67962 e0ea1f61 deb649f6
bc3f4cef 38c4f355 04e51ec1 12de5c38 4df7ba0b 8d578a4c 702b6bf1 1d5fac00
000000
01000000 0163451d 1002611c 1388d5ba 4ddfdf99 196a86b5 990fb5b0 dc786207
4fdcb8ee d2000000 004a4930 46022100 8c8fd57b 48762135 8d8f3e69 19f33e08
804736ff 83db47aa 248512e2 6df9b8ba 022100b0 c59e5ee7 bfcbfcd1 a4d83da9
55fb260e fda7f42a 25522625 a3d6f2d9 1174a701 ffffffff 0100f205 2a010000
001976a9 14c52266 4fb0e55c dc5c0cea 73b4aad9 7ec83432 3288ac00 000000

View file

@ -7,7 +7,7 @@ package blockchain
import (
"fmt"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/chaincfg/chainhash"
)
// ThresholdState define the various threshold states used when voting on
@ -302,6 +302,12 @@ func (b *BlockChain) deploymentState(prevNode *blockNode, deploymentID uint32) (
}
deployment := &b.chainParams.Deployments[deploymentID]
// added to mimic LBRYcrd:
if deployment.ForceActiveAt > 0 && prevNode != nil && prevNode.height+1 >= deployment.ForceActiveAt {
return ThresholdActive, nil
}
checker := deploymentChecker{deployment: deployment, chain: b}
cache := &b.deploymentCaches[deploymentID]
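The short-circuit above keys off the deployment's ForceActiveAt parameter. As a hedged illustration (the custom-net setup is hypothetical; the field and deployment names come from this codebase), forcing a deployment active from a fixed height looks roughly like:

```go
// Copy the regression-test params and force segwit active from height 150;
// deploymentState then reports ThresholdActive from that height onward.
params := chaincfg.RegressionNetParams
params.Deployments[chaincfg.DeploymentSegwit].ForceActiveAt = 150
```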
@ -310,7 +316,7 @@ func (b *BlockChain) deploymentState(prevNode *blockNode, deploymentID uint32) (
// initThresholdCaches initializes the threshold state caches for each warning
// bit and defined deployment and provides warnings if the chain is current per
// the warnUnknownVersions and warnUnknownRuleActivations functions.
// the warnUnknownRuleActivations function.
func (b *BlockChain) initThresholdCaches() error {
// Initialize the warning and deployment caches by calculating the
// threshold state for each of them. This will ensure the caches are
@ -335,15 +341,9 @@ func (b *BlockChain) initThresholdCaches() error {
}
}
// No warnings about unknown rules or versions until the chain is
// current.
// No warnings about unknown rules until the chain is current.
if b.isCurrent() {
// Warn if a high enough percentage of the last blocks have
// unexpected versions.
bestNode := b.bestChain.Tip()
if err := b.warnUnknownVersions(bestNode); err != nil {
return err
}
// Warn if any unknown new rules are either about to activate or
// have already been activated.

View file

@ -7,7 +7,7 @@ package blockchain
import (
"testing"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/chaincfg/chainhash"
)
// TestThresholdStateStringer tests the stringized output for the

View file

@ -11,9 +11,9 @@ import (
"fmt"
"time"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcd/wire"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/database"
"github.com/lbryio/lbcd/wire"
)
const (
@ -232,24 +232,25 @@ func determineMainChainBlocks(blocksMap map[chainhash.Hash]*blockChainContext, t
//
// The legacy format is as follows:
//
// <version><height><header code><unspentness bitmap>[<compressed txouts>,...]
// <version><height><header code><unspentness bitmap>[<compressed txouts>,...]
//
// Field Type Size
// version VLQ variable
// block height VLQ variable
// header code VLQ variable
// unspentness bitmap []byte variable
// compressed txouts
// compressed amount VLQ variable
// compressed script []byte variable
// Field Type Size
// version VLQ variable
// block height VLQ variable
// header code VLQ variable
// unspentness bitmap []byte variable
// compressed txouts
// compressed amount VLQ variable
// compressed script []byte variable
//
// The serialized header code format is:
// bit 0 - containing transaction is a coinbase
// bit 1 - output zero is unspent
// bit 2 - output one is unspent
// bits 3-x - number of bytes in unspentness bitmap. When both bits 1 and 2
// are unset, it encodes N-1 since there must be at least one unspent
// output.
//
// bit 0 - containing transaction is a coinbase
// bit 1 - output zero is unspent
// bit 2 - output one is unspent
// bits 3-x - number of bytes in unspentness bitmap. When both bits 1 and 2
// are unset, it encodes N-1 since there must be at least one unspent
// output.
//
// The rationale for the header code scheme is as follows:
// - Transactions which only pay to a single output and a change output are
@ -269,65 +270,65 @@ func determineMainChainBlocks(blocksMap map[chainhash.Hash]*blockChainContext, t
// From tx in main blockchain:
// Blk 1, 0e3e2357e806b6cdb1f70b54c3a3a17b6714ee1f0e68bebb44a74b1efd512098
//
// 010103320496b538e853519c726a2c91e61ec11600ae1390813a627c66fb8be7947be63c52
// <><><><------------------------------------------------------------------>
// | | \--------\ |
// | height | compressed txout 0
// version header code
// 010103320496b538e853519c726a2c91e61ec11600ae1390813a627c66fb8be7947be63c52
// <><><><------------------------------------------------------------------>
// | | \--------\ |
// | height | compressed txout 0
// version header code
//
// - version: 1
// - height: 1
// - header code: 0x03 (coinbase, output zero unspent, 0 bytes of unspentness)
// - unspentness: Nothing since it is zero bytes
// - compressed txout 0:
// - 0x32: VLQ-encoded compressed amount for 5000000000 (50 BTC)
// - 0x04: special script type pay-to-pubkey
// - 0x96...52: x-coordinate of the pubkey
// - version: 1
// - height: 1
// - header code: 0x03 (coinbase, output zero unspent, 0 bytes of unspentness)
// - unspentness: Nothing since it is zero bytes
// - compressed txout 0:
// - 0x32: VLQ-encoded compressed amount for 5000000000 (50 BTC)
// - 0x04: special script type pay-to-pubkey
// - 0x96...52: x-coordinate of the pubkey
//
// Example 2:
// From tx in main blockchain:
// Blk 113931, 4a16969aa4764dd7507fc1de7f0baa4850a246de90c45e59a3207f9a26b5036f
//
// 0185f90b0a011200e2ccd6ec7c6e2e581349c77e067385fa8236bf8a800900b8025be1b3efc63b0ad48e7f9f10e87544528d58
// <><----><><><------------------------------------------><-------------------------------------------->
// | | | \-------------------\ | |
// version | \--------\ unspentness | compressed txout 2
// height header code compressed txout 0
// 0185f90b0a011200e2ccd6ec7c6e2e581349c77e067385fa8236bf8a800900b8025be1b3efc63b0ad48e7f9f10e87544528d58
// <><----><><><------------------------------------------><-------------------------------------------->
// | | | \-------------------\ | |
// version | \--------\ unspentness | compressed txout 2
// height header code compressed txout 0
//
// - version: 1
// - height: 113931
// - header code: 0x0a (output zero unspent, 1 byte in unspentness bitmap)
// - unspentness: [0x01] (bit 0 is set, so output 0+2 = 2 is unspent)
// NOTE: It's +2 since the first two outputs are encoded in the header code
// - compressed txout 0:
// - 0x12: VLQ-encoded compressed amount for 20000000 (0.2 BTC)
// - 0x00: special script type pay-to-pubkey-hash
// - 0xe2...8a: pubkey hash
// - compressed txout 2:
// - 0x8009: VLQ-encoded compressed amount for 15000000 (0.15 BTC)
// - 0x00: special script type pay-to-pubkey-hash
// - 0xb8...58: pubkey hash
// - version: 1
// - height: 113931
// - header code: 0x0a (output zero unspent, 1 byte in unspentness bitmap)
// - unspentness: [0x01] (bit 0 is set, so output 0+2 = 2 is unspent)
// NOTE: It's +2 since the first two outputs are encoded in the header code
// - compressed txout 0:
// - 0x12: VLQ-encoded compressed amount for 20000000 (0.2 BTC)
// - 0x00: special script type pay-to-pubkey-hash
// - 0xe2...8a: pubkey hash
// - compressed txout 2:
// - 0x8009: VLQ-encoded compressed amount for 15000000 (0.15 BTC)
// - 0x00: special script type pay-to-pubkey-hash
// - 0xb8...58: pubkey hash
//
// Example 3:
// From tx in main blockchain:
// Blk 338156, 1b02d1c8cfef60a189017b9a420c682cf4a0028175f2f563209e4ff61c8c3620
//
// 0193d06c100000108ba5b9e763011dd46a006572d820e448e12d2bbb38640bc718e6
// <><----><><----><-------------------------------------------------->
// | | | \-----------------\ |
// version | \--------\ unspentness |
// height header code compressed txout 22
// 0193d06c100000108ba5b9e763011dd46a006572d820e448e12d2bbb38640bc718e6
// <><----><><----><-------------------------------------------------->
// | | | \-----------------\ |
// version | \--------\ unspentness |
// height header code compressed txout 22
//
// - version: 1
// - height: 338156
// - header code: 0x10 (2+1 = 3 bytes in unspentness bitmap)
// NOTE: It's +1 since neither bit 1 nor 2 are set, so N-1 is encoded.
// - unspentness: [0x00 0x00 0x10] (bit 20 is set, so output 20+2 = 22 is unspent)
// NOTE: It's +2 since the first two outputs are encoded in the header code
// - compressed txout 22:
// - 0x8ba5b9e763: VLQ-encoded compressed amount for 366875659 (3.66875659 BTC)
// - 0x01: special script type pay-to-script-hash
// - 0x1d...e6: script hash
// - version: 1
// - height: 338156
// - header code: 0x10 (2+1 = 3 bytes in unspentness bitmap)
// NOTE: It's +1 since neither bit 1 nor 2 are set, so N-1 is encoded.
// - unspentness: [0x00 0x00 0x10] (bit 20 is set, so output 20+2 = 22 is unspent)
// NOTE: It's +2 since the first two outputs are encoded in the header code
// - compressed txout 22:
// - 0x8ba5b9e763: VLQ-encoded compressed amount for 366875659 (3.66875659 BTC)
// - 0x01: special script type pay-to-script-hash
// - 0x1d...e6: script hash
func deserializeUtxoEntryV0(serialized []byte) (map[uint32]*UtxoEntry, error) {
// Deserialize the version.
//
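To make the header-code rules above concrete, here is a small illustrative helper (not part of lbcd's API; the name is made up for this sketch) that unpacks a legacy header code exactly as described:

```go
// decodeLegacyHeaderCode splits a legacy UTXO header code: bit 0 is the
// coinbase flag, bits 1 and 2 flag outputs zero and one as unspent, and the
// remaining bits give the unspentness bitmap size, stored as N-1 whenever
// neither output flag is set (since at least one output must be unspent).
func decodeLegacyHeaderCode(code uint64) (isCoinBase, out0, out1 bool, bitmapBytes uint64) {
	isCoinBase = code&0x01 != 0
	out0 = code&0x02 != 0
	out1 = code&0x04 != 0
	bitmapBytes = code >> 3
	if !out0 && !out1 {
		bitmapBytes++
	}
	return
}
```

Running it against the examples above gives 0 bytes of unspentness for header code 0x03, 1 byte for 0x0a, and 3 bytes for 0x10, matching the annotations.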

View file

@ -7,11 +7,11 @@ package blockchain
import (
"fmt"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/database"
"github.com/lbryio/lbcd/txscript"
"github.com/lbryio/lbcd/wire"
btcutil "github.com/lbryio/lbcutil"
)
// txoFlags is a bitmask defining additional information and state for a
@ -111,6 +111,22 @@ func (entry *UtxoEntry) Clone() *UtxoEntry {
}
}
// NewUtxoEntry returns a new UtxoEntry built from the arguments.
func NewUtxoEntry(
txOut *wire.TxOut, blockHeight int32, isCoinbase bool) *UtxoEntry {
var cbFlag txoFlags
if isCoinbase {
cbFlag |= tfCoinBase
}
return &UtxoEntry{
amount: txOut.Value,
pkScript: txOut.PkScript,
blockHeight: blockHeight,
packedFlags: cbFlag,
}
}
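A hedged usage sketch of the new constructor (txOut and height are assumed to be in scope):

```go
// Build a viewpoint entry for a regular (non-coinbase) output; passing true
// instead would set the tfCoinBase flag shown above.
entry := blockchain.NewUtxoEntry(txOut, height, false)
_ = entry
```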
// UtxoViewpoint represents a view into the set of unspent transaction outputs
// from a specific point of view in the chain. For example, it could be for
// the end of the main chain, some point in the history of the main chain, or

View file

@ -11,11 +11,11 @@ import (
"math/big"
"time"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/chaincfg"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/txscript"
"github.com/lbryio/lbcd/wire"
btcutil "github.com/lbryio/lbcutil"
)
const (
@ -40,7 +40,7 @@ const (
// baseSubsidy is the starting subsidy amount for mined blocks. This
// value is halved every SubsidyHalvingInterval blocks.
baseSubsidy = 50 * btcutil.SatoshiPerBitcoin
baseSubsidy = 500 * btcutil.SatoshiPerBitcoin
)
var (
@ -192,17 +192,44 @@ func isBIP0030Node(node *blockNode) bool {
// At the target block generation rate for the main network, this is
// approximately every 4 years.
func CalcBlockSubsidy(height int32, chainParams *chaincfg.Params) int64 {
if chainParams.SubsidyReductionInterval == 0 {
return baseSubsidy
h := int64(height)
if h == 0 {
return btcutil.SatoshiPerBitcoin * 4e8
}
if h <= 5100 {
return btcutil.SatoshiPerBitcoin
}
if h <= 55000 {
return btcutil.SatoshiPerBitcoin * (1 + (h-5001)/100)
}
// Equivalent to: baseSubsidy / 2^(height/subsidyHalvingInterval)
return baseSubsidy >> uint(height/chainParams.SubsidyReductionInterval)
lv := (h - 55001) / int64(chainParams.SubsidyReductionInterval)
reduction := (int64(math.Sqrt((float64(8*lv))+1)) - 1) / 2
for !withinLevelBounds(reduction, lv) {
if ((reduction*reduction + reduction) >> 1) > lv {
reduction--
} else {
reduction++
}
}
subsidyReduction := btcutil.SatoshiPerBitcoin * reduction
if subsidyReduction >= baseSubsidy {
return 0
}
return baseSubsidy - subsidyReduction
}
func withinLevelBounds(reduction int64, lv int64) bool {
if ((reduction*reduction + reduction) >> 1) > lv {
return false
}
reduction++
return ((reduction*reduction + reduction) >> 1) > lv
}
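As a quick sanity check of the schedule above, a small hedged driver (import paths as used in this diff; the post-55000 amounts depend on the chain's SubsidyReductionInterval) might look like:

```go
package main

import (
	"fmt"

	"github.com/lbryio/lbcd/blockchain"
	"github.com/lbryio/lbcd/chaincfg"
)

func main() {
	// Boundary heights of the LBRY subsidy schedule: genesis, the flat
	// 1-coin range, the linear ramp, and the start of the decaying tail.
	for _, h := range []int32{0, 1, 5100, 5101, 55000, 55001} {
		fmt.Printf("height %6d -> %d dewies\n", h,
			blockchain.CalcBlockSubsidy(h, &chaincfg.MainNetParams))
	}
}
```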
// CheckTransactionSanity performs some preliminary checks on a transaction to
// ensure it is sane. These checks are context free.
func CheckTransactionSanity(tx *btcutil.Tx) error {
// ensure it is sane.
func CheckTransactionSanity(tx *btcutil.Tx, enforceSoftFork bool) error {
// A transaction must have at least one input.
msgTx := tx.MsgTx()
if len(msgTx.TxIn) == 0 {
@ -261,6 +288,11 @@ func CheckTransactionSanity(tx *btcutil.Tx) error {
btcutil.MaxSatoshi)
return ruleError(ErrBadTxOutValue, str)
}
err := txscript.AllClaimsAreSane(txOut.PkScript, enforceSoftFork)
if err != nil {
return ruleError(ErrBadTxOutValue, err.Error())
}
}
// Check for duplicate transaction inputs.
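With the new signature, callers decide whether claim-script soft-fork rules are enforced during sanity checking. A hedged usage sketch (tx is assumed to be an existing *btcutil.Tx):

```go
// Enforce claim-script soft-fork rules; checkBlockSanity above passes false
// to keep the pre-fork behavior during block-level sanity checks.
if err := blockchain.CheckTransactionSanity(tx, true); err != nil {
	return err
}
```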
@ -302,8 +334,8 @@ func CheckTransactionSanity(tx *btcutil.Tx) error {
// target difficulty as claimed.
//
// The flags modify the behavior of this function as follows:
// - BFNoPoWCheck: The check to ensure the block hash is less than the target
// difficulty is not performed.
// - BFNoPoWCheck: The check to ensure the block hash is less than the target
// difficulty is not performed.
func checkProofOfWork(header *wire.BlockHeader, powLimit *big.Int, flags BehaviorFlags) error {
// The target difficulty must be larger than zero.
target := CompactToBig(header.Bits)
@ -324,7 +356,7 @@ func checkProofOfWork(header *wire.BlockHeader, powLimit *big.Int, flags Behavio
// to avoid proof of work checks is set.
if flags&BFNoPoWCheck != BFNoPoWCheck {
// The block hash must be less than the claimed target.
hash := header.BlockHash()
hash := header.BlockPoWHash()
hashNum := HashToBig(&hash)
if hashNum.Cmp(target) > 0 {
str := fmt.Sprintf("block hash of %064x is higher than "+
@ -515,7 +547,7 @@ func checkBlockSanity(block *btcutil.Block, powLimit *big.Int, timeSource Median
// Do some preliminary checks on each transaction to ensure they are
// sane before continuing.
for _, tx := range transactions {
err := CheckTransactionSanity(tx)
err := CheckTransactionSanity(tx, false)
if err != nil {
return err
}
@ -637,8 +669,8 @@ func checkSerializedHeight(coinbaseTx *btcutil.Tx, wantHeight int32) error {
// which depend on its position within the block chain.
//
// The flags modify the behavior of this function as follows:
// - BFFastAdd: All checks except those involving comparing the header against
// the checkpoints are not performed.
// - BFFastAdd: All checks except those involving comparing the header against
// the checkpoints are not performed.
//
// This function MUST be called with the chain state lock held (for writes).
func (b *BlockChain) checkBlockHeaderContext(header *wire.BlockHeader, prevNode *blockNode, flags BehaviorFlags) error {
@ -716,8 +748,8 @@ func (b *BlockChain) checkBlockHeaderContext(header *wire.BlockHeader, prevNode
// on its position within the block chain.
//
// The flags modify the behavior of this function as follows:
// - BFFastAdd: The transactions are not checked to see if they are finalized
// and the somewhat expensive BIP0034 validation is not performed.
// - BFFastAdd: The transactions are not checked to see if they are finalized
// and the somewhat expensive BIP0034 validation is not performed.
//
// The flags are also passed to checkBlockHeaderContext. See its documentation
// for how the flags modify its behavior.
@ -877,7 +909,6 @@ func CheckTransactionInputs(tx *btcutil.Tx, txHeight int32, utxoView *UtxoViewpo
return 0, nil
}
txHash := tx.Hash()
var totalSatoshiIn int64
for txInIndex, txIn := range tx.MsgTx().TxIn {
// Ensure the referenced input transaction is available.
@ -954,7 +985,7 @@ func CheckTransactionInputs(tx *btcutil.Tx, txHeight int32, utxoView *UtxoViewpo
if totalSatoshiIn < totalSatoshiOut {
str := fmt.Sprintf("total value of all transaction inputs for "+
"transaction %v is %v which is less than the amount "+
"spent of %v", txHash, totalSatoshiIn, totalSatoshiOut)
"spent of %v", tx.Hash(), totalSatoshiIn, totalSatoshiOut)
return 0, ruleError(ErrSpendTooHigh, str)
}

View file

@ -5,15 +5,16 @@
package blockchain
import (
"encoding/hex"
"math"
"reflect"
"testing"
"time"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/chaincfg"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/wire"
btcutil "github.com/lbryio/lbcutil"
)
// TestSequenceLocksActive tests the SequenceLockActive function to ensure it
@ -63,96 +64,11 @@ func TestSequenceLocksActive(t *testing.T) {
}
}
// TestCheckConnectBlockTemplate tests the CheckConnectBlockTemplate function to
// ensure it fails.
func TestCheckConnectBlockTemplate(t *testing.T) {
// Create a new database and chain instance to run tests against.
chain, teardownFunc, err := chainSetup("checkconnectblocktemplate",
&chaincfg.MainNetParams)
if err != nil {
t.Errorf("Failed to setup chain instance: %v", err)
return
}
defer teardownFunc()
// Since we're not dealing with the real block chain, set the coinbase
// maturity to 1.
chain.TstSetCoinbaseMaturity(1)
// Load up blocks such that there is a side chain.
// (genesis block) -> 1 -> 2 -> 3 -> 4
// \-> 3a
testFiles := []string{
"blk_0_to_4.dat.bz2",
"blk_3A.dat.bz2",
}
var blocks []*btcutil.Block
for _, file := range testFiles {
blockTmp, err := loadBlocks(file)
if err != nil {
t.Fatalf("Error loading file: %v\n", err)
}
blocks = append(blocks, blockTmp...)
}
for i := 1; i <= 3; i++ {
isMainChain, _, err := chain.ProcessBlock(blocks[i], BFNone)
if err != nil {
t.Fatalf("CheckConnectBlockTemplate: Received unexpected error "+
"processing block %d: %v", i, err)
}
if !isMainChain {
t.Fatalf("CheckConnectBlockTemplate: Expected block %d to connect "+
"to main chain", i)
}
}
// Block 3 should fail to connect since it's already inserted.
err = chain.CheckConnectBlockTemplate(blocks[3])
if err == nil {
t.Fatal("CheckConnectBlockTemplate: Did not received expected error " +
"on block 3")
}
// Block 4 should connect successfully to tip of chain.
err = chain.CheckConnectBlockTemplate(blocks[4])
if err != nil {
t.Fatalf("CheckConnectBlockTemplate: Received unexpected error on "+
"block 4: %v", err)
}
// Block 3a should fail to connect since does not build on chain tip.
err = chain.CheckConnectBlockTemplate(blocks[5])
if err == nil {
t.Fatal("CheckConnectBlockTemplate: Did not received expected error " +
"on block 3a")
}
// Block 4 should connect even if proof of work is invalid.
invalidPowBlock := *blocks[4].MsgBlock()
invalidPowBlock.Header.Nonce++
err = chain.CheckConnectBlockTemplate(btcutil.NewBlock(&invalidPowBlock))
if err != nil {
t.Fatalf("CheckConnectBlockTemplate: Received unexpected error on "+
"block 4 with bad nonce: %v", err)
}
// Invalid block building on chain tip should fail to connect.
invalidBlock := *blocks[4].MsgBlock()
invalidBlock.Header.Bits--
err = chain.CheckConnectBlockTemplate(btcutil.NewBlock(&invalidBlock))
if err == nil {
t.Fatal("CheckConnectBlockTemplate: Did not received expected error " +
"on block 4 with invalid difficulty bits")
}
}
// TestCheckBlockSanity tests the CheckBlockSanity function to ensure it works
// as expected.
func TestCheckBlockSanity(t *testing.T) {
powLimit := chaincfg.MainNetParams.PowLimit
block := btcutil.NewBlock(&Block100000)
block := GetBlock100000()
timeSource := NewMedianTime()
err := CheckBlockSanity(block, powLimit, timeSource)
if err != nil {
@ -234,254 +150,12 @@ func TestCheckSerializedHeight(t *testing.T) {
}
}
// Block100000 defines block 100,000 of the block chain. It is used to
var block100000Hex = "0000002024cbdc8644ee3983e66b003a0733891c069ca74c114c034c7b3e2e7ad7a12cd67e95e0555c0e056f6f2af538268ff9d21b420e529750d08eacb25c40f1322936637109b8a051157604c1c163cd39237687f6244b4e6d2b3a94e9d816babaecbb10c56058c811041b2b9c43000701000000010000000000000000000000000000000000000000000000000000000000000000ffffffff2003a086010410c56058081011314abf0100000d2f6e6f64655374726174756d2f000000000180354a6e0a0000001976a914b5e74e7cc9e1f480a6599895c92aab1401f870f188ac000000000100000002f1b75decc2c1c59c2178852822de412f574ad9796b65ac9092a79630d8109aaf000000006a47304402202f78ed3bf8dcadb6c17e289cd06e9c96b02c6f23aa1f446a4a7896a31cfd1e4702202862261e2eb59475ac91092c620b3cac5a831372bafc446d5ee450866040b532012103db4f3785354d84311fab7624c52784a61e3046d8f364463d327bdd96281b5b90feffffff987ee6b4bf95548d01e443683261dd0ffdcb2eb335b2f7119df0d41b60756b92010000006a47304402200c422c7560b6418d45443138bb957ec44eb293a639f4b2235a622205ca6cac370220759f15d5dc2543fd1aef80104c93427fcb025847bf895669025d1d82c62fbf6801210201864b998db5436990a0029fc3fb153c09e8c2689214b91c8ed68784995c8da0feffffff022bccfedd000000001976a914738f16132942a01d00cb9699bd058c4925aada3288ac1f4d030c000000001976a914c4292e757f5ff6a27c2f0a87d3a4aea5e46c275a88ac9f86010001000000015fbb26ad6d818186032baeef4d3f475dfe165c6da2d93411b8ec5f9f523cf1a4000000006a4730440220356999ad5a9f6f09b676f17dd4a2617a0af234722d68a50edc3768c983c0623d022056b4e5531608aeb0205fde9c44f741158da3bba1f4c3788c9fe79d36d43ea355012103509893a2a7c765d49ac9ff70126cb1af54871d70baba2c7e39ec9b4096289a9bfeffffff02389332fa080000001976a914f85e054405fbcedc2341cf8bf41ea989090587a288acf9321a41000000001976a914e85e90c048fdfbe1c2117a7132856ff4b39b470188ac9f86010001000000013508189b9bb61ac2aa536b905447b58f6c58c17cdef305240f566caa689d760a010000006a4730440220669a2b86e5abe58bae54829d3c271669540a9ad965c2fb79e8cc1fb609c0b60002202f958403d979138075cb53d8cb5ff6bb14e18d66dfdb6701c7d43a8ceeed0fa80121029935a602205a3fb72446064f3bc3a55ab9cd2e3459bf2ffdf80a48ab029d4b42feffffff02523c2f13000000001976a914c5b2ae398496f0f9ceaf81b83c28a27ddc890e3588ac211958f2000000001976a914ac65f1d16e5a2af37408b5d65406000a7ea143ca88ac9f8601000100000001bdd724322c555a21d5eb62d4aadbdc051663bcd4ec03f8d9005992f299783c21000000006a47304402205448141a2a788f73310025123bd24f5bee01dd8f48f18d7abc62d7c26465008902207ab46e6ddf6ba416decf3fbb97b6946a1428ea0a7c25a55cab47c47110d8e9ce0121029d6ff3b1235f2a08560b23dd4a08b14cc415b544801b885365736ea8ab1d3470feffffff029d104ccf000000001976a914999d5b0e3d5efcf601c711127b91841afbf5c37a88ace5c5a07f070000001976a9144aade372298eb387da9b6ac69d215a213e822f3f88ac9f86010001000000011658304d4ce796cd450228a10fdf647c6ea42295c9f5e1663df11481af1c884d010000006b483045022100a35d5d3ccde85b41559047d976ae6210b8e6ba5653c53aae1adc90048de0761002200d6bd6ebc6d73f97855f435c6fd595009ee71d23bb34870ab83ad53f67eeb22b012102d2f681ebfd1a570416d602986a47ca4254d8dedf2935b3f8c2ba55dcee8e98f4feffffff025ee913e6020000001976a91468076c9380d3c6b468ad1d6109c36770fb181e8f88acb165394f000000001976a9147ae970e81b3657cbb59df26517e372165807be0088ac9f86010001000000018f285109f78524a88ff328a4f94de2ac03224c50984b11c68adda192e8f78efa010000006b483045022100d77f2ac32dd6a3015f02f7115a13f617c57952fc5d53a33a87dc3fc00ffe1864022006455f74cff864b10424e445c530a59243f86d309dc92c5010ec5709e38471ab012102fdac7335b41edcd2846fc7e2166bb48312ee583ed6ff70fb5c27bcb2addaad86feffffff028b7a6d5c000000001976a914c65106d2e7ea4ec6aa8aa30ba4d11cfd1143123388ac5934c228000000001976a914d1c4d190b07edb972b91f33c36c6568b80358dd488ac9f860100"
// GetBlock100000 returns block 100,000 of the block chain. It is used to
// test Block operations.
var Block100000 = wire.MsgBlock{
Header: wire.BlockHeader{
Version: 1,
PrevBlock: chainhash.Hash([32]byte{ // Make go vet happy.
0x50, 0x12, 0x01, 0x19, 0x17, 0x2a, 0x61, 0x04,
0x21, 0xa6, 0xc3, 0x01, 0x1d, 0xd3, 0x30, 0xd9,
0xdf, 0x07, 0xb6, 0x36, 0x16, 0xc2, 0xcc, 0x1f,
0x1c, 0xd0, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00,
}), // 000000000002d01c1fccc21636b607dfd930d31d01c3a62104612a1719011250
MerkleRoot: chainhash.Hash([32]byte{ // Make go vet happy.
0x66, 0x57, 0xa9, 0x25, 0x2a, 0xac, 0xd5, 0xc0,
0xb2, 0x94, 0x09, 0x96, 0xec, 0xff, 0x95, 0x22,
0x28, 0xc3, 0x06, 0x7c, 0xc3, 0x8d, 0x48, 0x85,
0xef, 0xb5, 0xa4, 0xac, 0x42, 0x47, 0xe9, 0xf3,
}), // f3e94742aca4b5ef85488dc37c06c3282295ffec960994b2c0d5ac2a25a95766
Timestamp: time.Unix(1293623863, 0), // 2010-12-29 11:57:43 +0000 UTC
Bits: 0x1b04864c, // 453281356
Nonce: 0x10572b0f, // 274148111
},
Transactions: []*wire.MsgTx{
{
Version: 1,
TxIn: []*wire.TxIn{
{
PreviousOutPoint: wire.OutPoint{
Hash: chainhash.Hash{},
Index: 0xffffffff,
},
SignatureScript: []byte{
0x04, 0x4c, 0x86, 0x04, 0x1b, 0x02, 0x06, 0x02,
},
Sequence: 0xffffffff,
},
},
TxOut: []*wire.TxOut{
{
Value: 0x12a05f200, // 5000000000
PkScript: []byte{
0x41, // OP_DATA_65
0x04, 0x1b, 0x0e, 0x8c, 0x25, 0x67, 0xc1, 0x25,
0x36, 0xaa, 0x13, 0x35, 0x7b, 0x79, 0xa0, 0x73,
0xdc, 0x44, 0x44, 0xac, 0xb8, 0x3c, 0x4e, 0xc7,
0xa0, 0xe2, 0xf9, 0x9d, 0xd7, 0x45, 0x75, 0x16,
0xc5, 0x81, 0x72, 0x42, 0xda, 0x79, 0x69, 0x24,
0xca, 0x4e, 0x99, 0x94, 0x7d, 0x08, 0x7f, 0xed,
0xf9, 0xce, 0x46, 0x7c, 0xb9, 0xf7, 0xc6, 0x28,
0x70, 0x78, 0xf8, 0x01, 0xdf, 0x27, 0x6f, 0xdf,
0x84, // 65-byte signature
0xac, // OP_CHECKSIG
},
},
},
LockTime: 0,
},
{
Version: 1,
TxIn: []*wire.TxIn{
{
PreviousOutPoint: wire.OutPoint{
Hash: chainhash.Hash([32]byte{ // Make go vet happy.
0x03, 0x2e, 0x38, 0xe9, 0xc0, 0xa8, 0x4c, 0x60,
0x46, 0xd6, 0x87, 0xd1, 0x05, 0x56, 0xdc, 0xac,
0xc4, 0x1d, 0x27, 0x5e, 0xc5, 0x5f, 0xc0, 0x07,
0x79, 0xac, 0x88, 0xfd, 0xf3, 0x57, 0xa1, 0x87,
}), // 87a157f3fd88ac7907c05fc55e271dc4acdc5605d187d646604ca8c0e9382e03
Index: 0,
},
SignatureScript: []byte{
0x49, // OP_DATA_73
0x30, 0x46, 0x02, 0x21, 0x00, 0xc3, 0x52, 0xd3,
0xdd, 0x99, 0x3a, 0x98, 0x1b, 0xeb, 0xa4, 0xa6,
0x3a, 0xd1, 0x5c, 0x20, 0x92, 0x75, 0xca, 0x94,
0x70, 0xab, 0xfc, 0xd5, 0x7d, 0xa9, 0x3b, 0x58,
0xe4, 0xeb, 0x5d, 0xce, 0x82, 0x02, 0x21, 0x00,
0x84, 0x07, 0x92, 0xbc, 0x1f, 0x45, 0x60, 0x62,
0x81, 0x9f, 0x15, 0xd3, 0x3e, 0xe7, 0x05, 0x5c,
0xf7, 0xb5, 0xee, 0x1a, 0xf1, 0xeb, 0xcc, 0x60,
0x28, 0xd9, 0xcd, 0xb1, 0xc3, 0xaf, 0x77, 0x48,
0x01, // 73-byte signature
0x41, // OP_DATA_65
0x04, 0xf4, 0x6d, 0xb5, 0xe9, 0xd6, 0x1a, 0x9d,
0xc2, 0x7b, 0x8d, 0x64, 0xad, 0x23, 0xe7, 0x38,
0x3a, 0x4e, 0x6c, 0xa1, 0x64, 0x59, 0x3c, 0x25,
0x27, 0xc0, 0x38, 0xc0, 0x85, 0x7e, 0xb6, 0x7e,
0xe8, 0xe8, 0x25, 0xdc, 0xa6, 0x50, 0x46, 0xb8,
0x2c, 0x93, 0x31, 0x58, 0x6c, 0x82, 0xe0, 0xfd,
0x1f, 0x63, 0x3f, 0x25, 0xf8, 0x7c, 0x16, 0x1b,
0xc6, 0xf8, 0xa6, 0x30, 0x12, 0x1d, 0xf2, 0xb3,
0xd3, // 65-byte pubkey
},
Sequence: 0xffffffff,
},
},
TxOut: []*wire.TxOut{
{
Value: 0x2123e300, // 556000000
PkScript: []byte{
0x76, // OP_DUP
0xa9, // OP_HASH160
0x14, // OP_DATA_20
0xc3, 0x98, 0xef, 0xa9, 0xc3, 0x92, 0xba, 0x60,
0x13, 0xc5, 0xe0, 0x4e, 0xe7, 0x29, 0x75, 0x5e,
0xf7, 0xf5, 0x8b, 0x32,
0x88, // OP_EQUALVERIFY
0xac, // OP_CHECKSIG
},
},
{
Value: 0x108e20f00, // 4444000000
PkScript: []byte{
0x76, // OP_DUP
0xa9, // OP_HASH160
0x14, // OP_DATA_20
0x94, 0x8c, 0x76, 0x5a, 0x69, 0x14, 0xd4, 0x3f,
0x2a, 0x7a, 0xc1, 0x77, 0xda, 0x2c, 0x2f, 0x6b,
0x52, 0xde, 0x3d, 0x7c,
0x88, // OP_EQUALVERIFY
0xac, // OP_CHECKSIG
},
},
},
LockTime: 0,
},
{
Version: 1,
TxIn: []*wire.TxIn{
{
PreviousOutPoint: wire.OutPoint{
Hash: chainhash.Hash([32]byte{ // Make go vet happy.
0xc3, 0x3e, 0xbf, 0xf2, 0xa7, 0x09, 0xf1, 0x3d,
0x9f, 0x9a, 0x75, 0x69, 0xab, 0x16, 0xa3, 0x27,
0x86, 0xaf, 0x7d, 0x7e, 0x2d, 0xe0, 0x92, 0x65,
0xe4, 0x1c, 0x61, 0xd0, 0x78, 0x29, 0x4e, 0xcf,
}), // cf4e2978d0611ce46592e02d7e7daf8627a316ab69759a9f3df109a7f2bf3ec3
Index: 1,
},
SignatureScript: []byte{
0x47, // OP_DATA_71
0x30, 0x44, 0x02, 0x20, 0x03, 0x2d, 0x30, 0xdf,
0x5e, 0xe6, 0xf5, 0x7f, 0xa4, 0x6c, 0xdd, 0xb5,
0xeb, 0x8d, 0x0d, 0x9f, 0xe8, 0xde, 0x6b, 0x34,
0x2d, 0x27, 0x94, 0x2a, 0xe9, 0x0a, 0x32, 0x31,
0xe0, 0xba, 0x33, 0x3e, 0x02, 0x20, 0x3d, 0xee,
0xe8, 0x06, 0x0f, 0xdc, 0x70, 0x23, 0x0a, 0x7f,
0x5b, 0x4a, 0xd7, 0xd7, 0xbc, 0x3e, 0x62, 0x8c,
0xbe, 0x21, 0x9a, 0x88, 0x6b, 0x84, 0x26, 0x9e,
0xae, 0xb8, 0x1e, 0x26, 0xb4, 0xfe, 0x01,
0x41, // OP_DATA_65
0x04, 0xae, 0x31, 0xc3, 0x1b, 0xf9, 0x12, 0x78,
0xd9, 0x9b, 0x83, 0x77, 0xa3, 0x5b, 0xbc, 0xe5,
0xb2, 0x7d, 0x9f, 0xff, 0x15, 0x45, 0x68, 0x39,
0xe9, 0x19, 0x45, 0x3f, 0xc7, 0xb3, 0xf7, 0x21,
0xf0, 0xba, 0x40, 0x3f, 0xf9, 0x6c, 0x9d, 0xee,
0xb6, 0x80, 0xe5, 0xfd, 0x34, 0x1c, 0x0f, 0xc3,
0xa7, 0xb9, 0x0d, 0xa4, 0x63, 0x1e, 0xe3, 0x95,
0x60, 0x63, 0x9d, 0xb4, 0x62, 0xe9, 0xcb, 0x85,
0x0f, // 65-byte pubkey
},
Sequence: 0xffffffff,
},
},
TxOut: []*wire.TxOut{
{
Value: 0xf4240, // 1000000
PkScript: []byte{
0x76, // OP_DUP
0xa9, // OP_HASH160
0x14, // OP_DATA_20
0xb0, 0xdc, 0xbf, 0x97, 0xea, 0xbf, 0x44, 0x04,
0xe3, 0x1d, 0x95, 0x24, 0x77, 0xce, 0x82, 0x2d,
0xad, 0xbe, 0x7e, 0x10,
0x88, // OP_EQUALVERIFY
0xac, // OP_CHECKSIG
},
},
{
Value: 0x11d260c0, // 299000000
PkScript: []byte{
0x76, // OP_DUP
0xa9, // OP_HASH160
0x14, // OP_DATA_20
0x6b, 0x12, 0x81, 0xee, 0xc2, 0x5a, 0xb4, 0xe1,
0xe0, 0x79, 0x3f, 0xf4, 0xe0, 0x8a, 0xb1, 0xab,
0xb3, 0x40, 0x9c, 0xd9,
0x88, // OP_EQUALVERIFY
0xac, // OP_CHECKSIG
},
},
},
LockTime: 0,
},
{
Version: 1,
TxIn: []*wire.TxIn{
{
PreviousOutPoint: wire.OutPoint{
Hash: chainhash.Hash([32]byte{ // Make go vet happy.
0x0b, 0x60, 0x72, 0xb3, 0x86, 0xd4, 0xa7, 0x73,
0x23, 0x52, 0x37, 0xf6, 0x4c, 0x11, 0x26, 0xac,
0x3b, 0x24, 0x0c, 0x84, 0xb9, 0x17, 0xa3, 0x90,
0x9b, 0xa1, 0xc4, 0x3d, 0xed, 0x5f, 0x51, 0xf4,
}), // f4515fed3dc4a19b90a317b9840c243bac26114cf637522373a7d486b372600b
Index: 0,
},
SignatureScript: []byte{
0x49, // OP_DATA_73
0x30, 0x46, 0x02, 0x21, 0x00, 0xbb, 0x1a, 0xd2,
0x6d, 0xf9, 0x30, 0xa5, 0x1c, 0xce, 0x11, 0x0c,
0xf4, 0x4f, 0x7a, 0x48, 0xc3, 0xc5, 0x61, 0xfd,
0x97, 0x75, 0x00, 0xb1, 0xae, 0x5d, 0x6b, 0x6f,
0xd1, 0x3d, 0x0b, 0x3f, 0x4a, 0x02, 0x21, 0x00,
0xc5, 0xb4, 0x29, 0x51, 0xac, 0xed, 0xff, 0x14,
0xab, 0xba, 0x27, 0x36, 0xfd, 0x57, 0x4b, 0xdb,
0x46, 0x5f, 0x3e, 0x6f, 0x8d, 0xa1, 0x2e, 0x2c,
0x53, 0x03, 0x95, 0x4a, 0xca, 0x7f, 0x78, 0xf3,
0x01, // 73-byte signature
0x41, // OP_DATA_65
0x04, 0xa7, 0x13, 0x5b, 0xfe, 0x82, 0x4c, 0x97,
0xec, 0xc0, 0x1e, 0xc7, 0xd7, 0xe3, 0x36, 0x18,
0x5c, 0x81, 0xe2, 0xaa, 0x2c, 0x41, 0xab, 0x17,
0x54, 0x07, 0xc0, 0x94, 0x84, 0xce, 0x96, 0x94,
0xb4, 0x49, 0x53, 0xfc, 0xb7, 0x51, 0x20, 0x65,
0x64, 0xa9, 0xc2, 0x4d, 0xd0, 0x94, 0xd4, 0x2f,
0xdb, 0xfd, 0xd5, 0xaa, 0xd3, 0xe0, 0x63, 0xce,
0x6a, 0xf4, 0xcf, 0xaa, 0xea, 0x4e, 0xa1, 0x4f,
0xbb, // 65-byte pubkey
},
Sequence: 0xffffffff,
},
},
TxOut: []*wire.TxOut{
{
Value: 0xf4240, // 1000000
PkScript: []byte{
0x76, // OP_DUP
0xa9, // OP_HASH160
0x14, // OP_DATA_20
0x39, 0xaa, 0x3d, 0x56, 0x9e, 0x06, 0xa1, 0xd7,
0x92, 0x6d, 0xc4, 0xbe, 0x11, 0x93, 0xc9, 0x9b,
0xf2, 0xeb, 0x9e, 0xe0,
0x88, // OP_EQUALVERIFY
0xac, // OP_CHECKSIG
},
},
},
LockTime: 0,
},
},
func GetBlock100000() *btcutil.Block {
var block100000Bytes, _ = hex.DecodeString(block100000Hex)
var results, _ = btcutil.NewBlockFromBytes(block100000Bytes)
return results
}

View file

@ -7,7 +7,7 @@ package blockchain
import (
"math"
"github.com/btcsuite/btcd/chaincfg"
"github.com/lbryio/lbcd/chaincfg"
)
const (
@ -26,15 +26,6 @@ const (
// vbNumBits is the total number of bits available for use with the
// version bits scheme.
vbNumBits = 29
// unknownVerNumToCheck is the number of previous blocks to consider
// when checking for a threshold of unknown block versions for the
// purposes of warning the user.
unknownVerNumToCheck = 100
// unknownVerWarnNum is the threshold of previous blocks that have an
// unknown version to use for the purposes of warning the user.
unknownVerWarnNum = unknownVerNumToCheck / 2
)
// bitConditionChecker provides a thresholdConditionChecker which can be used to
@ -204,6 +195,12 @@ func (b *BlockChain) calcNextBlockVersion(prevNode *blockNode) (int32, error) {
expectedVersion := uint32(vbTopBits)
for id := 0; id < len(b.chainParams.Deployments); id++ {
deployment := &b.chainParams.Deployments[id]
// added to mimic LBRYcrd:
if deployment.ForceActiveAt > 0 && prevNode != nil && prevNode.height+1 >= deployment.ForceActiveAt {
continue
}
cache := &b.deploymentCaches[id]
checker := deploymentChecker{deployment: deployment, chain: b}
state, err := b.thresholdState(prevNode, checker, cache)
@ -264,38 +261,3 @@ func (b *BlockChain) warnUnknownRuleActivations(node *blockNode) error {
return nil
}
// warnUnknownVersions logs a warning if a high enough percentage of the last
// blocks have unexpected versions.
//
// This function MUST be called with the chain state lock held (for writes)
func (b *BlockChain) warnUnknownVersions(node *blockNode) error {
// Nothing to do if already warned.
if b.unknownVersionsWarned {
return nil
}
// Warn if enough previous blocks have unexpected versions.
numUpgraded := uint32(0)
for i := uint32(0); i < unknownVerNumToCheck && node != nil; i++ {
expectedVersion, err := b.calcNextBlockVersion(node.parent)
if err != nil {
return err
}
if expectedVersion > vbLegacyBlockVersion &&
(node.version & ^expectedVersion) != 0 {
numUpgraded++
}
node = node.parent
}
if numUpgraded > unknownVerWarnNum {
log.Warn("Unknown block versions are being mined, so new " +
"rules might be in effect. Are you running the " +
"latest version of the software?")
b.unknownVersionsWarned = true
}
return nil
}

View file

@ -7,9 +7,9 @@ package blockchain
import (
"fmt"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/lbryio/lbcd/txscript"
"github.com/lbryio/lbcd/wire"
btcutil "github.com/lbryio/lbcutil"
)
const (
@ -20,11 +20,11 @@ const (
// weight of a "base" byte is 4, while the weight of a witness byte is
// 1. As a result, for a block to be valid, the BlockWeight MUST be
// less than, or equal to MaxBlockWeight.
MaxBlockWeight = 4000000
MaxBlockWeight = 8000000
// MaxBlockBaseSize is the maximum number of bytes within a block
// which can be allocated to non-witness data.
MaxBlockBaseSize = 1000000
MaxBlockBaseSize = 8000000
// MaxBlockSigOpsCost is the maximum number of signature operations
// allowed for a block. It is calculated via a weighted algorithm which

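Given the weighting described above (4 per base byte, 1 per witness byte), a hedged caller-side check against the enlarged limit could read as follows; block is assumed to be a *btcutil.Block and GetBlockWeight the exported helper in this package:

```go
// Reject blocks whose weight exceeds the enlarged 8,000,000 limit.
if blockchain.GetBlockWeight(block) > blockchain.MaxBlockWeight {
	return fmt.Errorf("block weight exceeds MaxBlockWeight")
}
```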
View file

@ -1,68 +1,11 @@
btcec
=====
[![Build Status](https://travis-ci.org/btcsuite/btcd.png?branch=master)](https://travis-ci.org/btcsuite/btcec)
[![ISC License](http://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://godoc.org/github.com/btcsuite/btcd/btcec?status.png)](http://godoc.org/github.com/btcsuite/btcd/btcec)
Package btcec implements elliptic curve cryptography needed for working with
btcec implements elliptic curve cryptography needed for working with
Bitcoin (secp256k1 only for now). It is designed so that it may be used with the
standard crypto/ecdsa packages provided with Go. A comprehensive suite of tests
is provided to ensure proper functionality. Package btcec was originally based
on work from ThePiachu which is licensed under the same terms as Go, but it has
significantly diverged since then. The btcsuite developers' original is licensed
under the liberal ISC license.
Although this package was primarily written for btcd, it has intentionally been
designed so it can be used as a standalone package for any projects needing to
use secp256k1 elliptic curve cryptography.
## Installation and Updating
```bash
$ go get -u github.com/btcsuite/btcd/btcec
```
## Examples
* [Sign Message](http://godoc.org/github.com/btcsuite/btcd/btcec#example-package--SignMessage)
Demonstrates signing a message with a secp256k1 private key that is first
parsed from raw bytes and serializing the generated signature.
* [Verify Signature](http://godoc.org/github.com/btcsuite/btcd/btcec#example-package--VerifySignature)
Demonstrates verifying a secp256k1 signature against a public key that is
first parsed from raw bytes. The signature is also parsed from raw bytes.
* [Encryption](http://godoc.org/github.com/btcsuite/btcd/btcec#example-package--EncryptMessage)
Demonstrates encrypting a message for a public key that is first parsed from
raw bytes, then decrypting it using the corresponding private key.
* [Decryption](http://godoc.org/github.com/btcsuite/btcd/btcec#example-package--DecryptMessage)
Demonstrates decrypting a message using a private key that is first parsed
from raw bytes.
## GPG Verification Key
All official release tags are signed by Conformal so users can ensure the code
has not been tampered with and is coming from the btcsuite developers. To
verify the signature perform the following:
- Download the public key from the Conformal website at
https://opensource.conformal.com/GIT-GPG-KEY-conformal.txt
- Import the public key into your GPG keyring:
```bash
gpg --import GIT-GPG-KEY-conformal.txt
```
- Verify the release tag with the following command where `TAG_NAME` is a
placeholder for the specific tag:
```bash
git tag -v TAG_NAME
```
## License
Package btcec is licensed under the [copyfree](http://copyfree.org) ISC License
except for btcec.go and btcec_test.go which is under the same license as Go.
significantly diverged since then.

View file

@ -930,6 +930,8 @@ func initS256() {
secp256k1.Gx = fromHex("79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798")
secp256k1.Gy = fromHex("483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8")
secp256k1.BitSize = 256
// Curve name taken from https://safecurves.cr.yp.to/.
secp256k1.Name = "secp256k1"
secp256k1.q = new(big.Int).Div(new(big.Int).Add(secp256k1.P,
big.NewInt(1)), big.NewInt(4))
secp256k1.H = 1

View file

@ -527,7 +527,7 @@ type baseMultTest struct {
x, y string
}
//TODO: add more test vectors
// TODO: add more test vectors
var s256BaseMultTests = []baseMultTest{
{
"AA5E28D6A97A2479A65527F7290311A3624D4CC0FA1578598EE3C2613BF99522",
@ -556,7 +556,7 @@ var s256BaseMultTests = []baseMultTest{
},
}
//TODO: test different curves as well?
// TODO: test different curves as well?
func TestBaseMult(t *testing.T) {
s256 := S256()
for i, e := range s256BaseMultTests {

View file

@ -8,8 +8,8 @@ import (
"encoding/hex"
"fmt"
"github.com/btcsuite/btcd/btcec"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/btcec"
"github.com/lbryio/lbcd/chaincfg/chainhash"
)
// This example demonstrates signing a message with a secp256k1 private key that

View file

@ -125,27 +125,30 @@ var (
// the arithmetic needed for elliptic curve operations.
//
// The following depicts the internal representation:
// -----------------------------------------------------------------
// | n[9] | n[8] | ... | n[0] |
// | 32 bits available | 32 bits available | ... | 32 bits available |
// | 22 bits for value | 26 bits for value | ... | 26 bits for value |
// | 10 bits overflow | 6 bits overflow | ... | 6 bits overflow |
// | Mult: 2^(26*9) | Mult: 2^(26*8) | ... | Mult: 2^(26*0) |
// -----------------------------------------------------------------
//
// -----------------------------------------------------------------
// | n[9] | n[8] | ... | n[0] |
// | 32 bits available | 32 bits available | ... | 32 bits available |
// | 22 bits for value | 26 bits for value | ... | 26 bits for value |
// | 10 bits overflow | 6 bits overflow | ... | 6 bits overflow |
// | Mult: 2^(26*9) | Mult: 2^(26*8) | ... | Mult: 2^(26*0) |
// -----------------------------------------------------------------
//
// For example, consider the number 2^49 + 1. It would be represented as:
// n[0] = 1
// n[1] = 2^23
// n[2..9] = 0
//
// n[0] = 1
// n[1] = 2^23
// n[2..9] = 0
//
// The full 256-bit value is then calculated by looping i from 9..0 and
// doing sum(n[i] * 2^(26i)) like so:
// n[9] * 2^(26*9) = 0 * 2^234 = 0
// n[8] * 2^(26*8) = 0 * 2^208 = 0
// ...
// n[1] * 2^(26*1) = 2^23 * 2^26 = 2^49
// n[0] * 2^(26*0) = 1 * 2^0 = 1
// Sum: 0 + 0 + ... + 2^49 + 1 = 2^49 + 1
//
// n[9] * 2^(26*9) = 0 * 2^234 = 0
// n[8] * 2^(26*8) = 0 * 2^208 = 0
// ...
// n[1] * 2^(26*1) = 2^23 * 2^26 = 2^49
// n[0] * 2^(26*0) = 1 * 2^0 = 1
// Sum: 0 + 0 + ... + 2^49 + 1 = 2^49 + 1
type fieldVal struct {
n [10]uint32
}
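Tying the worked example back to the struct, an in-package, test-style sketch (hedged: fieldVal is unexported, so this only makes sense inside the package):

```go
// Represent 2^49 + 1 (0x0002000000000001) and inspect the words described
// above: the low 26 bits land in n[0], the next 26 bits in n[1].
var f fieldVal
f.SetByteSlice([]byte{0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01})
fmt.Println(f.n[0] == 1, f.n[1] == 1<<23) // true true
```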
@ -226,20 +229,24 @@ func (f *fieldVal) SetBytes(b *[32]byte) *fieldVal {
return f
}
// SetByteSlice packs the passed big-endian value into the internal field value
// representation. Only the first 32-bytes are used. As a result, it is up to
// the caller to ensure numbers of the appropriate size are used or the value
// will be truncated.
// SetByteSlice interprets the provided slice as a 256-bit big-endian unsigned
// integer (meaning it is truncated to the first 32 bytes), packs it into the
// internal field value representation, and returns the updated field value.
//
// Note that since passing a slice with more than 32 bytes is truncated, it is
// possible that the truncated value is less than the field prime. It is up to
// the caller to decide whether it needs to provide numbers of the appropriate
// size or if it is acceptable to use this function with the described
// truncation behavior.
//
// The field value is returned to support chaining. This enables syntax like:
// f := new(fieldVal).SetByteSlice(byteSlice)
func (f *fieldVal) SetByteSlice(b []byte) *fieldVal {
var b32 [32]byte
for i := 0; i < len(b); i++ {
if i < 32 {
b32[i+(32-len(b))] = b[i]
}
if len(b) > 32 {
b = b[:32]
}
copy(b32[32-len(b):], b)
return f.SetBytes(&b32)
}
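The documented truncation can be seen with another in-package sketch (hedged; like the new test below, it inspects the unexported words directly):

```go
// A 40-byte slice keeps only its first 32 bytes, so both calls set the
// same 256-bit value.
long := bytes.Repeat([]byte{0xff}, 40)
var a, b fieldVal
a.SetByteSlice(long)
b.SetByteSlice(long[:32])
fmt.Println(reflect.DeepEqual(a.n, b.n)) // true
```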

View file

@ -7,6 +7,7 @@ package btcec
import (
"crypto/rand"
"encoding/hex"
"fmt"
"reflect"
"testing"
@ -965,3 +966,156 @@ func testSqrt(t *testing.T, test sqrtTest) {
}
}
}
// TestFieldSetBytes ensures that setting a field value to a 256-bit big-endian
// unsigned integer via both the slice and array methods works as expected for
// edge cases. Random cases are tested via the various other tests.
func TestFieldSetBytes(t *testing.T) {
tests := []struct {
name string // test description
in string // hex encoded test value
expected [10]uint32 // expected raw ints
}{{
name: "zero",
in: "00",
expected: [10]uint32{0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
}, {
name: "field prime",
in: "fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f",
expected: [10]uint32{
0x03fffc2f, 0x03ffffbf, 0x03ffffff, 0x03ffffff, 0x03ffffff,
0x03ffffff, 0x03ffffff, 0x03ffffff, 0x03ffffff, 0x003fffff,
},
}, {
name: "field prime - 1",
in: "fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2e",
expected: [10]uint32{
0x03fffc2e, 0x03ffffbf, 0x03ffffff, 0x03ffffff, 0x03ffffff,
0x03ffffff, 0x03ffffff, 0x03ffffff, 0x03ffffff, 0x003fffff,
},
}, {
name: "field prime + 1 (overflow in word zero)",
in: "fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc30",
expected: [10]uint32{
0x03fffc30, 0x03ffffbf, 0x03ffffff, 0x03ffffff, 0x03ffffff,
0x03ffffff, 0x03ffffff, 0x03ffffff, 0x03ffffff, 0x003fffff,
},
}, {
name: "field prime first 32 bits",
in: "fffffc2f",
expected: [10]uint32{
0x03fffc2f, 0x00000003f, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
},
}, {
name: "field prime word zero",
in: "03fffc2f",
expected: [10]uint32{
0x03fffc2f, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
},
}, {
name: "field prime first 64 bits",
in: "fffffffefffffc2f",
expected: [10]uint32{
0x03fffc2f, 0x03ffffbf, 0x00000fff, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
},
}, {
name: "field prime word zero and one",
in: "0ffffefffffc2f",
expected: [10]uint32{
0x03fffc2f, 0x03ffffbf, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
},
}, {
name: "field prime first 96 bits",
in: "fffffffffffffffefffffc2f",
expected: [10]uint32{
0x03fffc2f, 0x03ffffbf, 0x03ffffff, 0x0003ffff, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
},
}, {
name: "field prime word zero, one, and two",
in: "3ffffffffffefffffc2f",
expected: [10]uint32{
0x03fffc2f, 0x03ffffbf, 0x03ffffff, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
},
}, {
name: "overflow in word one (prime + 1<<26)",
in: "ffffffffffffffffffffffffffffffffffffffffffffffffffffffff03fffc2f",
expected: [10]uint32{
0x03fffc2f, 0x03ffffc0, 0x03ffffff, 0x03ffffff, 0x03ffffff,
0x03ffffff, 0x03ffffff, 0x03ffffff, 0x03ffffff, 0x003fffff,
},
}, {
name: "(field prime - 1) * 2 NOT mod P, truncated >32 bytes",
in: "01fffffffffffffffffffffffffffffffffffffffffffffffffffffffdfffff85c",
expected: [10]uint32{
0x01fffff8, 0x03ffffff, 0x03ffffff, 0x03ffffff, 0x03ffffff,
0x03ffffff, 0x03ffffff, 0x03ffffff, 0x03ffffff, 0x00007fff,
},
}, {
name: "2^256 - 1",
in: "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
expected: [10]uint32{
0x03ffffff, 0x03ffffff, 0x03ffffff, 0x03ffffff, 0x03ffffff,
0x03ffffff, 0x03ffffff, 0x03ffffff, 0x03ffffff, 0x003fffff,
},
}, {
name: "alternating bits",
in: "a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5",
expected: [10]uint32{
0x01a5a5a5, 0x01696969, 0x025a5a5a, 0x02969696, 0x01a5a5a5,
0x01696969, 0x025a5a5a, 0x02969696, 0x01a5a5a5, 0x00296969,
},
}, {
name: "alternating bits 2",
in: "5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a",
expected: [10]uint32{
0x025a5a5a, 0x02969696, 0x01a5a5a5, 0x01696969, 0x025a5a5a,
0x02969696, 0x01a5a5a5, 0x01696969, 0x025a5a5a, 0x00169696,
},
}}
for _, test := range tests {
inBytes := hexToBytes(test.in)
// Ensure setting the bytes via the slice method works as expected.
var f fieldVal
f.SetByteSlice(inBytes)
if !reflect.DeepEqual(f.n, test.expected) {
t.Errorf("%s: unexpected result\ngot: %x\nwant: %x", test.name, f.n,
test.expected)
continue
}
// Ensure setting the bytes via the array method works as expected.
var f2 fieldVal
var b32 [32]byte
truncatedInBytes := inBytes
if len(truncatedInBytes) > 32 {
truncatedInBytes = truncatedInBytes[:32]
}
copy(b32[32-len(truncatedInBytes):], truncatedInBytes)
f2.SetBytes(&b32)
if !reflect.DeepEqual(f2.n, test.expected) {
t.Errorf("%s: unexpected result\ngot: %x\nwant: %x", test.name,
f2.n, test.expected)
continue
}
}
}
// hexToBytes converts the passed hex string into bytes and will panic if there
// is an error. This is only provided for the hard-coded constants so errors in
// the source code can be detected. It will only (and must only) be called with
// hard-coded values.
func hexToBytes(s string) []byte {
b, err := hex.DecodeString(s)
if err != nil {
panic("invalid hex in source file: " + s)
}
return b
}

View file

@ -5,6 +5,7 @@
// This file is ignored during the regular build due to the following build tag.
// It is called by go generate and used to automatically generate pre-computed
// tables used to accelerate operations.
//go:build ignore
// +build ignore
package main
@ -17,7 +18,7 @@ import (
"log"
"os"
"github.com/btcsuite/btcd/btcec"
"github.com/lbryio/lbcd/btcec"
)
func main() {

View file

@ -4,6 +4,7 @@
// This file is ignored during the regular build due to the following build tag.
// This build tag is set during go generate.
//go:build gensecp256k1
// +build gensecp256k1
package btcec

View file

@ -12,7 +12,7 @@ import (
"strings"
)
//go:generate go run -tags gensecp256k1 genprecomps.go
//go:rm -f gensecp256k1.go; generate go run -tags gensecp256k1 genprecomps.go
// loadS256BytePoints decompresses and deserializes the pre-computed byte points
// used to accelerate scalar base multiplication for the secp256k1 curve. This

View file

@ -232,11 +232,11 @@ func TestPubKeys(t *testing.T) {
var pkStr []byte
switch test.format {
case pubkeyUncompressed:
pkStr = (*PublicKey)(pk).SerializeUncompressed()
pkStr = pk.SerializeUncompressed()
case pubkeyCompressed:
pkStr = (*PublicKey)(pk).SerializeCompressed()
pkStr = pk.SerializeCompressed()
case pubkeyHybrid:
pkStr = (*PublicKey)(pk).SerializeHybrid()
pkStr = pk.SerializeHybrid()
}
if !bytes.Equal(test.key, pkStr) {
t.Errorf("%s pubkey: serialized keys do not match.",

View file

@ -284,6 +284,25 @@ func hashToInt(hash []byte, c elliptic.Curve) *big.Int {
// format and thus we match bitcoind's behaviour here.
func recoverKeyFromSignature(curve *KoblitzCurve, sig *Signature, msg []byte,
iter int, doChecks bool) (*PublicKey, error) {
// Parse and validate the R and S signature components.
//
// Fail if r and s are not in [1, N-1].
if sig.R.Cmp(curve.Params().N) != -1 {
return nil, errors.New("signature R is >= curve order")
}
if sig.R.Sign() == 0 {
return nil, errors.New("signature R is 0")
}
if sig.S.Cmp(curve.Params().N) != -1 {
return nil, errors.New("signature S is >= curve order")
}
if sig.S.Sign() == 0 {
return nil, errors.New("signature S is 0")
}
// 1.1 x = (n * i) + r
Rx := new(big.Int).Mul(curve.Params().N,
new(big.Int).SetInt64(int64(iter/2)))
@ -334,6 +353,10 @@ func recoverKeyFromSignature(curve *KoblitzCurve, sig *Signature, msg []byte,
// step to prevent the jacobian conversion back and forth.
Qx, Qy := curve.Add(sRx, sRy, minuseGx, minuseGy)
if Qx.Sign() == 0 && Qy.Sign() == 0 {
return nil, errors.New("point (Qx, Qy) equals the point at infinity")
}
return &PublicKey{
Curve: curve,
X: Qx,
@ -393,7 +416,7 @@ func SignCompact(curve *KoblitzCurve, key *PrivateKey,
// RecoverCompact verifies the compact signature "signature" of "hash" for the
// Koblitz curve in "curve". If the signature matches then the recovered public
// key will be returned as well as a boolen if the original key was compressed
// key will be returned as well as a boolean if the original key was compressed
// or not, else an error will be returned.
func RecoverCompact(curve *KoblitzCurve, signature,
hash []byte) (*PublicKey, bool, error) {
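For orientation only (this example is not part of the diff), a round trip through SignCompact and RecoverCompact might look like the sketch below. It assumes the lbcd fork keeps the btcec API visible in these hunks (S256, NewPrivateKey, SignCompact, RecoverCompact) plus the PubKey and SerializeCompressed helpers from the existing package:

package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/lbryio/lbcd/btcec"
)

func main() {
	curve := btcec.S256()

	// Generate a throwaway key and compact-sign a toy message hash.
	priv, err := btcec.NewPrivateKey(curve)
	if err != nil {
		log.Fatal(err)
	}
	hashed := []byte("testing") // real code would pass a 32-byte hash

	sig, err := btcec.SignCompact(curve, priv, hashed, true)
	if err != nil {
		log.Fatal(err)
	}

	// Recover the public key and the compression flag from the signature.
	pub, wasCompressed, err := btcec.RecoverCompact(curve, sig, hashed)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("compressed:", wasCompressed, "matches:",
		bytes.Equal(pub.SerializeCompressed(), priv.PubKey().SerializeCompressed()))
}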

View file

@ -464,8 +464,7 @@ func TestSignatureSerialize(t *testing.T) {
func testSignCompact(t *testing.T, tag string, curve *KoblitzCurve,
data []byte, isCompressed bool) {
tmp, _ := NewPrivateKey(curve)
priv := (*PrivateKey)(tmp)
priv, _ := NewPrivateKey(curve)
hashed := []byte("testing")
sig, err := SignCompact(curve, priv, hashed, isCompressed)
@ -550,12 +549,52 @@ var recoveryTests = []struct {
sig: "0100b1693892219d736caba55bdb67216e485557ea6b6af75f37096c9aa6a5a75f00b940b1d03b21e36b0e47e79769f095fe2ab855bd91e3a38756b7d75a9c4549",
err: fmt.Errorf("invalid square root"),
},
{
// Point at infinity recovered
msg: "6b8d2c81b11b2d699528dde488dbdf2f94293d0d33c32e347f255fa4a6c1f0a9",
sig: "0079be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f817986b8d2c81b11b2d699528dde488dbdf2f94293d0d33c32e347f255fa4a6c1f0a9",
err: fmt.Errorf("point (Qx, Qy) equals the point at infinity"),
},
{
// Low R and S values.
msg: "ba09edc1275a285fb27bfe82c4eea240a907a0dbaf9e55764b8f318c37d5974f",
sig: "00000000000000000000000000000000000000000000000000000000000000002c0000000000000000000000000000000000000000000000000000000000000004",
pub: "04A7640409AA2083FDAD38B2D8DE1263B2251799591D840653FB02DBBA503D7745FCB83D80E08A1E02896BE691EA6AFFB8A35939A646F1FC79052A744B1C82EDC3",
},
{
// Zero R value
//
// Test case contributed by Ethereum Swarm: GH-1651
msg: "3060d2c77c1e192d62ad712fb400e04e6f779914a6876328ff3b213fa85d2012",
sig: "65000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000037a3",
err: fmt.Errorf("signature R is 0"),
},
{
// Zero R value
//
// Test case contributed by Ethereum Swarm: GH-1651
msg: "2bcebac60d8a78e520ae81c2ad586792df495ed429bd730dcd897b301932d054",
sig: "060000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000007c",
err: fmt.Errorf("signature R is 0"),
},
{
// R = N (curve order of secp256k1)
msg: "2bcebac60d8a78e520ae81c2ad586792df495ed429bd730dcd897b301932d054",
sig: "65fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd036414100000000000000000000000000000000000000000000000000000000000037a3",
err: fmt.Errorf("signature R is >= curve order"),
},
{
// Zero S value
msg: "ce0677bb30baa8cf067c88db9811f4333d131bf8bcf12fe7065d211dce971008",
sig: "0190f27b8b488db00b00606796d2987f6a5f59ae62ea05effe84fef5b8b0e549980000000000000000000000000000000000000000000000000000000000000000",
err: fmt.Errorf("signature S is 0"),
},
{
// S = N (curve order of secp256k1)
msg: "ce0677bb30baa8cf067c88db9811f4333d131bf8bcf12fe7065d211dce971008",
sig: "0190f27b8b488db00b00606796d2987f6a5f59ae62ea05effe84fef5b8b0e54998fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141",
err: fmt.Errorf("signature S is >= curve order"),
},
}
func TestRecoverCompact(t *testing.T) {

View file

@ -1,70 +1,8 @@
btcjson
=======
[![Build Status](https://travis-ci.org/btcsuite/btcd.png?branch=master)](https://travis-ci.org/btcsuite/btcd)
[![ISC License](http://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](http://godoc.org/github.com/btcsuite/btcd/btcjson)
Package btcjson implements concrete types for marshalling to and from the
bitcoin JSON-RPC API. A comprehensive suite of tests is provided to ensure
proper functionality.
Although this package was primarily written for the btcsuite, it has
intentionally been designed so it can be used as a standalone package for any
projects needing to marshal to and from bitcoin JSON-RPC requests and responses.
Note that although it's possible to use this package directly to implement an
RPC client, it is not recommended since it is only intended as an infrastructure
package. Instead, RPC clients should use the
[btcrpcclient](https://github.com/btcsuite/btcrpcclient) package which provides
a full blown RPC client with many features such as automatic connection
management, websocket support, automatic notification re-registration on
reconnect, and conversion from the raw underlying RPC types (strings, floats,
ints, etc) to higher-level types with many nice and useful properties.
## Installation and Updating
```bash
$ go get -u github.com/btcsuite/btcd/btcjson
```
## Examples
* [Marshal Command](http://godoc.org/github.com/btcsuite/btcd/btcjson#example-MarshalCmd)
Demonstrates how to create and marshal a command into a JSON-RPC request.
* [Unmarshal Command](http://godoc.org/github.com/btcsuite/btcd/btcjson#example-UnmarshalCmd)
Demonstrates how to unmarshal a JSON-RPC request and then unmarshal the
concrete request into a concrete command.
* [Marshal Response](http://godoc.org/github.com/btcsuite/btcd/btcjson#example-MarshalResponse)
Demonstrates how to marshal a JSON-RPC response.
* [Unmarshal Response](http://godoc.org/github.com/btcsuite/btcd/btcjson#example-package--UnmarshalResponse)
Demonstrates how to unmarshal a JSON-RPC response and then unmarshal the
result field in the response to a concrete type.
## GPG Verification Key
All official release tags are signed by Conformal so users can ensure the code
has not been tampered with and is coming from the btcsuite developers. To
verify the signature perform the following:
- Download the public key from the Conformal website at
https://opensource.conformal.com/GIT-GPG-KEY-conformal.txt
- Import the public key into your GPG keyring:
```bash
gpg --import GIT-GPG-KEY-conformal.txt
```
- Verify the release tag with the following command where `TAG_NAME` is a
placeholder for the specific tag:
```bash
git tag -v TAG_NAME
```
## License
Package btcjson is licensed under the [copyfree](http://copyfree.org) ISC
License.

View file

@ -59,6 +59,23 @@ func NewDebugLevelCmd(levelSpec string) *DebugLevelCmd {
}
}
// GenerateToAddressCmd defines the generatetoaddress JSON-RPC command.
type GenerateToAddressCmd struct {
NumBlocks int64
Address string
MaxTries *int64 `jsonrpcdefault:"1000000"`
}
// NewGenerateToAddressCmd returns a new instance which can be used to issue a
// generatetoaddress JSON-RPC command.
func NewGenerateToAddressCmd(numBlocks int64, address string, maxTries *int64) *GenerateToAddressCmd {
return &GenerateToAddressCmd{
NumBlocks: numBlocks,
Address: address,
MaxTries: maxTries,
}
}
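As a usage sketch (illustrative, not part of the change), the new command type is built and marshalled like any other registered btcjson command; the MarshalCmd(version, id, cmd) signature is taken from the test updates further down in this diff:

package main

import (
	"fmt"
	"log"

	"github.com/lbryio/lbcd/btcjson"
)

func main() {
	// Passing nil for maxTries leaves the jsonrpcdefault of 1000000 to the
	// unmarshalling side.
	cmd := btcjson.NewGenerateToAddressCmd(1, "1Address", nil)

	marshalled, err := btcjson.MarshalCmd(btcjson.RpcVersion1, 1, cmd)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(marshalled))
	// {"jsonrpc":"1.0","method":"generatetoaddress","params":[1,"1Address"],"id":1}
}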
// GenerateCmd defines the generate JSON-RPC command.
type GenerateCmd struct {
NumBlocks uint32
@ -131,6 +148,7 @@ func init() {
MustRegisterCmd("debuglevel", (*DebugLevelCmd)(nil), flags)
MustRegisterCmd("node", (*NodeCmd)(nil), flags)
MustRegisterCmd("generate", (*GenerateCmd)(nil), flags)
MustRegisterCmd("generatetoaddress", (*GenerateToAddressCmd)(nil), flags)
MustRegisterCmd("getbestblock", (*GetBestBlockCmd)(nil), flags)
MustRegisterCmd("getcurrentnet", (*GetCurrentNetCmd)(nil), flags)
MustRegisterCmd("getheaders", (*GetHeadersCmd)(nil), flags)

View file

@ -12,7 +12,7 @@ import (
"reflect"
"testing"
"github.com/btcsuite/btcd/btcjson"
"github.com/lbryio/lbcd/btcjson"
)
// TestBtcdExtCmds tests all of the btcd extended commands marshal and unmarshal
@ -114,6 +114,24 @@ func TestBtcdExtCmds(t *testing.T) {
NumBlocks: 1,
},
},
{
name: "generatetoaddress",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("generatetoaddress", 1, "1Address")
},
staticCmd: func() interface{} {
return btcjson.NewGenerateToAddressCmd(1, "1Address", nil)
},
marshalled: `{"jsonrpc":"1.0","method":"generatetoaddress","params":[1,"1Address"],"id":1}`,
unmarshalled: &btcjson.GenerateToAddressCmd{
NumBlocks: 1,
Address: "1Address",
MaxTries: func() *int64 {
var i int64 = 1000000
return &i
}(),
},
},
{
name: "getbestblock",
newCmd: func() (interface{}, error) {
@ -193,7 +211,7 @@ func TestBtcdExtCmds(t *testing.T) {
for i, test := range tests {
// Marshal the command as created by the new static command
// creation function.
marshalled, err := btcjson.MarshalCmd(testID, test.staticCmd())
marshalled, err := btcjson.MarshalCmd(btcjson.RpcVersion1, testID, test.staticCmd())
if err != nil {
t.Errorf("MarshalCmd #%d (%s) unexpected error: %v", i,
test.name, err)
@ -217,7 +235,7 @@ func TestBtcdExtCmds(t *testing.T) {
// Marshal the command as created by the generic new command
// creation function.
marshalled, err = btcjson.MarshalCmd(testID, cmd)
marshalled, err = btcjson.MarshalCmd(btcjson.RpcVersion1, testID, cmd)
if err != nil {
t.Errorf("MarshalCmd #%d (%s) unexpected error: %v", i,
test.name, err)

View file

@ -9,7 +9,7 @@ import (
"encoding/json"
"testing"
"github.com/btcsuite/btcd/btcjson"
"github.com/lbryio/lbcd/btcjson"
)
// TestBtcdExtCustomResults ensures any results that have custom marshalling

View file

@ -11,7 +11,7 @@ import (
"reflect"
"testing"
"github.com/btcsuite/btcd/btcjson"
"github.com/lbryio/lbcd/btcjson"
)
// TestBtcWalletExtCmds tests all of the btcwallet extended commands marshal and
@ -145,7 +145,7 @@ func TestBtcWalletExtCmds(t *testing.T) {
for i, test := range tests {
// Marshal the command as created by the new static command
// creation function.
marshalled, err := btcjson.MarshalCmd(testID, test.staticCmd())
marshalled, err := btcjson.MarshalCmd(btcjson.RpcVersion1, testID, test.staticCmd())
if err != nil {
t.Errorf("MarshalCmd #%d (%s) unexpected error: %v", i,
test.name, err)
@ -169,7 +169,7 @@ func TestBtcWalletExtCmds(t *testing.T) {
// Marshal the command as created by the generic new command
// creation function.
marshalled, err = btcjson.MarshalCmd(testID, cmd)
marshalled, err = btcjson.MarshalCmd(btcjson.RpcVersion1, testID, cmd)
if err != nil {
t.Errorf("MarshalCmd #%d (%s) unexpected error: %v", i,
test.name, err)

View file

@ -8,10 +8,12 @@
package btcjson
import (
"encoding/hex"
"encoding/json"
"fmt"
"reflect"
"github.com/btcsuite/btcd/wire"
"github.com/lbryio/lbcd/wire"
)
// AddNodeSubCmd defines the type used in the addnode JSON-RPC command for the
@ -46,6 +48,15 @@ func NewAddNodeCmd(addr string, subCmd AddNodeSubCmd) *AddNodeCmd {
}
}
// ClearBannedCmd defines the clearbanned JSON-RPC command.
type ClearBannedCmd struct{}
// NewClearBannedCmd returns a new instance which can be used to issue a clearbanned
// JSON-RPC command.
func NewClearBannedCmd() *ClearBannedCmd {
return &ClearBannedCmd{}
}
// TransactionInput represents the inputs to a transaction. Specifically a
// transaction hash and output number pair.
type TransactionInput struct {
@ -56,20 +67,25 @@ type TransactionInput struct {
// CreateRawTransactionCmd defines the createrawtransaction JSON-RPC command.
type CreateRawTransactionCmd struct {
Inputs []TransactionInput
Amounts map[string]float64 `jsonrpcusage:"{\"address\":amount,...}"` // In BTC
Outputs map[string]interface{} `jsonrpcusage:"{\"address\":amount, \"data\":\"hex\", ...}"`
LockTime *int64
}
// NewCreateRawTransactionCmd returns a new instance which can be used to issue
// a createrawtransaction JSON-RPC command.
//
// Amounts are in BTC.
func NewCreateRawTransactionCmd(inputs []TransactionInput, amounts map[string]float64,
// Amounts are in BTC. Passing nil or an empty slice for the inputs is
// equivalent; both are interpreted as the empty slice.
func NewCreateRawTransactionCmd(inputs []TransactionInput, outputs map[string]interface{},
lockTime *int64) *CreateRawTransactionCmd {
// to make sure we're serializing this to the empty list and not null, we
// explicitly initialize the list
if inputs == nil {
inputs = []TransactionInput{}
}
return &CreateRawTransactionCmd{
Inputs: inputs,
Amounts: amounts,
Outputs: outputs,
LockTime: lockTime,
}
}
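To show why Amounts becomes a generic Outputs map (a sketch, not part of the diff), an OP_RETURN-style "data" entry can now sit alongside address/amount pairs; the hex payload is the one used by the test case added below:

package main

import (
	"fmt"
	"log"

	"github.com/lbryio/lbcd/btcjson"
)

func main() {
	inputs := []btcjson.TransactionInput{{Txid: "123", Vout: 1}}

	// Outputs may now mix address:amount pairs (amounts in BTC) with a raw
	// "data" entry carrying a hex-encoded payload.
	outputs := map[string]interface{}{
		"1Address": 0.0123,
		"data":     "6a134920616d204672616374616c456e6372797074",
	}

	cmd := btcjson.NewCreateRawTransactionCmd(inputs, outputs, nil)
	marshalled, err := btcjson.MarshalCmd(btcjson.RpcVersion1, 1, cmd)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(marshalled))
}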
@ -100,6 +116,65 @@ func NewDecodeScriptCmd(hexScript string) *DecodeScriptCmd {
}
}
// DeriveAddressesCmd defines the deriveaddresses JSON-RPC command.
type DeriveAddressesCmd struct {
Descriptor string
Range *DescriptorRange
}
// NewDeriveAddressesCmd returns a new instance which can be used to issue a
// deriveaddresses JSON-RPC command.
func NewDeriveAddressesCmd(descriptor string, descriptorRange *DescriptorRange) *DeriveAddressesCmd {
return &DeriveAddressesCmd{
Descriptor: descriptor,
Range: descriptorRange,
}
}
// ChangeType defines the different output types to use for the change address
// of a transaction built by the node.
type ChangeType string
var (
// ChangeTypeLegacy indicates a P2PKH change address type.
ChangeTypeLegacy ChangeType = "legacy"
// ChangeTypeP2SHSegWit indicates a P2WPKH-in-P2SH change address type.
ChangeTypeP2SHSegWit ChangeType = "p2sh-segwit"
// ChangeTypeBech32 indicates a P2WPKH change address type.
ChangeTypeBech32 ChangeType = "bech32"
)
// FundRawTransactionOpts are the different options that can be passed to the fundrawtransaction command
type FundRawTransactionOpts struct {
ChangeAddress *string `json:"changeAddress,omitempty"`
ChangePosition *int `json:"changePosition,omitempty"`
ChangeType *ChangeType `json:"change_type,omitempty"`
IncludeWatching *bool `json:"includeWatching,omitempty"`
LockUnspents *bool `json:"lockUnspents,omitempty"`
FeeRate *float64 `json:"feeRate,omitempty"` // BTC/kB
SubtractFeeFromOutputs []int `json:"subtractFeeFromOutputs,omitempty"`
Replaceable *bool `json:"replaceable,omitempty"`
ConfTarget *int `json:"conf_target,omitempty"`
EstimateMode *EstimateSmartFeeMode `json:"estimate_mode,omitempty"`
}
// FundRawTransactionCmd defines the fundrawtransaction JSON-RPC command
type FundRawTransactionCmd struct {
HexTx string
Options FundRawTransactionOpts
IsWitness *bool
}
// NewFundRawTransactionCmd returns a new instance which can be used to issue
// a fundrawtransaction JSON-RPC command
func NewFundRawTransactionCmd(serializedTx []byte, opts FundRawTransactionOpts, isWitness *bool) *FundRawTransactionCmd {
return &FundRawTransactionCmd{
HexTx: hex.EncodeToString(serializedTx),
Options: opts,
IsWitness: isWitness,
}
}
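A brief, illustrative use of the new fundrawtransaction plumbing; "deadbeef" is the same placeholder transaction hex the tests below use, and only two of the optional fields are set:

package main

import (
	"encoding/hex"
	"fmt"
	"log"

	"github.com/lbryio/lbcd/btcjson"
)

func main() {
	serializedTx, err := hex.DecodeString("deadbeef") // placeholder tx bytes
	if err != nil {
		log.Fatal(err)
	}

	// Send change to a legacy address type and target confirmation in 8 blocks;
	// every other option keeps its server-side default.
	changeType := btcjson.ChangeTypeLegacy
	confTarget := 8
	cmd := btcjson.NewFundRawTransactionCmd(serializedTx, btcjson.FundRawTransactionOpts{
		ChangeType: &changeType,
		ConfTarget: &confTarget,
	}, nil)

	marshalled, err := btcjson.MarshalCmd(btcjson.RpcVersion1, 1, cmd)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(marshalled))
}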
// GetAddedNodeInfoCmd defines the getaddednodeinfo JSON-RPC command.
type GetAddedNodeInfoCmd struct {
DNS bool
@ -130,8 +205,7 @@ func NewGetBestBlockHashCmd() *GetBestBlockHashCmd {
// GetBlockCmd defines the getblock JSON-RPC command.
type GetBlockCmd struct {
Hash string
Verbose *bool `jsonrpcdefault:"true"`
VerboseTx *bool `jsonrpcdefault:"false"`
Verbosity *int `jsonrpcdefault:"1"`
}
// NewGetBlockCmd returns a new instance which can be used to issue a getblock
@ -139,11 +213,10 @@ type GetBlockCmd struct {
//
// The parameters which are pointers indicate they are optional. Passing nil
// for optional parameters will use the default value.
func NewGetBlockCmd(hash string, verbose, verboseTx *bool) *GetBlockCmd {
func NewGetBlockCmd(hash string, verbosity *int) *GetBlockCmd {
return &GetBlockCmd{
Hash: hash,
Verbose: verbose,
VerboseTx: verboseTx,
Verbosity: verbosity,
}
}
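Because the old Verbose/VerboseTx booleans collapse into a single Verbosity integer, callers now pick the response shape with 0, 1, or 2; a quick sketch (not part of the diff):

package main

import (
	"fmt"
	"log"

	"github.com/lbryio/lbcd/btcjson"
)

func main() {
	// 0: hex-encoded block, 1: block with tx hashes, 2: block with raw txs.
	for _, verbosity := range []int{0, 1, 2} {
		cmd := btcjson.NewGetBlockCmd("123", btcjson.Int(verbosity))
		marshalled, err := btcjson.MarshalCmd(btcjson.RpcVersion1, 1, cmd)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(string(marshalled))
	}
}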
@ -165,6 +238,33 @@ func NewGetBlockCountCmd() *GetBlockCountCmd {
return &GetBlockCountCmd{}
}
// FilterTypeName defines the type used in the getblockfilter JSON-RPC command for the
// filter type field.
type FilterTypeName string
const (
// FilterTypeBasic is the basic filter type defined in BIP0158.
FilterTypeBasic FilterTypeName = "basic"
)
// GetBlockFilterCmd defines the getblockfilter JSON-RPC command.
type GetBlockFilterCmd struct {
BlockHash string // The hash of the block
FilterType *FilterTypeName // The type name of the filter, default=basic
}
// NewGetBlockFilterCmd returns a new instance which can be used to issue a
// getblockfilter JSON-RPC command.
//
// The parameters which are pointers indicate they are optional. Passing nil
// for optional parameters will use the default value.
func NewGetBlockFilterCmd(blockHash string, filterType *FilterTypeName) *GetBlockFilterCmd {
return &GetBlockFilterCmd{
BlockHash: blockHash,
FilterType: filterType,
}
}
// GetBlockHashCmd defines the getblockhash JSON-RPC command.
type GetBlockHashCmd struct {
Index int64
@ -193,6 +293,50 @@ func NewGetBlockHeaderCmd(hash string, verbose *bool) *GetBlockHeaderCmd {
}
}
// HashOrHeight defines a type that can be used as hash_or_height value in JSON-RPC commands.
type HashOrHeight struct {
Value interface{}
}
// MarshalJSON implements the json.Marshaler interface
func (h HashOrHeight) MarshalJSON() ([]byte, error) {
return json.Marshal(h.Value)
}
// UnmarshalJSON implements the json.Unmarshaler interface
func (h *HashOrHeight) UnmarshalJSON(data []byte) error {
var unmarshalled interface{}
if err := json.Unmarshal(data, &unmarshalled); err != nil {
return err
}
switch v := unmarshalled.(type) {
case float64:
h.Value = int(v)
case string:
h.Value = v
default:
return fmt.Errorf("invalid hash_or_height value: %v", unmarshalled)
}
return nil
}
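The custom UnmarshalJSON above means the same field accepts either a block height (JSON number) or a block hash (JSON string); a minimal illustration (not part of the diff):

package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/lbryio/lbcd/btcjson"
)

func main() {
	var h btcjson.HashOrHeight

	// A JSON number is stored as an int...
	if err := json.Unmarshal([]byte(`123`), &h); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%T %v\n", h.Value, h.Value) // int 123

	// ...and a JSON string is kept as a string.
	if err := json.Unmarshal([]byte(`"deadbeef"`), &h); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%T %v\n", h.Value, h.Value) // string deadbeef
}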
// GetBlockStatsCmd defines the getblockstats JSON-RPC command.
type GetBlockStatsCmd struct {
HashOrHeight HashOrHeight
Stats *[]string
}
// NewGetBlockStatsCmd returns a new instance which can be used to issue a
// getblockstats JSON-RPC command. Either height or hash must be specified.
func NewGetBlockStatsCmd(hashOrHeight HashOrHeight, stats *[]string) *GetBlockStatsCmd {
return &GetBlockStatsCmd{
HashOrHeight: hashOrHeight,
Stats: stats,
}
}
// TemplateRequest is a request object as defined in BIP22
// (https://en.bitcoin.it/wiki/BIP_0022); it is optionally provided as a
// pointer argument to GetBlockTemplateCmd.
@ -216,6 +360,10 @@ type TemplateRequest struct {
// "proposal".
Data string `json:"data,omitempty"`
WorkID string `json:"workid,omitempty"`
// list of supported softfork deployments, by name
// Ref: https://en.bitcoin.it/wiki/BIP_0009#getblocktemplate_changes.
Rules []string `json:"rules,omitempty"`
}
// convertTemplateRequestField potentially converts the provided value as
@ -321,6 +469,24 @@ func NewGetChainTipsCmd() *GetChainTipsCmd {
return &GetChainTipsCmd{}
}
// GetChainTxStatsCmd defines the getchaintxstats JSON-RPC command.
type GetChainTxStatsCmd struct {
NBlocks *int32
BlockHash *string
}
// NewGetChainTxStatsCmd returns a new instance which can be used to issue a
// getchaintxstats JSON-RPC command.
//
// The parameters which are pointers indicate they are optional. Passing nil
// for optional parameters will use the default value.
func NewGetChainTxStatsCmd(nBlocks *int32, blockHash *string) *GetChainTxStatsCmd {
return &GetChainTxStatsCmd{
NBlocks: nBlocks,
BlockHash: blockHash,
}
}
// GetConnectionCountCmd defines the getconnectioncount JSON-RPC command.
type GetConnectionCountCmd struct{}
@ -330,6 +496,19 @@ func NewGetConnectionCountCmd() *GetConnectionCountCmd {
return &GetConnectionCountCmd{}
}
// GetDescriptorInfoCmd defines the getdescriptorinfo JSON-RPC command.
type GetDescriptorInfoCmd struct {
Descriptor string
}
// NewGetDescriptorInfoCmd returns a new instance which can be used to issue a
// getdescriptorinfo JSON-RPC command.
func NewGetDescriptorInfoCmd(descriptor string) *GetDescriptorInfoCmd {
return &GetDescriptorInfoCmd{
Descriptor: descriptor,
}
}
// GetDifficultyCmd defines the getdifficulty JSON-RPC command.
type GetDifficultyCmd struct{}
@ -433,6 +612,22 @@ func NewGetNetworkHashPSCmd(numBlocks, height *int) *GetNetworkHashPSCmd {
}
}
// GetNodeAddressesCmd defines the getnodeaddresses JSON-RPC command.
type GetNodeAddressesCmd struct {
Count *int32 `jsonrpcdefault:"1"`
}
// NewGetNodeAddressesCmd returns a new instance which can be used to issue a
// getnodeaddresses JSON-RPC command.
//
// The parameters which are pointers indicate they are optional. Passing nil
// for optional parameters will use the default value.
func NewGetNodeAddressesCmd(count *int32) *GetNodeAddressesCmd {
return &GetNodeAddressesCmd{
Count: count,
}
}
// GetPeerInfoCmd defines the getpeerinfo JSON-RPC command.
type GetPeerInfoCmd struct{}
@ -464,7 +659,7 @@ func NewGetRawMempoolCmd(verbose *bool) *GetRawMempoolCmd {
// Core even though it really should be a bool.
type GetRawTransactionCmd struct {
Txid string
Verbose *int `jsonrpcdefault:"0"`
Verbose *bool `jsonrpcdefault:"false"`
}
// NewGetRawTransactionCmd returns a new instance which can be used to issue a
@ -472,7 +667,7 @@ type GetRawTransactionCmd struct {
//
// The parameters which are pointers indicate they are optional. Passing nil
// for optional parameters will use the default value.
func NewGetRawTransactionCmd(txHash string, verbose *int) *GetRawTransactionCmd {
func NewGetRawTransactionCmd(txHash string, verbose *bool) *GetRawTransactionCmd {
return &GetRawTransactionCmd{
Txid: txHash,
Verbose: verbose,
@ -571,6 +766,15 @@ func NewInvalidateBlockCmd(blockHash string) *InvalidateBlockCmd {
}
}
// ListBannedCmd defines the listbanned JSON-RPC command.
type ListBannedCmd struct{}
// NewListBannedCmd returns a new instance which can be used to issue a listbanned
// JSON-RPC command.
func NewListBannedCmd() *ListBannedCmd {
return &ListBannedCmd{}
}
// PingCmd defines the ping JSON-RPC command.
type PingCmd struct{}
@ -634,11 +838,60 @@ func NewSearchRawTransactionsCmd(address string, verbose, skip, count *int, vinE
}
}
// AllowHighFeesOrMaxFeeRate defines a type that can either be the legacy
// allowhighfees boolean field or the new maxfeerate int field.
type AllowHighFeesOrMaxFeeRate struct {
Value interface{}
}
// String returns the string representation of this struct, used for printing
// the marshaled default value in the help text.
func (a AllowHighFeesOrMaxFeeRate) String() string {
b, _ := a.MarshalJSON()
return string(b)
}
// MarshalJSON implements the json.Marshaler interface
func (a AllowHighFeesOrMaxFeeRate) MarshalJSON() ([]byte, error) {
// The default value is false which only works with the legacy versions.
if a.Value == nil ||
(reflect.ValueOf(a.Value).Kind() == reflect.Ptr &&
reflect.ValueOf(a.Value).IsNil()) {
return json.Marshal(false)
}
return json.Marshal(a.Value)
}
// UnmarshalJSON implements the json.Unmarshaler interface
func (a *AllowHighFeesOrMaxFeeRate) UnmarshalJSON(data []byte) error {
if len(data) == 0 {
return nil
}
var unmarshalled interface{}
if err := json.Unmarshal(data, &unmarshalled); err != nil {
return err
}
switch v := unmarshalled.(type) {
case bool:
a.Value = Bool(v)
case float64:
a.Value = Int32(int32(v))
default:
return fmt.Errorf("invalid allowhighfees or maxfeerate value: "+
"%v", unmarshalled)
}
return nil
}
// SendRawTransactionCmd defines the sendrawtransaction JSON-RPC command.
type SendRawTransactionCmd struct {
HexTx string
AllowHighFees *bool `jsonrpcdefault:"false"`
MaxFeeRate *int32
HexTx string
FeeSetting *AllowHighFeesOrMaxFeeRate `jsonrpcdefault:"false"`
}
// NewSendRawTransactionCmd returns a new instance which can be used to issue a
@ -648,8 +901,10 @@ type SendRawTransactionCmd struct {
// for optional parameters will use the default value.
func NewSendRawTransactionCmd(hexTx string, allowHighFees *bool) *SendRawTransactionCmd {
return &SendRawTransactionCmd{
HexTx: hexTx,
AllowHighFees: allowHighFees,
HexTx: hexTx,
FeeSetting: &AllowHighFeesOrMaxFeeRate{
Value: allowHighFees,
},
}
}
@ -659,8 +914,43 @@ func NewSendRawTransactionCmd(hexTx string, allowHighFees *bool) *SendRawTransac
// A 0 maxFeeRate indicates that a maximum fee rate won't be enforced.
func NewBitcoindSendRawTransactionCmd(hexTx string, maxFeeRate int32) *SendRawTransactionCmd {
return &SendRawTransactionCmd{
HexTx: hexTx,
MaxFeeRate: &maxFeeRate,
HexTx: hexTx,
FeeSetting: &AllowHighFeesOrMaxFeeRate{
Value: &maxFeeRate,
},
}
}
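Both constructors now funnel into the single FeeSetting field, keeping the wire format compatible with the legacy allowhighfees boolean as well as bitcoind's newer maxfeerate number; an illustrative sketch of the two paths:

package main

import (
	"fmt"
	"log"

	"github.com/lbryio/lbcd/btcjson"
)

func main() {
	// Legacy form: allowhighfees as a boolean (nil marshals as false).
	legacy := btcjson.NewSendRawTransactionCmd("1122", btcjson.Bool(false))

	// bitcoind >= 0.19.0 form: maxfeerate as a number (0 means no cap).
	modern := btcjson.NewBitcoindSendRawTransactionCmd("1122", 1234)

	for _, cmd := range []interface{}{legacy, modern} {
		marshalled, err := btcjson.MarshalCmd(btcjson.RpcVersion1, 1, cmd)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(string(marshalled))
	}
}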
// SetBanSubCmd defines the type used in the setban JSON-RPC command for the
// sub command field.
type SetBanSubCmd string
const (
// SBAdd indicates the specified host should be added to the ban list.
SBAdd SetBanSubCmd = "add"
// SBRemove indicates the specified peer should be removed.
SBRemove SetBanSubCmd = "remove"
)
// SetBanCmd defines the setban JSON-RPC command.
type SetBanCmd struct {
Addr string
SubCmd SetBanSubCmd `jsonrpcusage:"\"add|remove\""`
BanTime *int `jsonrpcdefault:"0"`
Absolute *bool `jsonrpcdefault:"false"`
}
// NewSetBanCmd returns a new instance which can be used to issue a setban
// JSON-RPC command.
func NewSetBanCmd(addr string, subCmd SetBanSubCmd, banTime *int,
absolute *bool) *SetBanCmd {
return &SetBanCmd{
Addr: addr,
SubCmd: subCmd,
BanTime: banTime,
Absolute: absolute,
}
}
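A quick illustrative construction of the ban commands (not part of the diff; the address is a placeholder):

package main

import (
	"fmt"
	"log"

	"github.com/lbryio/lbcd/btcjson"
)

func main() {
	// Ban a peer for one hour; Absolute keeps its default of false, so the
	// ban time is relative to now.
	banTime := 3600
	cmd := btcjson.NewSetBanCmd("192.168.0.6", btcjson.SBAdd, &banTime, nil)

	marshalled, err := btcjson.MarshalCmd(btcjson.RpcVersion1, 1, cmd)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(marshalled))
	// Lifting the ban later would use SBRemove, or NewClearBannedCmd to wipe
	// the whole list; NewListBannedCmd inspects it.
}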
@ -682,6 +972,24 @@ func NewSetGenerateCmd(generate bool, genProcLimit *int) *SetGenerateCmd {
}
}
// SignMessageWithPrivKeyCmd defines the signmessagewithprivkey JSON-RPC command.
type SignMessageWithPrivKeyCmd struct {
PrivKey string // base 58 Wallet Import format private key
Message string // Message to sign
}
// NewSignMessageWithPrivKey returns a new instance which can be used to issue a
// signmessagewithprivkey JSON-RPC command.
//
// The first parameter is a private key in base 58 Wallet Import format.
// The second parameter is the message to sign.
func NewSignMessageWithPrivKey(privKey, message string) *SignMessageWithPrivKeyCmd {
return &SignMessageWithPrivKeyCmd{
PrivKey: privKey,
Message: message,
}
}
// StopCmd defines the stop JSON-RPC command.
type StopCmd struct{}
@ -793,18 +1101,24 @@ func init() {
MustRegisterCmd("createrawtransaction", (*CreateRawTransactionCmd)(nil), flags)
MustRegisterCmd("decoderawtransaction", (*DecodeRawTransactionCmd)(nil), flags)
MustRegisterCmd("decodescript", (*DecodeScriptCmd)(nil), flags)
MustRegisterCmd("deriveaddresses", (*DeriveAddressesCmd)(nil), flags)
MustRegisterCmd("fundrawtransaction", (*FundRawTransactionCmd)(nil), flags)
MustRegisterCmd("getaddednodeinfo", (*GetAddedNodeInfoCmd)(nil), flags)
MustRegisterCmd("getbestblockhash", (*GetBestBlockHashCmd)(nil), flags)
MustRegisterCmd("getblock", (*GetBlockCmd)(nil), flags)
MustRegisterCmd("getblockchaininfo", (*GetBlockChainInfoCmd)(nil), flags)
MustRegisterCmd("getblockcount", (*GetBlockCountCmd)(nil), flags)
MustRegisterCmd("getblockfilter", (*GetBlockFilterCmd)(nil), flags)
MustRegisterCmd("getblockhash", (*GetBlockHashCmd)(nil), flags)
MustRegisterCmd("getblockheader", (*GetBlockHeaderCmd)(nil), flags)
MustRegisterCmd("getblockstats", (*GetBlockStatsCmd)(nil), flags)
MustRegisterCmd("getblocktemplate", (*GetBlockTemplateCmd)(nil), flags)
MustRegisterCmd("getcfilter", (*GetCFilterCmd)(nil), flags)
MustRegisterCmd("getcfilterheader", (*GetCFilterHeaderCmd)(nil), flags)
MustRegisterCmd("getchaintips", (*GetChainTipsCmd)(nil), flags)
MustRegisterCmd("getchaintxstats", (*GetChainTxStatsCmd)(nil), flags)
MustRegisterCmd("getconnectioncount", (*GetConnectionCountCmd)(nil), flags)
MustRegisterCmd("getdescriptorinfo", (*GetDescriptorInfoCmd)(nil), flags)
MustRegisterCmd("getdifficulty", (*GetDifficultyCmd)(nil), flags)
MustRegisterCmd("getgenerate", (*GetGenerateCmd)(nil), flags)
MustRegisterCmd("gethashespersec", (*GetHashesPerSecCmd)(nil), flags)
@ -815,7 +1129,11 @@ func init() {
MustRegisterCmd("getnetworkinfo", (*GetNetworkInfoCmd)(nil), flags)
MustRegisterCmd("getnettotals", (*GetNetTotalsCmd)(nil), flags)
MustRegisterCmd("getnetworkhashps", (*GetNetworkHashPSCmd)(nil), flags)
MustRegisterCmd("getnodeaddresses", (*GetNodeAddressesCmd)(nil), flags)
MustRegisterCmd("getpeerinfo", (*GetPeerInfoCmd)(nil), flags)
MustRegisterCmd("listbanned", (*ListBannedCmd)(nil), flags)
MustRegisterCmd("setban", (*SetBanCmd)(nil), flags)
MustRegisterCmd("clearbanned", (*ClearBannedCmd)(nil), flags)
MustRegisterCmd("getrawmempool", (*GetRawMempoolCmd)(nil), flags)
MustRegisterCmd("getrawtransaction", (*GetRawTransactionCmd)(nil), flags)
MustRegisterCmd("gettxout", (*GetTxOutCmd)(nil), flags)
@ -830,6 +1148,7 @@ func init() {
MustRegisterCmd("searchrawtransactions", (*SearchRawTransactionsCmd)(nil), flags)
MustRegisterCmd("sendrawtransaction", (*SendRawTransactionCmd)(nil), flags)
MustRegisterCmd("setgenerate", (*SetGenerateCmd)(nil), flags)
MustRegisterCmd("signmessagewithprivkey", (*SignMessageWithPrivKeyCmd)(nil), flags)
MustRegisterCmd("stop", (*StopCmd)(nil), flags)
MustRegisterCmd("submitblock", (*SubmitBlockCmd)(nil), flags)
MustRegisterCmd("uptime", (*UptimeCmd)(nil), flags)

View file

@ -6,13 +6,14 @@ package btcjson_test
import (
"bytes"
"encoding/hex"
"encoding/json"
"fmt"
"reflect"
"testing"
"github.com/btcsuite/btcd/btcjson"
"github.com/btcsuite/btcd/wire"
"github.com/lbryio/lbcd/btcjson"
"github.com/lbryio/lbcd/wire"
)
// TestChainSvrCmds tests all of the chain server commands marshal and unmarshal
@ -51,13 +52,28 @@ func TestChainSvrCmds(t *testing.T) {
txInputs := []btcjson.TransactionInput{
{Txid: "123", Vout: 1},
}
amounts := map[string]float64{"456": .0123}
return btcjson.NewCreateRawTransactionCmd(txInputs, amounts, nil)
txOutputs := map[string]interface{}{"456": .0123}
return btcjson.NewCreateRawTransactionCmd(txInputs, txOutputs, nil)
},
marshalled: `{"jsonrpc":"1.0","method":"createrawtransaction","params":[[{"txid":"123","vout":1}],{"456":0.0123}],"id":1}`,
unmarshalled: &btcjson.CreateRawTransactionCmd{
Inputs: []btcjson.TransactionInput{{Txid: "123", Vout: 1}},
Amounts: map[string]float64{"456": .0123},
Outputs: map[string]interface{}{"456": .0123},
},
},
{
name: "createrawtransaction - no inputs",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("createrawtransaction", `[]`, `{"456":0.0123}`)
},
staticCmd: func() interface{} {
txOutputs := map[string]interface{}{"456": .0123}
return btcjson.NewCreateRawTransactionCmd(nil, txOutputs, nil)
},
marshalled: `{"jsonrpc":"1.0","method":"createrawtransaction","params":[[],{"456":0.0123}],"id":1}`,
unmarshalled: &btcjson.CreateRawTransactionCmd{
Inputs: []btcjson.TransactionInput{},
Outputs: map[string]interface{}{"456": .0123},
},
},
{
@ -70,17 +86,137 @@ func TestChainSvrCmds(t *testing.T) {
txInputs := []btcjson.TransactionInput{
{Txid: "123", Vout: 1},
}
amounts := map[string]float64{"456": .0123}
return btcjson.NewCreateRawTransactionCmd(txInputs, amounts, btcjson.Int64(12312333333))
txOutputs := map[string]interface{}{"456": .0123}
return btcjson.NewCreateRawTransactionCmd(txInputs, txOutputs, btcjson.Int64(12312333333))
},
marshalled: `{"jsonrpc":"1.0","method":"createrawtransaction","params":[[{"txid":"123","vout":1}],{"456":0.0123},12312333333],"id":1}`,
unmarshalled: &btcjson.CreateRawTransactionCmd{
Inputs: []btcjson.TransactionInput{{Txid: "123", Vout: 1}},
Amounts: map[string]float64{"456": .0123},
Outputs: map[string]interface{}{"456": .0123},
LockTime: btcjson.Int64(12312333333),
},
},
{
name: "createrawtransaction with data",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("createrawtransaction", `[{"txid":"123","vout":1}]`,
`{"data":"6a134920616d204672616374616c456e6372797074"}`)
},
staticCmd: func() interface{} {
txInputs := []btcjson.TransactionInput{
{Txid: "123", Vout: 1},
}
txOutputs := map[string]interface{}{"data": "6a134920616d204672616374616c456e6372797074"}
return btcjson.NewCreateRawTransactionCmd(txInputs, txOutputs, nil)
},
marshalled: `{"jsonrpc":"1.0","method":"createrawtransaction","params":[[{"txid":"123","vout":1}],{"data":"6a134920616d204672616374616c456e6372797074"}],"id":1}`,
unmarshalled: &btcjson.CreateRawTransactionCmd{
Inputs: []btcjson.TransactionInput{{Txid: "123", Vout: 1}},
Outputs: map[string]interface{}{"data": "6a134920616d204672616374616c456e6372797074"},
},
},
{
name: "fundrawtransaction - empty opts",
newCmd: func() (i interface{}, e error) {
return btcjson.NewCmd("fundrawtransaction", "deadbeef", "{}")
},
staticCmd: func() interface{} {
deadbeef, err := hex.DecodeString("deadbeef")
if err != nil {
panic(err)
}
return btcjson.NewFundRawTransactionCmd(deadbeef, btcjson.FundRawTransactionOpts{}, nil)
},
marshalled: `{"jsonrpc":"1.0","method":"fundrawtransaction","params":["deadbeef",{}],"id":1}`,
unmarshalled: &btcjson.FundRawTransactionCmd{
HexTx: "deadbeef",
Options: btcjson.FundRawTransactionOpts{},
IsWitness: nil,
},
},
{
name: "fundrawtransaction - full opts",
newCmd: func() (i interface{}, e error) {
return btcjson.NewCmd("fundrawtransaction", "deadbeef", `{"changeAddress":"bcrt1qeeuctq9wutlcl5zatge7rjgx0k45228cxez655","changePosition":1,"change_type":"legacy","includeWatching":true,"lockUnspents":true,"feeRate":0.7,"subtractFeeFromOutputs":[0],"replaceable":true,"conf_target":8,"estimate_mode":"ECONOMICAL"}`)
},
staticCmd: func() interface{} {
deadbeef, err := hex.DecodeString("deadbeef")
if err != nil {
panic(err)
}
changeAddress := "bcrt1qeeuctq9wutlcl5zatge7rjgx0k45228cxez655"
change := 1
changeType := btcjson.ChangeTypeLegacy
watching := true
lockUnspents := true
feeRate := 0.7
replaceable := true
confTarget := 8
return btcjson.NewFundRawTransactionCmd(deadbeef, btcjson.FundRawTransactionOpts{
ChangeAddress: &changeAddress,
ChangePosition: &change,
ChangeType: &changeType,
IncludeWatching: &watching,
LockUnspents: &lockUnspents,
FeeRate: &feeRate,
SubtractFeeFromOutputs: []int{0},
Replaceable: &replaceable,
ConfTarget: &confTarget,
EstimateMode: &btcjson.EstimateModeEconomical,
}, nil)
},
marshalled: `{"jsonrpc":"1.0","method":"fundrawtransaction","params":["deadbeef",{"changeAddress":"bcrt1qeeuctq9wutlcl5zatge7rjgx0k45228cxez655","changePosition":1,"change_type":"legacy","includeWatching":true,"lockUnspents":true,"feeRate":0.7,"subtractFeeFromOutputs":[0],"replaceable":true,"conf_target":8,"estimate_mode":"ECONOMICAL"}],"id":1}`,
unmarshalled: func() interface{} {
changeAddress := "bcrt1qeeuctq9wutlcl5zatge7rjgx0k45228cxez655"
change := 1
changeType := btcjson.ChangeTypeLegacy
watching := true
lockUnspents := true
feeRate := 0.7
replaceable := true
confTarget := 8
return &btcjson.FundRawTransactionCmd{
HexTx: "deadbeef",
Options: btcjson.FundRawTransactionOpts{
ChangeAddress: &changeAddress,
ChangePosition: &change,
ChangeType: &changeType,
IncludeWatching: &watching,
LockUnspents: &lockUnspents,
FeeRate: &feeRate,
SubtractFeeFromOutputs: []int{0},
Replaceable: &replaceable,
ConfTarget: &confTarget,
EstimateMode: &btcjson.EstimateModeEconomical,
},
IsWitness: nil,
}
}(),
},
{
name: "fundrawtransaction - iswitness",
newCmd: func() (i interface{}, e error) {
return btcjson.NewCmd("fundrawtransaction", "deadbeef", "{}", true)
},
staticCmd: func() interface{} {
deadbeef, err := hex.DecodeString("deadbeef")
if err != nil {
panic(err)
}
t := true
return btcjson.NewFundRawTransactionCmd(deadbeef, btcjson.FundRawTransactionOpts{}, &t)
},
marshalled: `{"jsonrpc":"1.0","method":"fundrawtransaction","params":["deadbeef",{},true],"id":1}`,
unmarshalled: &btcjson.FundRawTransactionCmd{
HexTx: "deadbeef",
Options: btcjson.FundRawTransactionOpts{},
IsWitness: func() *bool {
t := true
return &t
}(),
},
},
{
name: "decoderawtransaction",
newCmd: func() (interface{}, error) {
@ -103,6 +239,51 @@ func TestChainSvrCmds(t *testing.T) {
marshalled: `{"jsonrpc":"1.0","method":"decodescript","params":["00"],"id":1}`,
unmarshalled: &btcjson.DecodeScriptCmd{HexScript: "00"},
},
{
name: "deriveaddresses no range",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("deriveaddresses", "00")
},
staticCmd: func() interface{} {
return btcjson.NewDeriveAddressesCmd("00", nil)
},
marshalled: `{"jsonrpc":"1.0","method":"deriveaddresses","params":["00"],"id":1}`,
unmarshalled: &btcjson.DeriveAddressesCmd{Descriptor: "00"},
},
{
name: "deriveaddresses int range",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd(
"deriveaddresses", "00", btcjson.DescriptorRange{Value: 2})
},
staticCmd: func() interface{} {
return btcjson.NewDeriveAddressesCmd(
"00", &btcjson.DescriptorRange{Value: 2})
},
marshalled: `{"jsonrpc":"1.0","method":"deriveaddresses","params":["00",2],"id":1}`,
unmarshalled: &btcjson.DeriveAddressesCmd{
Descriptor: "00",
Range: &btcjson.DescriptorRange{Value: 2},
},
},
{
name: "deriveaddresses slice range",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd(
"deriveaddresses", "00",
btcjson.DescriptorRange{Value: []int{0, 2}},
)
},
staticCmd: func() interface{} {
return btcjson.NewDeriveAddressesCmd(
"00", &btcjson.DescriptorRange{Value: []int{0, 2}})
},
marshalled: `{"jsonrpc":"1.0","method":"deriveaddresses","params":["00",[0,2]],"id":1}`,
unmarshalled: &btcjson.DeriveAddressesCmd{
Descriptor: "00",
Range: &btcjson.DescriptorRange{Value: []int{0, 2}},
},
},
{
name: "getaddednodeinfo",
newCmd: func() (interface{}, error) {
@ -141,51 +322,58 @@ func TestChainSvrCmds(t *testing.T) {
},
{
name: "getblock",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("getblock", "123", btcjson.Int(0))
},
staticCmd: func() interface{} {
return btcjson.NewGetBlockCmd("123", btcjson.Int(0))
},
marshalled: `{"jsonrpc":"1.0","method":"getblock","params":["123",0],"id":1}`,
unmarshalled: &btcjson.GetBlockCmd{
Hash: "123",
Verbosity: btcjson.Int(0),
},
},
{
name: "getblock default verbosity",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("getblock", "123")
},
staticCmd: func() interface{} {
return btcjson.NewGetBlockCmd("123", nil, nil)
return btcjson.NewGetBlockCmd("123", nil)
},
marshalled: `{"jsonrpc":"1.0","method":"getblock","params":["123"],"id":1}`,
unmarshalled: &btcjson.GetBlockCmd{
Hash: "123",
Verbose: btcjson.Bool(true),
VerboseTx: btcjson.Bool(false),
Verbosity: btcjson.Int(1),
},
},
{
name: "getblock required optional1",
newCmd: func() (interface{}, error) {
// Intentionally use a source param that is
// more pointers than the destination to
// exercise that path.
verbosePtr := btcjson.Bool(true)
return btcjson.NewCmd("getblock", "123", &verbosePtr)
return btcjson.NewCmd("getblock", "123", btcjson.Int(1))
},
staticCmd: func() interface{} {
return btcjson.NewGetBlockCmd("123", btcjson.Bool(true), nil)
return btcjson.NewGetBlockCmd("123", btcjson.Int(1))
},
marshalled: `{"jsonrpc":"1.0","method":"getblock","params":["123",true],"id":1}`,
marshalled: `{"jsonrpc":"1.0","method":"getblock","params":["123",1],"id":1}`,
unmarshalled: &btcjson.GetBlockCmd{
Hash: "123",
Verbose: btcjson.Bool(true),
VerboseTx: btcjson.Bool(false),
Verbosity: btcjson.Int(1),
},
},
{
name: "getblock required optional2",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("getblock", "123", true, true)
return btcjson.NewCmd("getblock", "123", btcjson.Int(2))
},
staticCmd: func() interface{} {
return btcjson.NewGetBlockCmd("123", btcjson.Bool(true), btcjson.Bool(true))
return btcjson.NewGetBlockCmd("123", btcjson.Int(2))
},
marshalled: `{"jsonrpc":"1.0","method":"getblock","params":["123",true,true],"id":1}`,
marshalled: `{"jsonrpc":"1.0","method":"getblock","params":["123",2],"id":1}`,
unmarshalled: &btcjson.GetBlockCmd{
Hash: "123",
Verbose: btcjson.Bool(true),
VerboseTx: btcjson.Bool(true),
Verbosity: btcjson.Int(2),
},
},
{
@ -210,6 +398,28 @@ func TestChainSvrCmds(t *testing.T) {
marshalled: `{"jsonrpc":"1.0","method":"getblockcount","params":[],"id":1}`,
unmarshalled: &btcjson.GetBlockCountCmd{},
},
{
name: "getblockfilter",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("getblockfilter", "0000afaf")
},
staticCmd: func() interface{} {
return btcjson.NewGetBlockFilterCmd("0000afaf", nil)
},
marshalled: `{"jsonrpc":"1.0","method":"getblockfilter","params":["0000afaf"],"id":1}`,
unmarshalled: &btcjson.GetBlockFilterCmd{BlockHash: "0000afaf", FilterType: nil},
},
{
name: "getblockfilter optional filtertype",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("getblockfilter", "0000afaf", "basic")
},
staticCmd: func() interface{} {
return btcjson.NewGetBlockFilterCmd("0000afaf", btcjson.NewFilterTypeName(btcjson.FilterTypeBasic))
},
marshalled: `{"jsonrpc":"1.0","method":"getblockfilter","params":["0000afaf","basic"],"id":1}`,
unmarshalled: &btcjson.GetBlockFilterCmd{BlockHash: "0000afaf", FilterType: btcjson.NewFilterTypeName(btcjson.FilterTypeBasic)},
},
{
name: "getblockhash",
newCmd: func() (interface{}, error) {
@ -235,6 +445,60 @@ func TestChainSvrCmds(t *testing.T) {
Verbose: btcjson.Bool(true),
},
},
{
name: "getblockstats height",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("getblockstats", btcjson.HashOrHeight{Value: 123})
},
staticCmd: func() interface{} {
return btcjson.NewGetBlockStatsCmd(btcjson.HashOrHeight{Value: 123}, nil)
},
marshalled: `{"jsonrpc":"1.0","method":"getblockstats","params":[123],"id":1}`,
unmarshalled: &btcjson.GetBlockStatsCmd{
HashOrHeight: btcjson.HashOrHeight{Value: 123},
},
},
{
name: "getblockstats hash",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("getblockstats", btcjson.HashOrHeight{Value: "deadbeef"})
},
staticCmd: func() interface{} {
return btcjson.NewGetBlockStatsCmd(btcjson.HashOrHeight{Value: "deadbeef"}, nil)
},
marshalled: `{"jsonrpc":"1.0","method":"getblockstats","params":["deadbeef"],"id":1}`,
unmarshalled: &btcjson.GetBlockStatsCmd{
HashOrHeight: btcjson.HashOrHeight{Value: "deadbeef"},
},
},
{
name: "getblockstats height optional stats",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("getblockstats", btcjson.HashOrHeight{Value: 123}, []string{"avgfee", "maxfee"})
},
staticCmd: func() interface{} {
return btcjson.NewGetBlockStatsCmd(btcjson.HashOrHeight{Value: 123}, &[]string{"avgfee", "maxfee"})
},
marshalled: `{"jsonrpc":"1.0","method":"getblockstats","params":[123,["avgfee","maxfee"]],"id":1}`,
unmarshalled: &btcjson.GetBlockStatsCmd{
HashOrHeight: btcjson.HashOrHeight{Value: 123},
Stats: &[]string{"avgfee", "maxfee"},
},
},
{
name: "getblockstats hash optional stats",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("getblockstats", btcjson.HashOrHeight{Value: "deadbeef"}, []string{"avgfee", "maxfee"})
},
staticCmd: func() interface{} {
return btcjson.NewGetBlockStatsCmd(btcjson.HashOrHeight{Value: "deadbeef"}, &[]string{"avgfee", "maxfee"})
},
marshalled: `{"jsonrpc":"1.0","method":"getblockstats","params":["deadbeef",["avgfee","maxfee"]],"id":1}`,
unmarshalled: &btcjson.GetBlockStatsCmd{
HashOrHeight: btcjson.HashOrHeight{Value: "deadbeef"},
Stats: &[]string{"avgfee", "maxfee"},
},
},
{
name: "getblocktemplate",
newCmd: func() (interface{}, error) {
@ -361,6 +625,44 @@ func TestChainSvrCmds(t *testing.T) {
marshalled: `{"jsonrpc":"1.0","method":"getchaintips","params":[],"id":1}`,
unmarshalled: &btcjson.GetChainTipsCmd{},
},
{
name: "getchaintxstats",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("getchaintxstats")
},
staticCmd: func() interface{} {
return btcjson.NewGetChainTxStatsCmd(nil, nil)
},
marshalled: `{"jsonrpc":"1.0","method":"getchaintxstats","params":[],"id":1}`,
unmarshalled: &btcjson.GetChainTxStatsCmd{},
},
{
name: "getchaintxstats optional nblocks",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("getchaintxstats", btcjson.Int32(1000))
},
staticCmd: func() interface{} {
return btcjson.NewGetChainTxStatsCmd(btcjson.Int32(1000), nil)
},
marshalled: `{"jsonrpc":"1.0","method":"getchaintxstats","params":[1000],"id":1}`,
unmarshalled: &btcjson.GetChainTxStatsCmd{
NBlocks: btcjson.Int32(1000),
},
},
{
name: "getchaintxstats optional nblocks and blockhash",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("getchaintxstats", btcjson.Int32(1000), btcjson.String("0000afaf"))
},
staticCmd: func() interface{} {
return btcjson.NewGetChainTxStatsCmd(btcjson.Int32(1000), btcjson.String("0000afaf"))
},
marshalled: `{"jsonrpc":"1.0","method":"getchaintxstats","params":[1000,"0000afaf"],"id":1}`,
unmarshalled: &btcjson.GetChainTxStatsCmd{
NBlocks: btcjson.Int32(1000),
BlockHash: btcjson.String("0000afaf"),
},
},
{
name: "getconnectioncount",
newCmd: func() (interface{}, error) {
@ -515,6 +817,32 @@ func TestChainSvrCmds(t *testing.T) {
Height: btcjson.Int(123),
},
},
{
name: "getnodeaddresses",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("getnodeaddresses")
},
staticCmd: func() interface{} {
return btcjson.NewGetNodeAddressesCmd(nil)
},
marshalled: `{"jsonrpc":"1.0","method":"getnodeaddresses","params":[],"id":1}`,
unmarshalled: &btcjson.GetNodeAddressesCmd{
Count: btcjson.Int32(1),
},
},
{
name: "getnodeaddresses optional",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("getnodeaddresses", 10)
},
staticCmd: func() interface{} {
return btcjson.NewGetNodeAddressesCmd(btcjson.Int32(10))
},
marshalled: `{"jsonrpc":"1.0","method":"getnodeaddresses","params":[10],"id":1}`,
unmarshalled: &btcjson.GetNodeAddressesCmd{
Count: btcjson.Int32(10),
},
},
{
name: "getpeerinfo",
newCmd: func() (interface{}, error) {
@ -563,21 +891,21 @@ func TestChainSvrCmds(t *testing.T) {
marshalled: `{"jsonrpc":"1.0","method":"getrawtransaction","params":["123"],"id":1}`,
unmarshalled: &btcjson.GetRawTransactionCmd{
Txid: "123",
Verbose: btcjson.Int(0),
Verbose: btcjson.Bool(false),
},
},
{
name: "getrawtransaction optional",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("getrawtransaction", "123", 1)
return btcjson.NewCmd("getrawtransaction", "123", true)
},
staticCmd: func() interface{} {
return btcjson.NewGetRawTransactionCmd("123", btcjson.Int(1))
return btcjson.NewGetRawTransactionCmd("123", btcjson.Bool(true))
},
marshalled: `{"jsonrpc":"1.0","method":"getrawtransaction","params":["123",1],"id":1}`,
marshalled: `{"jsonrpc":"1.0","method":"getrawtransaction","params":["123",true],"id":1}`,
unmarshalled: &btcjson.GetRawTransactionCmd{
Txid: "123",
Verbose: btcjson.Int(1),
Verbose: btcjson.Bool(true),
},
},
{
@ -892,32 +1220,72 @@ func TestChainSvrCmds(t *testing.T) {
FilterAddrs: &[]string{"1Address"},
},
},
{
name: "searchrawtransactions",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("searchrawtransactions", "1Address", 0, 5, 10, "null", true, []string{"1Address"})
},
staticCmd: func() interface{} {
return btcjson.NewSearchRawTransactionsCmd("1Address",
btcjson.Int(0), btcjson.Int(5), btcjson.Int(10), nil, btcjson.Bool(true), &[]string{"1Address"})
},
marshalled: `{"jsonrpc":"1.0","method":"searchrawtransactions","params":["1Address",0,5,10,null,true,["1Address"]],"id":1}`,
unmarshalled: &btcjson.SearchRawTransactionsCmd{
Address: "1Address",
Verbose: btcjson.Int(0),
Skip: btcjson.Int(5),
Count: btcjson.Int(10),
VinExtra: nil,
Reverse: btcjson.Bool(true),
FilterAddrs: &[]string{"1Address"},
},
},
{
name: "sendrawtransaction",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("sendrawtransaction", "1122")
return btcjson.NewCmd("sendrawtransaction", "1122", &btcjson.AllowHighFeesOrMaxFeeRate{})
},
staticCmd: func() interface{} {
return btcjson.NewSendRawTransactionCmd("1122", nil)
},
marshalled: `{"jsonrpc":"1.0","method":"sendrawtransaction","params":["1122"],"id":1}`,
marshalled: `{"jsonrpc":"1.0","method":"sendrawtransaction","params":["1122",false],"id":1}`,
unmarshalled: &btcjson.SendRawTransactionCmd{
HexTx: "1122",
AllowHighFees: btcjson.Bool(false),
HexTx: "1122",
FeeSetting: &btcjson.AllowHighFeesOrMaxFeeRate{
Value: btcjson.Bool(false),
},
},
},
{
name: "sendrawtransaction optional",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("sendrawtransaction", "1122", false)
return btcjson.NewCmd("sendrawtransaction", "1122", &btcjson.AllowHighFeesOrMaxFeeRate{Value: btcjson.Bool(false)})
},
staticCmd: func() interface{} {
return btcjson.NewSendRawTransactionCmd("1122", btcjson.Bool(false))
},
marshalled: `{"jsonrpc":"1.0","method":"sendrawtransaction","params":["1122",false],"id":1}`,
unmarshalled: &btcjson.SendRawTransactionCmd{
HexTx: "1122",
AllowHighFees: btcjson.Bool(false),
HexTx: "1122",
FeeSetting: &btcjson.AllowHighFeesOrMaxFeeRate{
Value: btcjson.Bool(false),
},
},
},
{
name: "sendrawtransaction optional, bitcoind >= 0.19.0",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("sendrawtransaction", "1122", &btcjson.AllowHighFeesOrMaxFeeRate{Value: btcjson.Int32(1234)})
},
staticCmd: func() interface{} {
return btcjson.NewBitcoindSendRawTransactionCmd("1122", 1234)
},
marshalled: `{"jsonrpc":"1.0","method":"sendrawtransaction","params":["1122",1234],"id":1}`,
unmarshalled: &btcjson.SendRawTransactionCmd{
HexTx: "1122",
FeeSetting: &btcjson.AllowHighFeesOrMaxFeeRate{
Value: btcjson.Int32(1234),
},
},
},
{
@ -948,6 +1316,20 @@ func TestChainSvrCmds(t *testing.T) {
GenProcLimit: btcjson.Int(6),
},
},
{
name: "signmessagewithprivkey",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("signmessagewithprivkey", "5Hue", "Hey")
},
staticCmd: func() interface{} {
return btcjson.NewSignMessageWithPrivKey("5Hue", "Hey")
},
marshalled: `{"jsonrpc":"1.0","method":"signmessagewithprivkey","params":["5Hue","Hey"],"id":1}`,
unmarshalled: &btcjson.SignMessageWithPrivKeyCmd{
PrivKey: "5Hue",
Message: "Hey",
},
},
{
name: "stop",
newCmd: func() (interface{}, error) {
@ -1086,13 +1468,24 @@ func TestChainSvrCmds(t *testing.T) {
Proof: "test",
},
},
{
name: "getdescriptorinfo",
newCmd: func() (interface{}, error) {
return btcjson.NewCmd("getdescriptorinfo", "123")
},
staticCmd: func() interface{} {
return btcjson.NewGetDescriptorInfoCmd("123")
},
marshalled: `{"jsonrpc":"1.0","method":"getdescriptorinfo","params":["123"],"id":1}`,
unmarshalled: &btcjson.GetDescriptorInfoCmd{Descriptor: "123"},
},
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
// Marshal the command as created by the new static command
// creation function.
marshalled, err := btcjson.MarshalCmd(testID, test.staticCmd())
marshalled, err := btcjson.MarshalCmd(btcjson.RpcVersion1, testID, test.staticCmd())
if err != nil {
t.Errorf("MarshalCmd #%d (%s) unexpected error: %v", i,
test.name, err)
@ -1117,7 +1510,7 @@ func TestChainSvrCmds(t *testing.T) {
// Marshal the command as created by the generic new command
// creation function.
marshalled, err = btcjson.MarshalCmd(testID, cmd)
marshalled, err = btcjson.MarshalCmd(btcjson.RpcVersion1, testID, cmd)
if err != nil {
t.Errorf("MarshalCmd #%d (%s) unexpected error: %v", i,
test.name, err)

View file

@ -4,7 +4,16 @@
package btcjson
import "encoding/json"
import (
"bytes"
"encoding/hex"
"encoding/json"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/wire"
btcutil "github.com/lbryio/lbcutil"
)
// GetBlockHeaderVerboseResult models the data from the getblockheader command when
// the verbose flag is set. When the verbose flag is not set, getblockheader
@ -16,6 +25,7 @@ type GetBlockHeaderVerboseResult struct {
Version int32 `json:"version"`
VersionHex string `json:"versionHex"`
MerkleRoot string `json:"merkleroot"`
ClaimTrie string `json:"nameclaimroot,omitempty"`
Time int64 `json:"time"`
Nonce uint64 `json:"nonce"`
Bits string `json:"bits"`
@ -24,27 +34,95 @@ type GetBlockHeaderVerboseResult struct {
NextHash string `json:"nextblockhash,omitempty"`
}
// GetBlockStatsResult models the data from the getblockstats command.
// Pointers are used instead of values to allow for optional fields.
type GetBlockStatsResult struct {
AverageFee *int64 `json:"avgfee,omitempty"`
AverageFeeRate *int64 `json:"avgfeerate,omitempty"`
AverageTxSize *int64 `json:"avgtxsize,omitempty"`
FeeratePercentiles *[]int64 `json:"feerate_percentiles,omitempty"`
Hash *string `json:"blockhash,omitempty"`
Height *int64 `json:"height,omitempty"`
Ins *int64 `json:"ins,omitempty"`
MaxFee *int64 `json:"maxfee,omitempty"`
MaxFeeRate *int64 `json:"maxfeerate,omitempty"`
MaxTxSize *int64 `json:"maxtxsize,omitempty"`
MedianFee *int64 `json:"medianfee,omitempty"`
MedianTime *int64 `json:"mediantime,omitempty"`
MedianTxSize *int64 `json:"mediantxsize,omitempty"`
MinFee *int64 `json:"minfee,omitempty"`
MinFeeRate *int64 `json:"minfeerate,omitempty"`
MinTxSize *int64 `json:"mintxsize,omitempty"`
Outs *int64 `json:"outs,omitempty"`
SegWitTotalSize *int64 `json:"swtotal_size,omitempty"`
SegWitTotalWeight *int64 `json:"swtotal_weight,omitempty"`
SegWitTxs *int64 `json:"swtxs,omitempty"`
Subsidy *int64 `json:"subsidy,omitempty"`
Time *int64 `json:"time,omitempty"`
TotalOut *int64 `json:"total_out,omitempty"`
TotalSize *int64 `json:"total_size,omitempty"`
TotalWeight *int64 `json:"total_weight,omitempty"`
TotalFee *int64 `json:"totalfee,omitempty"`
Txs *int64 `json:"txs,omitempty"`
UTXOIncrease *int64 `json:"utxo_increase,omitempty"`
UTXOSizeIncrease *int64 `json:"utxo_size_inc,omitempty"`
}
type GetBlockVerboseResultBase struct {
Hash string `json:"hash"`
Confirmations int64 `json:"confirmations"`
StrippedSize int32 `json:"strippedsize"`
Size int32 `json:"size"`
Weight int32 `json:"weight"`
Height int64 `json:"height"`
Version int32 `json:"version"`
VersionHex string `json:"versionHex"`
MerkleRoot string `json:"merkleroot"`
Time int64 `json:"time"`
MedianTime int64 `json:"mediantime"`
Nonce uint32 `json:"nonce"`
Bits string `json:"bits"`
Difficulty float64 `json:"difficulty"`
ChainWork string `json:"chainwork"`
PreviousHash string `json:"previousblockhash,omitempty"`
NextHash string `json:"nextblockhash,omitempty"`
ClaimTrie string `json:"nameclaimroot,omitempty"`
TxCount int `json:"nTx"` // For backwards compatibility only
}
// GetBlockVerboseResult models the data from the getblock command when the
// verbose flag is set. When the verbose flag is not set, getblock returns a
// hex-encoded string.
// verbose flag is set to 1. When the verbose flag is set to 0, getblock returns a
// hex-encoded string. When the verbose flag is set to 1, getblock returns an object
// whose tx field is an array of transaction hashes. When the verbose flag is set to 2,
// getblock returns an object whose tx field is an array of raw transactions.
// Use GetBlockVerboseTxResult to unmarshal data received from passing verbose=2 to getblock.
type GetBlockVerboseResult struct {
Hash string `json:"hash"`
Confirmations int64 `json:"confirmations"`
StrippedSize int32 `json:"strippedsize"`
Size int32 `json:"size"`
Weight int32 `json:"weight"`
Height int64 `json:"height"`
Version int32 `json:"version"`
VersionHex string `json:"versionHex"`
MerkleRoot string `json:"merkleroot"`
Tx []string `json:"tx,omitempty"`
RawTx []TxRawResult `json:"rawtx,omitempty"`
Time int64 `json:"time"`
Nonce uint32 `json:"nonce"`
Bits string `json:"bits"`
Difficulty float64 `json:"difficulty"`
PreviousHash string `json:"previousblockhash"`
NextHash string `json:"nextblockhash,omitempty"`
GetBlockVerboseResultBase
Tx []string `json:"tx"`
}
// GetBlockVerboseTxResult models the data from the getblock command when the
// verbose flag is set to 2. When the verbose flag is set to 0, getblock returns a
// hex-encoded string. When the verbose flag is set to 1, getblock returns an object
// whose tx field is an array of transaction hashes. When the verbose flag is set to 2,
// getblock returns an object whose tx field is an array of raw transactions.
// Use GetBlockVerboseResult to unmarshal data received from passing verbose=1 to getblock.
type GetBlockVerboseTxResult struct {
GetBlockVerboseResultBase
Tx []TxRawResult `json:"tx"`
}
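As the comments above suggest, a getblock verbosity=2 response unmarshals into GetBlockVerboseTxResult; a small sketch with hypothetical values, assuming the existing TxRawResult type exposes Txid as it does elsewhere in this package:

package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/lbryio/lbcd/btcjson"
)

func main() {
	// A heavily trimmed getblock verbosity=2 response with hypothetical values.
	raw := []byte(`{"hash":"deadbeef","height":123,"nTx":1,"tx":[{"txid":"abcd"}]}`)

	var block btcjson.GetBlockVerboseTxResult
	if err := json.Unmarshal(raw, &block); err != nil {
		log.Fatal(err)
	}
	fmt.Println(block.Hash, block.Height, block.TxCount, block.Tx[0].Txid)
}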
// GetChainTxStatsResult models the data from the getchaintxstats command.
type GetChainTxStatsResult struct {
Time int64 `json:"time"`
TxCount int64 `json:"txcount"`
WindowFinalBlockHash string `json:"window_final_block_hash"`
WindowFinalBlockHeight int32 `json:"window_final_block_height"`
WindowBlockCount int32 `json:"window_block_count"`
WindowTxCount int32 `json:"window_tx_count"`
WindowInterval int32 `json:"window_interval"`
TxRate float64 `json:"txrate"`
}
// CreateMultiSigResult models the data returned from the createmultisig
@ -140,18 +218,28 @@ type GetBlockChainInfoResult struct {
Difficulty float64 `json:"difficulty"`
MedianTime int64 `json:"mediantime"`
VerificationProgress float64 `json:"verificationprogress,omitempty"`
InitialBlockDownload bool `json:"initialblockdownload,omitempty"`
Pruned bool `json:"pruned"`
PruneHeight int32 `json:"pruneheight,omitempty"`
ChainWork string `json:"chainwork,omitempty"`
SizeOnDisk int64 `json:"size_on_disk,omitempty"`
*SoftForks
*UnifiedSoftForks
}
// GetBlockFilterResult models the data returned from the getblockfilter
// command.
type GetBlockFilterResult struct {
Filter string `json:"filter"` // the hex-encoded filter data
Header string `json:"header"` // the hex-encoded filter header
}
// GetBlockTemplateResultTx models the transactions field of the
// getblocktemplate command.
type GetBlockTemplateResultTx struct {
Data string `json:"data"`
Hash string `json:"hash"`
TxID string `json:"txid"`
Depends []int64 `json:"depends"`
Fee int64 `json:"fee"`
SigOps int64 `json:"sigops"`
@ -204,32 +292,62 @@ type GetBlockTemplateResult struct {
// Block proposal from BIP 0023.
Capabilities []string `json:"capabilities,omitempty"`
RejectReasion string `json:"reject-reason,omitempty"`
ClaimTrieHash string `json:"claimtrie"`
Rules []string `json:"rules,omitempty"`
}
// MempoolFees models the fees field of the data returned from the
// getmempoolentry command.
type MempoolFees struct {
Base float64 `json:"base"`
Modified float64 `json:"modified"`
Ancestor float64 `json:"ancestor"`
Descendant float64 `json:"descendant"`
}
// GetMempoolEntryResult models the data returned from the getmempoolentry
// command.
type GetMempoolEntryResult struct {
Size int32 `json:"size"`
Fee float64 `json:"fee"`
ModifiedFee float64 `json:"modifiedfee"`
Time int64 `json:"time"`
Height int64 `json:"height"`
StartingPriority float64 `json:"startingpriority"`
CurrentPriority float64 `json:"currentpriority"`
DescendantCount int64 `json:"descendantcount"`
DescendantSize int64 `json:"descendantsize"`
DescendantFees float64 `json:"descendantfees"`
AncestorCount int64 `json:"ancestorcount"`
AncestorSize int64 `json:"ancestorsize"`
AncestorFees float64 `json:"ancestorfees"`
Depends []string `json:"depends"`
VSize int32 `json:"vsize"`
Size int32 `json:"size"`
Weight int64 `json:"weight"`
Fee float64 `json:"fee"`
ModifiedFee float64 `json:"modifiedfee"`
Time int64 `json:"time"`
Height int64 `json:"height"`
DescendantCount int64 `json:"descendantcount"`
DescendantSize int64 `json:"descendantsize"`
DescendantFees float64 `json:"descendantfees"`
AncestorCount int64 `json:"ancestorcount"`
AncestorSize int64 `json:"ancestorsize"`
AncestorFees float64 `json:"ancestorfees"`
WTxId string `json:"wtxid"`
Fees MempoolFees `json:"fees"`
Depends []string `json:"depends"`
SpentBy []string `json:"spentby"`
}
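// Illustrative sketch (not from the diffed source; the JSON literal is made up
// and "encoding/json" must be imported): decoding a getmempoolentry reply and
// reading the nested fees object introduced above.
func exampleMempoolEntry() (float64, error) {
    raw := []byte(`{"vsize":144,"weight":573,"time":1660000000,"height":1200000,` +
        `"fees":{"base":0.00001,"modified":0.00001,"ancestor":0.00001,"descendant":0.00001},` +
        `"depends":[],"spentby":[]}`)
    var entry GetMempoolEntryResult
    if err := json.Unmarshal(raw, &entry); err != nil {
        return 0, err
    }
    return entry.Fees.Base, nil // base fee in LBC
}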
// GetChainTipsResult models the data returned from the getchaintips command.
type GetChainTipsResult struct {
Height int64 `json:"height"`
Hash string `json:"hash"`
BranchLen int64 `json:"branchlen"`
Status string `json:"status"`
}
// GetMempoolInfoResult models the data returned from the getmempoolinfo
// command.
type GetMempoolInfoResult struct {
Size int64 `json:"size"`
Bytes int64 `json:"bytes"`
Size int64 `json:"size"` // Current tx count
Bytes int64 `json:"bytes"` // Sum of all virtual transaction sizes as defined in BIP 141. Differs from actual serialized size because witness data is discounted
Usage int64 `json:"usage"` // Total memory usage for the mempool
TotalFee float64 `json:"total_fee"` // Total fees for the mempool in LBC, ignoring modified fees through prioritizetransaction
MemPoolMinFee float64 `json:"mempoolminfee"` // Minimum fee rate in LBC/kvB for tx to be accepted. Is the maximum of minrelaytxfee and minimum mempool fee
MinRelayTxFee float64 `json:"minrelaytxfee"` // Current minimum relay fee for transactions
UnbroadcastCount int64 `json:"unbroadcastcount"` // Current number of transactions that haven't passed initial broadcast yet
}
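// Illustrative sketch (not from the diffed source): mempoolminfee is the
// larger of minrelaytxfee and the dynamic mempool minimum, so a transaction
// paying at least that rate (in LBC/kvB) should clear the fee check.
func meetsMempoolMinFee(info *GetMempoolInfoResult, feeRatePerKvB float64) bool {
    return feeRatePerKvB >= info.MemPoolMinFee
}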
// NetworksResult models the networks data from the getnetworkinfo command.
@ -267,6 +385,16 @@ type GetNetworkInfoResult struct {
Warnings string `json:"warnings"`
}
// GetNodeAddressesResult models the data returned from the getnodeaddresses
// command.
type GetNodeAddressesResult struct {
// Timestamp in seconds since epoch (Jan 1 1970 GMT) keeping track of when the node was last seen
Time int64 `json:"time"`
Services uint64 `json:"services"` // The services offered
Address string `json:"address"` // The address of the node
Port uint16 `json:"port"` // The port of the node
}
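// Illustrative sketch (not from the diffed source; "time" must be imported):
// the time field is a Unix timestamp in seconds, so it converts directly.
func nodeLastSeen(r *GetNodeAddressesResult) time.Time {
    return time.Unix(r.Time, 0)
}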
// GetPeerInfoResult models the data returned from the getpeerinfo command.
type GetPeerInfoResult struct {
ID int32 `json:"id"`
@ -314,6 +442,9 @@ type ScriptPubKeyResult struct {
Hex string `json:"hex,omitempty"`
ReqSigs int32 `json:"reqSigs,omitempty"`
Type string `json:"type"`
SubType string `json:"subtype"`
IsClaim bool `json:"isclaim"`
IsSupport bool `json:"issupport"`
Addresses []string `json:"addresses,omitempty"`
}
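// Illustrative sketch (not from the diffed source; the JSON literal and its
// field values are made up, and "encoding/json" must be imported): the new
// subtype/isclaim/issupport fields distinguish claim and support outputs from
// plain payments.
func exampleClaimScriptPubKey() (bool, error) {
    raw := []byte(`{"hex":"","type":"nonstandard","subtype":"pubkeyhash",` +
        `"isclaim":true,"issupport":false}`)
    var spk ScriptPubKeyResult
    if err := json.Unmarshal(raw, &spk); err != nil {
        return false, err
    }
    return spk.IsClaim, nil
}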
@ -326,6 +457,64 @@ type GetTxOutResult struct {
Coinbase bool `json:"coinbase"`
}
// GetTxOutSetInfoResult models the data from the gettxoutsetinfo command.
type GetTxOutSetInfoResult struct {
Height int64 `json:"height"`
BestBlock chainhash.Hash `json:"bestblock"`
Transactions int64 `json:"transactions"`
TxOuts int64 `json:"txouts"`
BogoSize int64 `json:"bogosize"`
HashSerialized chainhash.Hash `json:"hash_serialized_2"`
DiskSize int64 `json:"disk_size"`
TotalAmount btcutil.Amount `json:"total_amount"`
}
// UnmarshalJSON unmarshals the result of the gettxoutsetinfo JSON-RPC call
func (g *GetTxOutSetInfoResult) UnmarshalJSON(data []byte) error {
// Step 1: Create type aliases of the original struct.
type Alias GetTxOutSetInfoResult
// Step 2: Create an anonymous struct with raw replacements for the special
// fields.
aux := &struct {
BestBlock string `json:"bestblock"`
HashSerialized string `json:"hash_serialized_2"`
TotalAmount float64 `json:"total_amount"`
*Alias
}{
Alias: (*Alias)(g),
}
// Step 3: Unmarshal the data into the anonymous struct.
if err := json.Unmarshal(data, &aux); err != nil {
return err
}
// Step 4: Convert the raw fields to the desired types
blockHash, err := chainhash.NewHashFromStr(aux.BestBlock)
if err != nil {
return err
}
g.BestBlock = *blockHash
serializedHash, err := chainhash.NewHashFromStr(aux.HashSerialized)
if err != nil {
return err
}
g.HashSerialized = *serializedHash
amount, err := btcutil.NewAmount(aux.TotalAmount)
if err != nil {
return err
}
g.TotalAmount = amount
return nil
}
// GetNetTotalsResult models the data returned from the getnettotals command.
type GetNetTotalsResult struct {
TotalBytesRecv uint64 `json:"totalbytesrecv"`
@ -414,6 +603,8 @@ func (v *Vin) MarshalJSON() ([]byte, error) {
type PrevOut struct {
Addresses []string `json:"addresses,omitempty"`
Value float64 `json:"value"`
IsClaim bool `json:"isclaim"`
IsSupport bool `json:"issupport"`
}
// VinPrevOut is like Vin except it includes PrevOut. It is used by searchrawtransaction
@ -504,8 +695,8 @@ type GetMiningInfoResult struct {
Errors string `json:"errors"`
Generate bool `json:"generate"`
GenProcLimit int32 `json:"genproclimit"`
HashesPerSec int64 `json:"hashespersec"`
NetworkHashPS int64 `json:"networkhashps"`
HashesPerSec float64 `json:"hashespersec"`
NetworkHashPS float64 `json:"networkhashps"`
PooledTx uint64 `json:"pooledtx"`
TestNet bool `json:"testnet"`
}
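// Illustrative sketch (not from the diffed source; "encoding/json" must be
// imported): lbcd may report networkhashps in scientific notation, which only
// unmarshals cleanly into a float64 field, hence the int64 -> float64 change
// above.
func exampleNetworkHashPS() (float64, error) {
    var info GetMiningInfoResult
    err := json.Unmarshal([]byte(`{"networkhashps": 8.9790618491361e+13}`), &info)
    return info.NetworkHashPS, err
}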
@ -532,6 +723,15 @@ type InfoChainResult struct {
Errors string `json:"errors"`
}
// ListBannedResult models the data returned from the listbanned command.
type ListBannedResult struct {
Address string `json:"address"`
BanCreated int64 `json:"ban_created"`
BannedUntil int64 `json:"banned_until"`
BanDuration int64 `json:"ban_duration"`
TimeRemaining int64 `json:"time_remaining"`
}
// TxRawResult models the data from the getrawtransaction command.
type TxRawResult struct {
Hex string `json:"hex"`
@ -540,7 +740,7 @@ type TxRawResult struct {
Size int32 `json:"size,omitempty"`
Vsize int32 `json:"vsize,omitempty"`
Weight int32 `json:"weight,omitempty"`
Version int32 `json:"version"`
Version uint32 `json:"version"`
LockTime uint32 `json:"locktime"`
Vin []Vin `json:"vin"`
Vout []Vout `json:"vout"`
@ -580,7 +780,93 @@ type TxRawDecodeResult struct {
// ValidateAddressChainResult models the data returned by the chain server
// validateaddress command.
//
// Compared to the Bitcoin Core version, this struct lacks the scriptPubKey
// field since it requires wallet access, which is outside the scope of btcd.
// Ref: https://bitcoincore.org/en/doc/0.20.0/rpc/util/validateaddress/
type ValidateAddressChainResult struct {
IsValid bool `json:"isvalid"`
Address string `json:"address,omitempty"`
IsValid bool `json:"isvalid"`
Address string `json:"address,omitempty"`
IsScript *bool `json:"isscript,omitempty"`
IsWitness *bool `json:"iswitness,omitempty"`
WitnessVersion *int32 `json:"witness_version,omitempty"`
WitnessProgram *string `json:"witness_program,omitempty"`
}
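// Illustrative sketch (not from the diffed source): the witness-related fields
// are pointers so they can be omitted for invalid or non-witness addresses;
// nil-check before dereferencing.
func witnessVersionOrDefault(r *ValidateAddressChainResult) int32 {
    if r.WitnessVersion == nil {
        return -1 // no witness program reported
    }
    return *r.WitnessVersion
}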
// EstimateSmartFeeResult models the data returned by the chain server
// estimatesmartfee command.
type EstimateSmartFeeResult struct {
FeeRate *float64 `json:"feerate,omitempty"`
Errors []string `json:"errors,omitempty"`
Blocks int64 `json:"blocks"`
}
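// Illustrative sketch (not from the diffed source): feerate is omitted when
// the node cannot produce an estimate, so the pointer must be checked
// alongside the errors slice.
func smartFeeOrFallback(r *EstimateSmartFeeResult, fallback float64) float64 {
    if r.FeeRate == nil || len(r.Errors) > 0 {
        return fallback
    }
    return *r.FeeRate
}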
var _ json.Unmarshaler = &FundRawTransactionResult{}
type rawFundRawTransactionResult struct {
Transaction string `json:"hex"`
Fee float64 `json:"fee"`
ChangePosition int `json:"changepos"`
}
// FundRawTransactionResult is the result of the fundrawtransaction JSON-RPC call
type FundRawTransactionResult struct {
Transaction *wire.MsgTx
Fee btcutil.Amount
ChangePosition int // the position of the added change output, or -1
}
// UnmarshalJSON unmarshals the result of the fundrawtransaction JSON-RPC call
func (f *FundRawTransactionResult) UnmarshalJSON(data []byte) error {
var rawRes rawFundRawTransactionResult
if err := json.Unmarshal(data, &rawRes); err != nil {
return err
}
txBytes, err := hex.DecodeString(rawRes.Transaction)
if err != nil {
return err
}
var msgTx wire.MsgTx
// Try the witness-aware deserialization first; if the transaction does not
// parse that way, fall back to the legacy (no-witness) encoding.
witnessErr := msgTx.Deserialize(bytes.NewReader(txBytes))
if witnessErr != nil {
legacyErr := msgTx.DeserializeNoWitness(bytes.NewReader(txBytes))
if legacyErr != nil {
return legacyErr
}
}
fee, err := btcutil.NewAmount(rawRes.Fee)
if err != nil {
return err
}
f.Transaction = &msgTx
f.Fee = fee
f.ChangePosition = rawRes.ChangePosition
return nil
}
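// Illustrative sketch (not from the diffed source; "encoding/json" must be
// imported and rawReply is assumed to hold the JSON result object of a
// fundrawtransaction call): because FundRawTransactionResult implements
// json.Unmarshaler, the reply decodes straight into the wire/btcutil types.
func decodeFundRawTransaction(rawReply []byte) (*FundRawTransactionResult, error) {
    var res FundRawTransactionResult
    if err := json.Unmarshal(rawReply, &res); err != nil {
        return nil, err
    }
    // res.Transaction is a *wire.MsgTx, res.Fee a btcutil.Amount, and
    // res.ChangePosition is -1 when no change output was added.
    return &res, nil
}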
// GetDescriptorInfoResult models the data from the getdescriptorinfo command.
type GetDescriptorInfoResult struct {
Descriptor string `json:"descriptor"` // descriptor in canonical form, without private keys
Checksum string `json:"checksum"` // checksum for the input descriptor
IsRange bool `json:"isrange"` // whether the descriptor is ranged
IsSolvable bool `json:"issolvable"` // whether the descriptor is solvable
HasPrivateKeys bool `json:"hasprivatekeys"` // whether the descriptor has at least one private key
}
// DeriveAddressesResult models the data from the deriveaddresses command.
type DeriveAddressesResult []string
// LoadWalletResult models the data from the loadwallet command
type LoadWalletResult struct {
Name string `json:"name"`
Warning string `json:"warning"`
}
// DumpWalletResult models the data from the dumpwallet command
type DumpWalletResult struct {
Filename string `json:"filename"`
}

View file

@ -6,9 +6,13 @@ package btcjson_test
import (
"encoding/json"
"reflect"
"testing"
"github.com/btcsuite/btcd/btcjson"
"github.com/davecgh/go-spew/spew"
"github.com/lbryio/lbcd/btcjson"
"github.com/lbryio/lbcd/chaincfg/chainhash"
btcutil "github.com/lbryio/lbcutil"
)
// TestChainSvrCustomResults ensures any results that have custom marshalling
@ -66,7 +70,7 @@ func TestChainSvrCustomResults(t *testing.T) {
},
Sequence: 4294967295,
},
expected: `{"txid":"123","vout":1,"scriptSig":{"asm":"0","hex":"00"},"prevOut":{"addresses":["addr1"],"value":0},"sequence":4294967295}`,
expected: `{"txid":"123","vout":1,"scriptSig":{"asm":"0","hex":"00"},"prevOut":{"addresses":["addr1"],"value":0,"isclaim":false,"issupport":false},"sequence":4294967295}`,
},
}
@ -86,3 +90,112 @@ func TestChainSvrCustomResults(t *testing.T) {
}
}
}
// TestGetTxOutSetInfoResult ensures that custom unmarshalling of
// GetTxOutSetInfoResult works as intended.
func TestGetTxOutSetInfoResult(t *testing.T) {
t.Parallel()
tests := []struct {
name string
result string
want btcjson.GetTxOutSetInfoResult
}{
{
name: "GetTxOutSetInfoResult - not scanning",
result: `{"height":123,"bestblock":"000000000000005f94116250e2407310463c0a7cf950f1af9ebe935b1c0687ab","transactions":1,"txouts":1,"bogosize":1,"hash_serialized_2":"9a0a561203ff052182993bc5d0cb2c620880bfafdbd80331f65fd9546c3e5c3e","disk_size":1,"total_amount":0.2}`,
want: btcjson.GetTxOutSetInfoResult{
Height: 123,
BestBlock: func() chainhash.Hash {
h, err := chainhash.NewHashFromStr("000000000000005f94116250e2407310463c0a7cf950f1af9ebe935b1c0687ab")
if err != nil {
panic(err)
}
return *h
}(),
Transactions: 1,
TxOuts: 1,
BogoSize: 1,
HashSerialized: func() chainhash.Hash {
h, err := chainhash.NewHashFromStr("9a0a561203ff052182993bc5d0cb2c620880bfafdbd80331f65fd9546c3e5c3e")
if err != nil {
panic(err)
}
return *h
}(),
DiskSize: 1,
TotalAmount: func() btcutil.Amount {
a, err := btcutil.NewAmount(0.2)
if err != nil {
panic(err)
}
return a
}(),
},
},
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
var out btcjson.GetTxOutSetInfoResult
err := json.Unmarshal([]byte(test.result), &out)
if err != nil {
t.Errorf("Test #%d (%s) unexpected error: %v", i,
test.name, err)
continue
}
if !reflect.DeepEqual(out, test.want) {
t.Errorf("Test #%d (%s) unexpected unmarshalled data - "+
"got %v, want %v", i, test.name, spew.Sdump(out),
spew.Sdump(test.want))
continue
}
}
}
// TestChainSvrMiningInfoResults ensures GetMiningInfoResults are unmarshalled correctly
func TestChainSvrMiningInfoResults(t *testing.T) {
t.Parallel()
tests := []struct {
name string
result string
expected btcjson.GetMiningInfoResult
}{
{
name: "mining info with integer networkhashps",
result: `{"networkhashps": 89790618491361}`,
expected: btcjson.GetMiningInfoResult{
NetworkHashPS: 89790618491361,
},
},
{
name: "mining info with scientific notation networkhashps",
result: `{"networkhashps": 8.9790618491361e+13}`,
expected: btcjson.GetMiningInfoResult{
NetworkHashPS: 89790618491361,
},
},
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
var miningInfoResult btcjson.GetMiningInfoResult
err := json.Unmarshal([]byte(test.result), &miningInfoResult)
if err != nil {
t.Errorf("Test #%d (%s) unexpected error: %v", i,
test.name, err)
continue
}
if miningInfoResult != test.expected {
t.Errorf("Test #%d (%s) unexpected marhsalled data - "+
"got %+v, want %+v", i, test.name, miningInfoResult,
test.expected)
continue
}
}
}

View file

@ -80,7 +80,7 @@ func NewStopNotifyNewTransactionsCmd() *StopNotifyNewTransactionsCmd {
// NotifyReceivedCmd defines the notifyreceived JSON-RPC command.
//
// NOTE: Deprecated. Use LoadTxFilterCmd instead.
// Deprecated: Use LoadTxFilterCmd instead.
type NotifyReceivedCmd struct {
Addresses []string
}
@ -88,7 +88,7 @@ type NotifyReceivedCmd struct {
// NewNotifyReceivedCmd returns a new instance which can be used to issue a
// notifyreceived JSON-RPC command.
//
// NOTE: Deprecated. Use NewLoadTxFilterCmd instead.
// Deprecated: Use NewLoadTxFilterCmd instead.
func NewNotifyReceivedCmd(addresses []string) *NotifyReceivedCmd {
return &NotifyReceivedCmd{
Addresses: addresses,
@ -128,7 +128,7 @@ func NewLoadTxFilterCmd(reload bool, addresses []string, outPoints []OutPoint) *
// NotifySpentCmd defines the notifyspent JSON-RPC command.
//
// NOTE: Deprecated. Use LoadTxFilterCmd instead.
// Deprecated: Use LoadTxFilterCmd instead.
type NotifySpentCmd struct {
OutPoints []OutPoint
}
@ -136,7 +136,7 @@ type NotifySpentCmd struct {
// NewNotifySpentCmd returns a new instance which can be used to issue a
// notifyspent JSON-RPC command.
//
// NOTE: Deprecated. Use NewLoadTxFilterCmd instead.
// Deprecated: Use NewLoadTxFilterCmd instead.
func NewNotifySpentCmd(outPoints []OutPoint) *NotifySpentCmd {
return &NotifySpentCmd{
OutPoints: outPoints,
@ -145,7 +145,7 @@ func NewNotifySpentCmd(outPoints []OutPoint) *NotifySpentCmd {
// StopNotifyReceivedCmd defines the stopnotifyreceived JSON-RPC command.
//
// NOTE: Deprecated. Use LoadTxFilterCmd instead.
// Deprecated: Use LoadTxFilterCmd instead.
type StopNotifyReceivedCmd struct {
Addresses []string
}
@ -153,7 +153,7 @@ type StopNotifyReceivedCmd struct {
// NewStopNotifyReceivedCmd returns a new instance which can be used to issue a
// stopnotifyreceived JSON-RPC command.
//
// NOTE: Deprecated. Use NewLoadTxFilterCmd instead.
// Deprecated: Use NewLoadTxFilterCmd instead.
func NewStopNotifyReceivedCmd(addresses []string) *StopNotifyReceivedCmd {
return &StopNotifyReceivedCmd{
Addresses: addresses,
@ -162,7 +162,7 @@ func NewStopNotifyReceivedCmd(addresses []string) *StopNotifyReceivedCmd {
// StopNotifySpentCmd defines the stopnotifyspent JSON-RPC command.
//
// NOTE: Deprecated. Use LoadTxFilterCmd instead.
// Deprecated: Use LoadTxFilterCmd instead.
type StopNotifySpentCmd struct {
OutPoints []OutPoint
}
@ -170,7 +170,7 @@ type StopNotifySpentCmd struct {
// NewStopNotifySpentCmd returns a new instance which can be used to issue a
// stopnotifyspent JSON-RPC command.
//
// NOTE: Deprecated. Use NewLoadTxFilterCmd instead.
// Deprecated: Use NewLoadTxFilterCmd instead.
func NewStopNotifySpentCmd(outPoints []OutPoint) *StopNotifySpentCmd {
return &StopNotifySpentCmd{
OutPoints: outPoints,
@ -179,7 +179,7 @@ func NewStopNotifySpentCmd(outPoints []OutPoint) *StopNotifySpentCmd {
// RescanCmd defines the rescan JSON-RPC command.
//
// NOTE: Deprecated. Use RescanBlocksCmd instead.
// Deprecated: Use RescanBlocksCmd instead.
type RescanCmd struct {
BeginBlock string
Addresses []string
@ -193,7 +193,7 @@ type RescanCmd struct {
// The parameters which are pointers indicate they are optional. Passing nil
// for optional parameters will use the default value.
//
// NOTE: Deprecated. Use NewRescanBlocksCmd instead.
// Deprecated: Use NewRescanBlocksCmd instead.
func NewRescanCmd(beginBlock string, addresses []string, outPoints []OutPoint, endBlock *string) *RescanCmd {
return &RescanCmd{
BeginBlock: beginBlock,

View file

@ -12,7 +12,7 @@ import (
"reflect"
"testing"
"github.com/btcsuite/btcd/btcjson"
"github.com/lbryio/lbcd/btcjson"
)
// TestChainSvrWsCmds tests all of the chain server websocket-specific commands
@ -233,7 +233,7 @@ func TestChainSvrWsCmds(t *testing.T) {
for i, test := range tests {
// Marshal the command as created by the new static command
// creation function.
marshalled, err := btcjson.MarshalCmd(testID, test.staticCmd())
marshalled, err := btcjson.MarshalCmd(btcjson.RpcVersion1, testID, test.staticCmd())
if err != nil {
t.Errorf("MarshalCmd #%d (%s) unexpected error: %v", i,
test.name, err)
@ -257,7 +257,7 @@ func TestChainSvrWsCmds(t *testing.T) {
// Marshal the command as created by the generic new command
// creation function.
marshalled, err = btcjson.MarshalCmd(testID, cmd)
marshalled, err = btcjson.MarshalCmd(btcjson.RpcVersion1, testID, cmd)
if err != nil {
t.Errorf("MarshalCmd #%d (%s) unexpected error: %v", i,
test.name, err)

View file

@ -12,14 +12,14 @@ const (
// BlockConnectedNtfnMethod is the legacy, deprecated method used for
// notifications from the chain server that a block has been connected.
//
// NOTE: Deprecated. Use FilteredBlockConnectedNtfnMethod instead.
// Deprecated: Use FilteredBlockConnectedNtfnMethod instead.
BlockConnectedNtfnMethod = "blockconnected"
// BlockDisconnectedNtfnMethod is the legacy, deprecated method used for
// notifications from the chain server that a block has been
// disconnected.
//
// NOTE: Deprecated. Use FilteredBlockDisconnectedNtfnMethod instead.
// Deprecated: Use FilteredBlockDisconnectedNtfnMethod instead.
BlockDisconnectedNtfnMethod = "blockdisconnected"
// FilteredBlockConnectedNtfnMethod is the new method used for
@ -35,7 +35,7 @@ const (
// notifications from the chain server that a transaction which pays to
// a registered address has been processed.
//
// NOTE: Deprecated. Use RelevantTxAcceptedNtfnMethod and
// Deprecated: Use RelevantTxAcceptedNtfnMethod and
// FilteredBlockConnectedNtfnMethod instead.
RecvTxNtfnMethod = "recvtx"
@ -43,7 +43,7 @@ const (
// notifications from the chain server that a transaction which spends a
// registered outpoint has been processed.
//
// NOTE: Deprecated. Use RelevantTxAcceptedNtfnMethod and
// Deprecated: Use RelevantTxAcceptedNtfnMethod and
// FilteredBlockConnectedNtfnMethod instead.
RedeemingTxNtfnMethod = "redeemingtx"
@ -51,14 +51,14 @@ const (
// notifications from the chain server that a legacy, deprecated rescan
// operation has finished.
//
// NOTE: Deprecated. Not used with rescanblocks command.
// Deprecated: Not used with rescanblocks command.
RescanFinishedNtfnMethod = "rescanfinished"
// RescanProgressNtfnMethod is the legacy, deprecated method used for
// notifications from the chain server that a legacy, deprecated rescan
// operation this is underway has made progress.
//
// NOTE: Deprecated. Not used with rescanblocks command.
// Deprecated: Not used with rescanblocks command.
RescanProgressNtfnMethod = "rescanprogress"
// TxAcceptedNtfnMethod is the method used for notifications from the
@ -79,7 +79,7 @@ const (
// BlockConnectedNtfn defines the blockconnected JSON-RPC notification.
//
// NOTE: Deprecated. Use FilteredBlockConnectedNtfn instead.
// Deprecated: Use FilteredBlockConnectedNtfn instead.
type BlockConnectedNtfn struct {
Hash string
Height int32
@ -89,7 +89,7 @@ type BlockConnectedNtfn struct {
// NewBlockConnectedNtfn returns a new instance which can be used to issue a
// blockconnected JSON-RPC notification.
//
// NOTE: Deprecated. Use NewFilteredBlockConnectedNtfn instead.
// Deprecated: Use NewFilteredBlockConnectedNtfn instead.
func NewBlockConnectedNtfn(hash string, height int32, time int64) *BlockConnectedNtfn {
return &BlockConnectedNtfn{
Hash: hash,
@ -100,7 +100,7 @@ func NewBlockConnectedNtfn(hash string, height int32, time int64) *BlockConnecte
// BlockDisconnectedNtfn defines the blockdisconnected JSON-RPC notification.
//
// NOTE: Deprecated. Use FilteredBlockDisconnectedNtfn instead.
// Deprecated: Use FilteredBlockDisconnectedNtfn instead.
type BlockDisconnectedNtfn struct {
Hash string
Height int32
@ -110,7 +110,7 @@ type BlockDisconnectedNtfn struct {
// NewBlockDisconnectedNtfn returns a new instance which can be used to issue a
// blockdisconnected JSON-RPC notification.
//
// NOTE: Deprecated. Use NewFilteredBlockDisconnectedNtfn instead.
// Deprecated: Use NewFilteredBlockDisconnectedNtfn instead.
func NewBlockDisconnectedNtfn(hash string, height int32, time int64) *BlockDisconnectedNtfn {
return &BlockDisconnectedNtfn{
Hash: hash,
@ -163,7 +163,7 @@ type BlockDetails struct {
// RecvTxNtfn defines the recvtx JSON-RPC notification.
//
// NOTE: Deprecated. Use RelevantTxAcceptedNtfn and FilteredBlockConnectedNtfn
// Deprecated: Use RelevantTxAcceptedNtfn and FilteredBlockConnectedNtfn
// instead.
type RecvTxNtfn struct {
HexTx string
@ -173,7 +173,7 @@ type RecvTxNtfn struct {
// NewRecvTxNtfn returns a new instance which can be used to issue a recvtx
// JSON-RPC notification.
//
// NOTE: Deprecated. Use NewRelevantTxAcceptedNtfn and
// Deprecated: Use NewRelevantTxAcceptedNtfn and
// NewFilteredBlockConnectedNtfn instead.
func NewRecvTxNtfn(hexTx string, block *BlockDetails) *RecvTxNtfn {
return &RecvTxNtfn{
@ -184,7 +184,7 @@ func NewRecvTxNtfn(hexTx string, block *BlockDetails) *RecvTxNtfn {
// RedeemingTxNtfn defines the redeemingtx JSON-RPC notification.
//
// NOTE: Deprecated. Use RelevantTxAcceptedNtfn and FilteredBlockConnectedNtfn
// Deprecated: Use RelevantTxAcceptedNtfn and FilteredBlockConnectedNtfn
// instead.
type RedeemingTxNtfn struct {
HexTx string
@ -194,7 +194,7 @@ type RedeemingTxNtfn struct {
// NewRedeemingTxNtfn returns a new instance which can be used to issue a
// redeemingtx JSON-RPC notification.
//
// NOTE: Deprecated. Use NewRelevantTxAcceptedNtfn and
// Deprecated: Use NewRelevantTxAcceptedNtfn and
// NewFilteredBlockConnectedNtfn instead.
func NewRedeemingTxNtfn(hexTx string, block *BlockDetails) *RedeemingTxNtfn {
return &RedeemingTxNtfn{
@ -205,7 +205,7 @@ func NewRedeemingTxNtfn(hexTx string, block *BlockDetails) *RedeemingTxNtfn {
// RescanFinishedNtfn defines the rescanfinished JSON-RPC notification.
//
// NOTE: Deprecated. Not used with rescanblocks command.
// Deprecated: Not used with rescanblocks command.
type RescanFinishedNtfn struct {
Hash string
Height int32
@ -215,7 +215,7 @@ type RescanFinishedNtfn struct {
// NewRescanFinishedNtfn returns a new instance which can be used to issue a
// rescanfinished JSON-RPC notification.
//
// NOTE: Deprecated. Not used with rescanblocks command.
// Deprecated: Not used with rescanblocks command.
func NewRescanFinishedNtfn(hash string, height int32, time int64) *RescanFinishedNtfn {
return &RescanFinishedNtfn{
Hash: hash,
@ -226,7 +226,7 @@ func NewRescanFinishedNtfn(hash string, height int32, time int64) *RescanFinishe
// RescanProgressNtfn defines the rescanprogress JSON-RPC notification.
//
// NOTE: Deprecated. Not used with rescanblocks command.
// Deprecated: Not used with rescanblocks command.
type RescanProgressNtfn struct {
Hash string
Height int32
@ -236,7 +236,7 @@ type RescanProgressNtfn struct {
// NewRescanProgressNtfn returns a new instance which can be used to issue a
// rescanprogress JSON-RPC notification.
//
// NOTE: Deprecated. Not used with rescanblocks command.
// Deprecated: Not used with rescanblocks command.
func NewRescanProgressNtfn(hash string, height int32, time int64) *RescanProgressNtfn {
return &RescanProgressNtfn{
Hash: hash,

Some files were not shown because too many files have changed in this diff.