BITCOIN WALLET SYNCHRONIZING WITH NETWORK SLOW – protopalspec

Mega FAQ (Or: Please come here for your questions first)

Qbundle Guide (Step by step setup & Bootstrap) https://burstwiki.org/wiki/QBundle
1( I want to mine or activate my account. Where do I find the initial coins?
You only need 1; an outgoing transaction or reward reassignment will set the public key. Get them from:
https://www.reddit.com/burstcoinmining/comments/7q8zve/initial_burstcoin_requests/
Or (Faucet list)
https://faucet.burstpay.net/ (if this is empty, come back later)
http://faucet.burst-coin.es
Or
https://forums.getburst.net/c/new-members-introductions/getting-started-initial-burstcoin-requests
2( I bought coins on Bittrex and want to move to my new wallet, but can't. Why?
Bittrex will only send to accounts with a public key (a Bittrex requirement, not a Burst one), so see number 1 and either set the name on the account (IF you will not mine) or set the reward recipient to the pool. Either action will activate the account and allow transfers from Bittrex.
3( I sent coins from Poloniex/anywhere to Bittrex and they don’t show up after a considerable time. Why?
You need to set an unencrypted message on the transaction, informing Bittrex which account to send the funds to (this is in the directions on Bittrex). Did you do this? Contact Bittrex support with all the details and eventually you will get your funds.
4( How much can I make on Burst?
https://explore.burst.cryptoguru.org/tool/calculate
This gives you an average over time assuming a few things: average luck, 100% uptime, no overlapping plots, pool fees, and a good plot scan time (<20 seconds). If you do not have all of these, you may not see that number.
5( If I use SSDs, would I make more money?
No, it’s 95% capacity and 5% scan time that determine success. More plot area = better deadlines = better chance of forging a block, or better rates from a pool.
6( What is ‘solo’ and ‘pool’ (wasn’t his name Chewbacca?)
Solo is where you attempt to ‘forge’ (mine) a block by yourself; you get 100% of the block reward and fees. But you only receive funds if you forge; there is no Burst for coming in second place.
Pools allow a group of miners to ‘pool’ their resources together; when a miner wins, the winnings go to the pool (this is done via the reward assignment you completed earlier), are divided according to the pool’s percentages and methods, and Burst is sent out according to pool rules (minimum pay-out, time, etc.).
7( I have been mining for 2 days and my wallet doesn’t show any Burst. Why?
Mining solo: it is win-or-lose, nothing in between, and winning is down to luck and plot size. Pool mining: because it costs 1 Burst to send Burst, pools have either a time requirement (every X days) or a minimum amount (100 Burst or more), so you need to research your pool. Some pools (CryptoGuru and similar) allow you to set the limit that must be met before sending.
8( How do I see what I have pending?
On CryptoGuru-based pools it’s the ‘Pending (Burst)’ column; on other pools, look for the numbers next to your Burst ID. One is paid and the other pending.
9( I’m part of a pool and I forged a block, but I didn’t receive the total value of the block, why?
A pool has 2 basic numbers that denote the pay-out method, in the format ‘XX-XX’ (e.g. 50-50). The first number is the % paid to the block forger (miner) and the second is the retained value, which is paid to historic ‘shares’ (that is, past blocks that the pool didn’t win, but where a miner was ‘close’ to winning with a good submitted deadline).
Examples of pools:
0-100 (good for <40TB)
20-80 (30-80TB)
50-50 (60-200TB)
80-20 (150-250 TB)
100-0 (solo mine, 150+ TB)
Please note that there is an overlap as this is personal preference and just guidance; a higher historical share value means a smoother pay-out regime, which some people prefer. If fees are not factored in, or are the same on different pools, the pay-out value will be the same over a long enough period.
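To make the split concrete, here is a minimal sketch of how an ‘XX-XX’ pay-out divides a won block; the flat model and the numbers are illustrative assumptions, not any specific pool’s actual rules:

```python
# Rough sketch of an XX-XX pool pay-out split (illustrative model, not a real pool's rules).
def split_block_reward(reward_burst, forger_share_pct):
    """Split a won block between the forger and the historic-share pot."""
    to_forger = reward_burst * forger_share_pct / 100.0
    to_historic_shares = reward_burst - to_forger
    return to_forger, to_historic_shares

# Example: a 50-50 pool wins a 1,000 Burst block.
forger, historic = split_block_reward(1000, 50)
print(f"Forger receives {forger} Burst now; {historic} Burst is paid out to historic shares")
```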
10( Is XXX model of hard drive good? Which one do you recommend?
CHEAP is best. If you have 2 new hard drives, both covered by warranty, get the one with the lowest cost per TB (expressed as $/TB, calculated by dividing the cost by the number of terabytes), because plot size is KING.
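As a quick sketch of that comparison (the prices and capacities below are made-up examples):

```python
# Compare hypothetical drives by cost per terabyte (lower $/TB is better).
drives = {"Drive A": (150.0, 8), "Drive B": (170.0, 10)}  # name: (price in $, capacity in TB)

for name, (price, tb) in drives.items():
    print(f"{name}: ${price / tb:.2f}/TB")
# Drive B wins here ($17.00/TB vs $18.75/TB) despite the higher sticker price.
```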
11( How many drives can I have on my machine?
For best performance, you can have up to 2 drives per thread (3 on a new fast AVX2 CPU). So that quad-core Core 2 Quad can have up to 8 drives, but a more modern i7 with 4 cores + hyper-threading (8 threads) can squeeze in 8 × 3 = 24 drives. (Performance while scanning will suffer.)
12( Can I game while I mine?
Some people have done so, but you cannot have the ‘maximum’ number of drives and play games generally.
13( Can I mine Burst and GPU mine other coins?
Yes, if you CPU Mine Burst.
14( I’m GPU plotting Burst and GPU mining another coin, my plots are being corrupted, why?
My advice is to dedicate a GPU to either mining or plotting; don’t try to do both.
15( What is a ‘plot’?
A plot is a file that contains hashes; these hashes are used to mine Burst. A plot is tied to an account, but plots can be created (with the same account ID) on other machines and connected back to your miner(s).
16( Where can I trade/buy/sell Burst?
A list of exchanges is maintained on https://www.reddit.com/burstcoin/ (on the right, ‘Exchanges’ tab). The biggest at the moment are Bittrex and Poloniex; some offer direct fiat-to-Burst purchase (https://indacoin.com for example).
17( Do I have to store my Burst off the exchange?
No, but it’s safer off the exchange, away from the hackers who target exchanges. If you cannot guarantee the safety or security of your home computer against Trojans etc., then it might be best to leave it on an exchange (but enable 2FA security on your account PLEASE!).
18( What security measures can I take to keep my coin safe?
When you create an account, sign out and back in to your wallet (to make sure you have copied the passphrase correctly) and keep multiple copies of the key (at least one physically printed or written down and kept in a safe place, better in 2 places). Do not disclose the passphrase to anyone. Finally, use either a local wallet or a trusted web wallet (please research before using any web wallet).
19( How can I help Burst?
Run a wallet, which will act as a node (or, if you’re a programmer, contact the Dev team). Bring attention to Burst (without ‘shilling’ or trying to get people to buy), and help translate into your local language.
Be a productive member of the community and contribute experience and knowledge if you can, or help others get into Burst.
20( Will I get coins on the fork(s) and where will they be?
There will be no new coin, and no new coins to be given away/airdropped etc.; the forks are upgrades to Burst and there will not be a ‘classic’ or ‘new’ Burst.
21( Will I need to move my Burst off of the exchange for the fork?
No. Your transactions are on the blockchain, which will be used on the fork, and they will be visible after the move; nothing needs to be done on your side.
22( Where can I read about the progress of Burst and news in general on the community?
There is no finer place than https://www.burstcoin.ist/
23( What are the communities for Burst and the central website?
Main website: https://www.burst-coin.org/
Reddit: https://www.reddit.com/burstcoin and https://www.reddit.com/burstcoinmining/
Burstforum.net: https://www.burstforum.net/
Getburst forum: https://forums.getburst.net/
Official Facebook channel: https://m.facebook.com/groups/398967360565392
(these are the forums that are known to be supporting the current Dev Team)
Other ways to talk to the community:
Discord: https://discordapp.com/invite/RPhpjVv
Telegram (General): https://t.me/burstcoin
Telegram (Mining): https://t.me/BurstCoinMining
24( When will Burst partner up with a company?
Burst is a currency; the USD does not ‘partner up’ with a company, and the DEV team will not partner up and hand control over to special interests.
25( Why is the DEV team anonymous?
They prefer anonymity, as it allows them to work without constant scrutiny and questions unless they wish to engage; plus the aim is for Burst to become a major contender, and that brings security issues. They will work and produce results; they owe you nothing, and if you cannot see the vision they provide then please do not ‘invest’ for short-term gain.
26( When moon/Lambo/$100/make me rich?
My crystal ball is still broken, come back to the FAQ later for an answer (seriously, this is a coin to hold; if you want to day-trade, good luck to you).
27( How can I better educate myself and learn about Dymaxion?
Read about the Dymaxion here: https://www.reddit.com/burstcoin/wiki/dymaxion
28( My reads are slow, why?
There are many reasons for this. If your computer has a decent spec it’s likely due to USB3 hub issues, or plugging into a USB2 hub, but other causes include multiple plots in the same folder. It’s best to visit the mining subreddit; they can help more than a simple FAQ can: https://www.reddit.com/burstcoinmining/
29( I have a great idea for Burst (not proof of stake related)?
Awesome! Please discuss with the DEV team on discord https://discordapp.com/invite/RPhpjVv
(Please be aware that this is a public forum, you need to find who to ask/tell)
30( I have a great idea for Burst (Proof of stake related)?
No. If you want PoS, find a PoS coin. On the Tangle which is being implemented, a PoS/PoW/PoC coin can be created, but BURST will always be PoC-mined. You are welcome to implement a proof-of-stake coin on top of it!
31( Will the Dev team burn any coins?
Burst is not an ICO, so any coins will need to be bought to be burnt. You are welcome to donate, but the DEV team have no intention of burning any coins, or increasing the coin cap.
32( When will there be an IOS wallet?
The iOS wallet is completed; we are waiting for it to go on the App Store. Apple is the delaying factor.
33( Why do overlapping plots matter?
Plots are like collections of lottery tickets (where only one ticket can win). Having 2 copies of the same tickets is not useful, and it means that you have less coverage of ‘all’ the possible numbers. It’s not good; avoid it.
34( My local wallet used to run, I synchronised it before and now it says ‘stopped’. When I start it, it stops after a few seconds. What should I do?
I suggest that you change the database type to portable MariaDB (on Qbundle, at the top, ‘Database’ select, ‘change database’) and then re-import the database from scratch (see 35)
35( Synchronising the block chain is slow and I have the patience of a goldfish. What can I do?
On Qbundle, under ‘Database’ select ‘Bootstrap chain’ and make sure the CryptoGuru repository is selected, then ‘Start Import’. This will download and quickly populate the local database (I suggest Portable MariaDB, see 34) (lol, loop).
36( What will the block reward be next month/will the block rewards run out in 6 months?
https://www.ecomine.earth/burstblockreward/ Rewards will carry on into 2026, but transaction fees will be a bigger % by then, and so profitable mining will continue.
37( How can I get started with Burst (wallet/mining/everything) and I need it in a video
https://www.youtube.com/watch?v=LJLhw37Lh_8 Watch and be enlightened.
38( Can I mine on multiple machines with the same account?
Yes, if you want to pool mine this can be done (but be prepared for small issues like reported size being incorrect. Just be sure to keep question 33 in mind.)
39( Why do some of my drives take forever to plot?
Most likely they are SMR drives. It’s best to plot onto another drive and then move the finished plot (or part of a plot) across to the SMR drive, as this is much quicker. SMR drives are fine on reads; it’s random writes that are terrible.
So to fill an SMR drive quickly, plot to a non-SMR drive (or better still an SSD) in as big a chunk as possible (fewer files is better) and then move the files across. A version of XPlotter, called SPlotter, can do this easily.
https://github.com/NoParamedic/SPlotter
40( I have a great idea; why not get listed on more exchanges!!
Exchanges list coins because of 2 reasons:
  1. Clients email and REQUEST Burst, providing details like:
    i. https://www.burst-coin.org/information-for-exchanges
  2. The coin pays (often A LOT, seriously we’ve been asked for 50 BTC)
I suggest you speak with your exchange and ask when they will offer Burst.
41( Do you have a roadmap?
https://www.burst-coin.org/roadmap
42( Why is the price of Burst going up/down/sideways/looping through time?
The price of burst is still quite dependent upon Bitcoin, meaning that if Bitcoin gains, the value of Burst gains, if Bitcoin drops then Burst also drops. If there is news for Burst then we will see something independent of Bitcoin moving. Variations can be because of people buying in bulk or selling in bulk. There are also ‘pump and dump’ schemes that we detest, that can cause spikes in price that have nothing to do with news or Bitcoin, just sad people taking advantage of others.
43( Where is the best place to go with my mining questions?
https://www.reddit.com/burstcoinmining/
or https://t.me/BurstCoinMining
44( What hardware do you advise me to buy, is this computer good?
See question 43 for specific questions on hardware; it depends on so many variables. The ‘best’ in my opinion is a 36-bay Supermicro storage server; they usually have dual 6-core CPUs and space for 36 drives. No USB cables, a plotting and mining monster. Anything else, DYOR.
45( Where do you buy your hard drives?
I have bought most from eBay in job lots, plus some refurbished drives with short warranties. Everything else I have bought from Amazon.
46( Can I mine on my Google drive/cloud based storage?
In short: no. If you want to try, be my guest, but you’ll get to maybe 1 TB and then find that your local connection isn’t fast enough, or that shortly afterwards your account is blocked for various reasons.
47( Can I mine on my NAS?
Some NAS units can mine by themselves (if it can run the miner, it can scan locally), but generally they’re not very fast; good for maybe 16 TB? Having a plot on a NAS and mining from another computer depends on the network speed between the NAS and the scanning computer. I believe you can scan about 8 TB (maybe a bit more) and keep the scan times acceptable, but YMMV.
48( How can I set up a node?
No need to set up a separate node; just set up a wallet (version 2.0.4) or Qbundle (2.2) and it will do the rest.
49( Are the passphrases secured?
I’ll leave it to the efforts of a few people to show how secure a 12-word passphrase is: https://burstforum.net/topic/4766/the-canary-burst-early-warning-system Key point: brute-forcing it would take around 13,537,856,339,904,134,474,012,675,034 years.
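If you want to sanity-check that kind of figure yourself, here is a rough back-of-the-envelope sketch; the wordlist size and guess rate are assumptions for illustration, not the exact parameters used in the linked thread:

```python
# Back-of-the-envelope brute-force estimate for a 12-word passphrase.
# ASSUMPTIONS: a 1,626-word list and an attacker testing one billion passphrases per second.
wordlist_size = 1626
words = 12
guesses_per_second = 1e9

combinations = wordlist_size ** words
seconds = combinations / guesses_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{combinations:.3e} combinations, roughly {years:.3e} years at 1e9 guesses/second")
```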
50( I logged into my account (maybe with a different burst ID) and see no balance!!
I have dealt with this very issue multiple times, and there are only 3 options:
  1. You have typed in the password incorrectly
  2. You have copy-pasted the password incorrectly
  3. You are trying to log into a ‘local wallet’ which the block chain has not finished updating
The last one generally shows the same Burst ID, but with old balances. No, this is not a security problem, and yes, Windows loves to add a space after the phrase you copy, and that space matters when getting into your account.
51( Are there channels for my language?
Telegram:
Spanish: https://t.me/burstcoin_es
German: https://t.me/Burstcoinde
Italian: https://t.me/BurstCoinItalia
Forum:
Spanish: https://burst-coin.es/index.php/forum/index
Discord:
Spanish: https://discordapp.com/invite/RaaGna9
Bulgarian: https://discord.gg/r4uzTd
(there are others, please contact me to put up)
52( I am mining in a pool, and it says that my effective capacity is lower than I actually have, why?
  1. If you've been mining for less than 48 hours, or have just added additional capacity, it will take time.
  2. The value fluctuates (normally ±5%, but it can be up to 10% at times).
  3. Read the ‘Quick info’ tab about adjusting your deadline to compensate for changes; revisit once a month for best results.
  4. If you have overlapping plots it will also be lower so be aware of this (see question 33)
53( What pool should I join?
First of all, read question 9, after you have understood that it depends on the size (and how patient you are) select from the following list: https://www.ecomine.earth/burstpools/
54( What miner to use?
I use Blago’s miner, there are many out there but Blago’s works for me on CPU mining, it can be found in Qbundle.
55( What Wallet to use (I use windows)?
Qbundle: https://github.com/PoC-Consortium/Qbundle/releases/ guide: https://burstwiki.org/wiki/QBundle
56( What Wallet to use (Linux)?
https://package.cryptoguru.org/ for Debian and Ubuntu; for Mac, read:
https://www.ecomine.earth/macoswalletinstallguide/
57( Will I need to 'replot' after POC2 (second fork) happens?
No, there will be a tool which will optimise your plots; it is not CPU intensive (it basically re-shuffles your plot), just IO intensive. You do not need to replot.
TurboPlotter and https://github.com/PoC-Consortium/Utilities/tree/mastepoc1to2.pl are tools that can be used to perform the optimisation, but PLEASE wait until after the hard fork to optimise.
58( Will the transaction fee always be 1 burst?
No, dynamic fees are coming in the next fork.
submitted by dan_dares to burstcoin

AERGO – THE HYBRIDIZED BLOCKCHAIN

The benefits of blockchain technology have not gone unnoticed, resulting in many blockchain implementations existing today. Most of these use and operate on computer networks that are easy to join and participate in. These permissionless implementations are often known as “public blockchain protocols” (such as Bitcoin and Ethereum). However, the use of an existing blockchain comes with many problems for existing businesses, mainly due to the lack of control over its features and development. While private/permissioned blockchains aim to fulfil the promise of becoming “fit-for-purpose”, they entail immense costs in terms of infrastructure and forfeit the ability to evolve at the speed of open source.
The vast majority of both public and private implementations are in the early stages of their development (and currently use 3rd generation technologies). Projects typically focus on one type of blockchain versus the other. As such, most are only used for simple proof-of-concept (“PoC”) test-cases. Despite many such projects, the evolution of the blockchain stack is still stagnating, due to difficulties with enterprise IT integration and a lack of developer-friendly and easy-to-use software tools. Many implementations also lack the enterprise-grade capabilities that are critical to run real business applications in both private and public deployments. The technology behind blockchain needs to mature and become more accessible for it to become a widely used and deployed architecture. Additional services and capabilities are also needed for it to be a commonly used business platform.
What is public blockchain and private blockchain? Is blockchain meant to be privatized?

PUBLIC BLOCKCHAIN
A public blockchain is a blockchain network that is fully open and decentralized, where anyone can join and participate in the network if they follow the protocol of the public chain.
The network typically has an incentivizing mechanism to encourage more participants to join. Bitcoin is one of the largest public blockchain networks in production today; it provides the potential for maximum participation, and increased participation results in more computer “nodes” within the network.
One of the drawbacks of existing public blockchains is the substantial amount of computational power that is necessary to maintain a distributed ledger at a large scale. More specifically, to achieve consensus, each node in the network must solve a complex, resource-intensive cryptographic problem (called proof-of-work (“PoW”)) to ensure all nodes are synchronised and trust is maintained.
This process is complex, slow and consumes vast amounts of energy (electricity).
Another disadvantage for particular users is the openness of many existing public blockchains, which provide little to no privacy for transactions (subject to pseudonymity). They also only support a weak notion of overall system level control as they are open to anyone to participate in the network.
These are important considerations for future enterprise use of blockchain.
However, despite the above, in a public blockchain, no one person, group or organisation controls the information which is on the blockchain; or the series of rules that underpin the protocol itself. No member can unilaterally change the protocols of the blockchain and the information contained within it. Users should be able to fully trust the public blockchain and therefore put their complete trust in a third party that uses the same blockchain.
In short, public blockchains can provide maximum trust but are slow and expensive to run. They can also be extremely difficult to upgrade, because they require consensus amongst a large group of participants, many of whom may have different (and even competing) interests. Further, their trusted status may be undermined by various factors, such as malicious activity (such as so-called “front-running” by miners); by concerted behavior (e.g. when mining power is concentrated in a small number of participants); or even legal complexities that arise from having transactions recorded and validated in numerous jurisdictions all at once.

PRIVATE BLOCKCHAIN
A private blockchain is a blockchain network with limited openness and decentralization compared with a public blockchain; authorization under specific rules is required for a new node to join the network.
A private blockchain network requires an invitation and must be validated by either the network starter or by a set of rules put in place by the network starter. Businesses that set up a private blockchain will generally set up a permissioned network. This places restrictions on who is allowed to participate in the network, and in which transactions. Participants need to obtain an invitation or permission to join. The access control mechanism can vary: for example, existing participants could decide future entrants, a regulatory authority could issue licenses for participation, or a consortium could make the decisions instead. Once an entity has joined the network, it will play a role in maintaining the blockchain in a decentralized manner.
Private blockchains can (with careful system level IT design) permit greater scalability in terms of transactional throughput.
In short, private blockchains provide improved privacy, maximum throughput and are potentially cheaper to run, however they lack the level of trust and network effects that are gained from the more widely deployed public blockchains.
A lot of businesses are experimenting with building their own private blockchains. A number of these initiatives (and associated consortia) are facing difficulties getting these private blockchains into real-life production systems.
Some of the reasons for this are perhaps:
  1. Building proprietary private blockchain systems requires specialist IT, cloud and developer skills and know-how that only very few firms possess.
  2. Building these using an open source model - with the intention of using, enhancing and maintaining these longer term - is extremely challenging (and software development and maintenance is not typically a core-capability for these businesses).
  3. The two above factors can significantly increase the long-term costs of such systems.
Therefore, for companies looking to integrate blockchain technology into their business processes, very careful consideration needs to be placed on the (i) trust plus interoperability (public) need versus (ii) performance plus privacy (private) requirement.
This is a fundamental paradox when dealing with combined public and private blockchains.
Due to stringent security and compliance requirements, large companies have traditionally implemented their IT systems in private computer architectures (such as private internal clouds). For the same reasons, many of these firms are experimenting with private blockchains, and choosing not to use any form of public protocol.
A number of industry consortia (such as R3 and Hyperledger) may be limiting their potential long-term value and usefulness by only considering one type of blockchain architecture.
In fact, much of the innovation in blockchain is actually happening in the public protocol space. This is evidenced by the sheer level of new ideas, projects and services that have been fueled by the many large scale (primarily crypto-currency driven) blockchain projects. The majority of these projects do focus on direct dApp development but this also drives certain innovations in the underlying (primarily public) blockchains that run them.
We believe that truly transformative business benefits can be achieved if a hybrid approach to blockchain is used. This approach would help maximize the benefits (and reduce the drawbacks) of a combined public and private blockchain architecture. We see the benefit in a business architecture that uses a public blockchain to provide enterprise integrity, immutability and a trustless network environment for data and value (asset) transactions, coupled with a private blockchain that enables regulatory-compliant record-keeping and privacy, and that is configured and optimized for the required enterprise-level performance.
The key distinguishing features of the two forms of blockchain (i.e. public and private protocols) include the level of trust and control in each system. Trust and control often vary depending on the nature of the blockchain architecture and the software consensus algorithms being used. Often increases in control can result in a decrease in decentralized trust, and vice versa. Performance throughput is also becoming a serious issue for blockchain as deployments grow.
Public blockchains, like Bitcoin as mentioned, provide the potential for maximum participation and increased participation results in more computer “nodes” within the network. A larger network of nodes running a blockchain consensus algorithm increases decentralized trust. However, control can become a serious issue in this instance, if an entity gains a majority position over these computer resources. Large blockchain networks running current generation protocols and Proof-of-Work consensus algorithms are very inefficient. They draw a huge amount of energy to run the nodes and validate new transactions. The distribution of transactions is also very slow (especially for business-critical actions).
In private blockchains (such as Hyperledger Fabric) there is much more stringent control of which parties (nodes) are part of the specific blockchain network. Throughput can be increased by using state-of-the-art computers, memory and solid-state disks; coupled with well-designed network interfaces between the nodes. However, this often results in lesser decentralized trust as the networks tend to be much smaller in size than in public protocols. Newer and more innovative consensus algorithms are required. (The figure depicts the two models).

Permissionless (Public) vs. Permissioned (Private) Blockchains
The decision on whether a business chooses a public or private blockchain will depend on a few key considerations, such as a careful balancing act between:
(i) the need to maximize trust in the transactions,
(ii) control over the system, and
(iii) overall performance throughput.
AERGO PLATFORM (Bridging the gap between public blockchain and private blockchain)
AERGO seeks to leverage and extend both public and private blockchains, supported by modern cloud architectures.
Just like the development, evolution and adoption of “hybrid cloud” over the past 10 years, AERGO intends to facilitate the creation of hybrid blockchain based products and business models. AERGO proposes to use state-of-the-art technology that is implemented and manifested as a simple to use practical blockchain protocol.
This protocol is intended to be designed so that it can be used in any combination of (i) a public, (ii) a private, or (iii) a combined public-plus-private blockchain architecture configuration. This is depicted in the figure below. AERGO aims to become the de facto enterprise blockchain: one that bridges the gap between public and private networks. It is a platform that uses core blockchain technology and deployment blueprints that have already been proven in real-life, in-production systems across the world by Blocko (a leading blockchain technology and enterprise IT integration-services company with operations in the UK, South Korea and Hong Kong; its COINSTACK-based blockchain systems have already been deployed to 25 million users in over 20 in-production systems).

AERGO bridges the Public and Private blockchain worlds for Enterprise IT
AERGO intends to combine the practicality and innovation of public blockchains, with the performance and security provided by private blockchains.
Just as with cloud computing, we hope to develop the technology to enable companies to develop and run their (dApp) applications on a secure public infrastructure. When needed, these companies will be able to easily and seamlessly migrate some (or even all) of these applications to a more high-performance private blockchain.
All of this and without losing any of the benefits of their previous public blockchain model implementation.
To enable such a comprehensive hybrid blockchain architecture, innovative technologies and a novel data bridging framework (proxy) are required to make these different types of system work together. The bridging proxy would allow bi-directional communication between multiple public and private blockchain networks.
The ability to develop, compile and embed smart contracts into such a diverse architecture will also be required. This also needs to be supported by a very high-performance and efficient virtual machine engine for future and more comprehensive smart contract development.
This principle is depicted in the illustrative diagram below.

AERGO ecosystem network illustrating the public chain and each private chain bridged
CONCLUSION
AERGO aims to advance enterprise blockchain, by opening up a new era of mass market usage of blockchain. An era where businesses can benefit from both public and private blockchain innovation, while focusing on building, deploying and managing new services. In short, the AERGO Project aims to provide:
  1. Advanced, yet friendly and easy to use technology for developers and contractors.
  2. A secure and fast public and private blockchain cloud architecture for businesses.
  3. An open ecosystem for third parties and businesses to connect and engage with.
submitted by Joshuaaniekwe to Aergo_Official

Dapps developers survey and implication on Bitcoin's blocksize debate

https://medium.com/fluence-network/dapp-survey-results-2019-a04373db6452
While the survey provides insights into many aspects of dapps and Ethereum, I want to focus on the state of Ethereum network and what it means for Bitcoin.
The Bitcoin blocksize controversy, which ultimately spawned Bitcoin Cash, has been heavily debated since 2015. But unfortunately neither side of the debate had factual data to support its hypothesis: at the time there was no other altcoin with a bigger blocksize and real usage to fill its bigger capacity. I think today Ethereum fits that description: it has been working for several years, it has higher transaction capacity than Bitcoin, and that capacity does get utilized. Vitalik Buterin is also one of the earliest "big blockers" in the space.
With that context, let's look at the survey. Section 3.2 "State of the network". Here are some quotes:
“The supernode [full node] is unstable — lots of issues with handling transactions” — Anonymous
“Geth could not finish synchronisation for 4 weeks on a good machine” — Alice
“Mainnet [is] behaving differently than testnets.” — FABG
“Slow. Huge hard drive space requirements.” — Quick Blocks
Did the ETH blockchain get pushed to the limit with its current configuration?
ETH processes almost double the number of transactions that BTC does daily, and this figure reached 3x during the 2017 peak. If ETH is at its limit, what would happen to Bitcoin Cash, which has ~20x Bitcoin's capacity (if people actually used it)? A 40-week sync would certainly push developers away, wouldn't it?
submitted by fgiveme to CryptoTechnology

ECOL, breaking the value island of digital assets

In the era of economic digitization, "connection" and "interoperability" are principles that must be followed, whether in the digital asset market that has risen with blockchain technology or in the data assets of financial enterprises. Because of differences in equipment, architecture, geographic location, and cloud versus local services, data from devices and terminals ends up dispersed. The inability to centrally control and efficiently manage this data has become a major problem in the financial market.
This is especially true in the digital currency market beneath Bitcoin, where token projects emerge endlessly, to say nothing of the various "air coins" and "copycat coins" with no actual value. Even high-quality tokens with real-world value exist in a state of data dispersion: they sit on different nodes and use different blockchain applications, which makes them hard to trade and circulate. At the same time, because there is no broad trust or consensus mechanism between digital assets, digital currencies cannot be transferred between different blockchain nodes, which makes the value islands of digital assets even more entrenched, just as data that cannot interoperate on the traditional Internet loses its value.
ECOL appears to be an innovative technology that addresses the value islands of digital assets in the blockchain 3.0 era. It supports multiple side chains running in parallel, communication between the main chain and side chains, and transfers between different digital assets; it is both integrated and partitioned, which makes it safe and convenient. The main chain is mainly responsible for maintaining security and consensus, while the side chains provide token issuance, asset exchange and cross-chain interoperability in parallel with the main chain. Techniques such as pass-through construction and chain slimming can effectively prevent block bloat and garbage accumulation and shorten synchronization time, clearing obstacles for high concurrency and the Lightning Network. If the consensus mechanism and the carrier of value are the keys to expanding a blockchain's internal capacity, then side-chain technology is what connects external blockchain nodes; that is, blockchain scaling plus an external architecture are the keys to solving slow transmission and inefficiency. ECOL enables efficient big-data exchange, improved transmission performance, full control of asset flows, and comprehensive protection of asset data.

To facilitate the transfer of digital assets between different blockchain nodes, ECOL's side-chain technology acts like a path connecting different blockchains together, realizing blockchain expansion. Cross-chain communication makes asset identification, mutual recognition, transfers and swaps, equity trading, and property transfers easy. In the long run, this is extremely significant.
For the financial market, the rapid development of digital assets builds on the high-capacity, strongly encrypted, decentralized nature of virtual currency, which has great potential to disrupt traditional financial markets. At present, the most urgent problem to solve is the value islands that exist among digital assets.
submitted by ECOL123 to u/ECOL123

Bitcoin Origins - part 2

Afternoon, All.
This is a continuation from the previous reddit post:
Bitcoin Origins
The following are a few notes I've been making on the original development of the tech behind Bitcoin.
This is still in early draft form so expect the layout and flow to be cleaned up over time.
Also be aware that the initial release of the Bitcoin white paper and code was what we had cut down to from earlier ideas.
This means that some of the ideas below will not correspond to what would end up being made public.
As I'm paraphrasing dim memories some of the sections are out-of-order whereby some things occurred earlier rather than later. As I recall more I'll be updating this story elsewhere for uploading when it appears more substantial.
As noted on the first post ( link supplied above ):
There is no verification of truth here.
There is absolutely no evidential proof that I had any part in the project.
Take this as just a fictional story if you wish.
Bitcoin Logo
BitCoin Origins
Six Months In A Leaky Boat
continued ...
“You’re saying that we can use this proof-of-work thing to inject electronic cash into the network and have it tied to fiat currencies, but how would the network know what the local fiat currency is to figure out the correct fiat-currency-to-electronic-cash exchange rate ?”, (2) asked.
“Maybe we could have a server that keeps a record of what the various electricity companies charge and have the software get the values from there ?”, I suggested. “Some of these new mobile phones, the smart phones, the cellular network phones in folks’ pockets, have GPS chips incorporated into them, right ? And everyone has them or will be getting them as they become more popular. This means everyone will have a device on them which will allow the software to include a GPS location so that the network knows which exchange rate to use for that particular minted cash.”
“But how will the network know that the GPS coordinates haven’t been changed and set to another location ?”, (2) asked. “Wouldn’t that mean relying on a trusted third party again ? I thought you said we have to get away from that ? If we cannot trust a single computer for minting cash into the network then maybe we shouldn’t trust any at all ?”
“Uhh… dunno,” I replied. “I’ll get back to that later”, I said.
“Ok, ” (2) said. “How are we going to have the transactions sent to other people on the network ? All the other white papers are expecting people to connect directly to one of the trusted computers to purchase the electronic cash and to transfer it to someone else through them. If we’re not going to use a trusted computer for this and will have the proof-of-work generate the cash, then how do people receive or pay the cash ? Also: How would the network trust that the cash is valid if no computer is being used for time-stamping and validating the cash ?”
I told him I’d have to think about it.
Multiple ideas were given and discarded. He consulted with (3) about every possible solution and every one was a failure.
They either resulted in having to rely on at least one server to hook everything together or would break if multiple transaction messages were sent at the same time to different computers.
After a week or so of this I’d finally burnt myself out and decided that it’s quite possible that everyone else was correct when they said that you couldn’t solve double-spending in a digital world without depending upon a trusted third party.
I stopped emailing (2) at that point, hoping it’d all go away.
After a week he emailed me asking if I’d come up with another solution for testing.
I told him that I don’t think there is a solution and maybe he should just use part of what he had in his original white paper and rely on a trusted third party like everyone else.
He said something along the lines of “Like [redacted] I will ! You’ve taken me down this path of not trusting a single computer and that’s what I want. No-one's done that before and if we break it, it will probably change everything ! ”
I told him I’m taking a break from it all for a while.
Another week passes and he emails me again.
He said, “How are you feeling ? Sorry to be so harsh on you but I really need this to work. I’ll leave you be if that’s what you want. Just let me know when you’re able to continue.”
Another week goes by and whenever I begin to think of the problem I just say to myself “To [redacted] with him and his electronic cash problem.”
For comfort I turn to perusing through some of my old Win32 Asm proggys (I called them “proggys” because I thought of them as small, incomplete computer programs - kind of like examples and tutorials).
I also begun reminiscing about the Amiga 500 days and the proggys I made back then (late 1980’s through to mid 1990’s).
Knowing that one of the most difficult issues with electronic cash revolved around the networking architecture and how data would be propagated by the networked computers I began going through some of the discussions I had back in 2005 and 2006 with someone who was attempting to make a tank game.
I explained to him the main difference between TCP and UDP ( Transmission Control Protocol / User Datagram Protocol ).
If you need data packages to arrive in a particular order with confirmation that they’ve arrived then you’d use TCP.
If you need velocity of data packets you can throw all the protocol error checking out and use UDP.
That’s one of the reasons great online multi-player games use UDP. It reduces the latency of the data being transmitted around the network.
The main difficulty is in building the gaming system in such a way so that the data the servers and clients transmit and receive work when data packets never arrive.
TCP guarantees delivery if the network is functioning while with UDP you do not know if a particular packet ever arrived or if packets arrived in a different order to transmission due to separate packets traversing the internet via different pathways.
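To make the trade-off concrete, here is a minimal sketch of sending the same payload both ways in Python; it is a toy loopback example with made-up port numbers, not code from the actual game or wallet:

```python
import socket

PAYLOAD = b"tank at (10, 42), heading 90"

# UDP: fire-and-forget datagram. Fast, but no guarantee it arrives, or arrives in order.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(PAYLOAD, ("127.0.0.1", 9999))
udp.close()

# TCP: connection-oriented stream. Delivery and ordering are guaranteed while the
# connection is up, at the cost of handshakes, acknowledgements and retransmits.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("127.0.0.1", 9998))  # will fail unless something is listening on this port
tcp.sendall(PAYLOAD)
tcp.close()
```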
Many online games were usually built for single-player first and the multi-player code would be chucked into the codebase near the end of development.
This would mean that all of the game code objects and classes were made to use known values at any particular time and could not work in a UDP environment without re-architecting the entire code base from scratch.
You’d find many of the games that also included multi-player gameplay options ended up using TCP for the network communications and this made all of these games slow over the network with high latency and unplayable lag as the gameplay would be faster than the network data packets telling your computer where your opponents are located.
The various tank games around 2005 were built as above. I convinced this person to focus on the multi-player aspect of the game because he could always add in single-player later on.
Multiple players would have to drive and fire tanks around a field while being updated continuously about the complete state of the network.
This is usually accomplished by having a single server that receives all of the current data from all the player clients and dishes out the official game state back to all of those player clients so that everyone knows who went where, who fired at what and who has been hit.
However even with using UDP there is a bottleneck in the network with the server itself only being able to process a peak number of connections and data throughput every second. It could only scale so high.
We had talked about different ways to improve this by possibly having relay servers on some of the players computers or having a more peer-to-peer like structure so that each player client only had to get the latest data from its nearest neighbours in the network and only transmit to their peers so that a fully server-less multi-player game could be created.
How the data could be moved about without someone creating a hack that could change the data packages in their favour couldn’t be figured out.
In the end he went with using a central server with both TCP and UDP depending upon what data packages were needed to be sent - general gameplay data (tank movements) via UDP and server state (for confirming who hit what) via TCP.
If a peer-to-peer network was to be used for electronic cash then to be scalable the data packages must be able to be transmitted with as high a velocity as possible. It must work with the majority of transmissions using UDP.
If two-way communication is required then a return ip/port can be included within a UDP data package or a TCP connection could be used.
I had also read and reread this thing that has been going around the crypto community for ages called the Byzantine Generals Dilemma (or worded in a similar way).
It’s supposed to be impossible to solve and at least a couple of well-known academics and crypto folks had “proven” it was impossible to solve only a few years previously. They had pretty much staked their reputations on the fact that it was unsolvable.
I thought “Wouldn’t it be absolutely hilarious if the solution to this double-spending problem is also the solution to the impossible Byzantine Generals Dilemma and could be found using ideas from the Amiga days and 3D programming and uses multi-player gaming techniques ? That would annoy the [redacted] out of the crypto community and take those elitists down a peg or two !”
(This is where you’d see the screen go all watery-wavy as the scene morphs to a time in the past when I was a moderator of the Win32 Asm community)
The assembly community and the crypto community share a lot in common.
They’re made up of some of the most brilliant folks in the computing industry where huge egos do battle against one-another.
You’d also find folks in one community existing within the other.
Both communities are made up of both light and dark actors.
The light actors are those who are very public.
They are academics, researchers, security professionals, and so on.
The dark actors are … (and that’s all I’ll say about them).
Except to say that the light crypto actors are usually doing work to undo what the dark assembly actors are doing.
It’s one [redacted] of a game !
To have a message board that was able to accommodate all actors required a few tough rules and stiff execution of them if the forum was to continue to exist.
Many of the other assembly boards were being snuffed out by government actors forcing the hosting service to shut them down.
This was mainly due to the assembly forums insistence of allowing threads to exist which showed exactly how to break and crack various websites/ networks/ software/ etc.
Whenever one of these sites were shut down the members would disperse to the various remaining assembly boards.
So we received an influx of new members every few months whenever their previous venue went up in smoke.
However they never learned from the experience ( or, at least, some of them never learned ) and they would continue to openly chat about dark subjects on our board, which put our board in danger as well.
The moderators had to be strong but fair against these new-comers, especially knowing that they (the moderators) could be actively attacked (digitally) at any time.
Occasionally one of these new members would decide to DDOS ( Distributed Denial Of Service ) us, however they apparently forgot what message board they were attempting to DDOS, and it always ended very badly for them.
We would also occasionally get someone with quite a bit of knowledge in various subjects - some of it very rare and hard-to-come-by. It would be terrible if that member left and took their knowledge with them.
They would complain that there were too many noobs asking questions on the message board and it would be better if there was a higher level of knowledge and experience needed before the noobs could enter the message board or post a question.
Once I told one of these members, “Ok then. Let’s say that thing you’ve been talking about for the past two weeks, and calling everyone else a noob for not understanding it, is the knowledge limit. I know that you only first read about it two and a half weeks ago. Let’s say I make that the limit and predate it three weeks ago and kick your butt out of this community ?"
“That’s not very fair”, he protested.
I told him, “None of us know where the next genius is coming from. The main members of this community, the ones that input more than everyone else, have come from incredibly varied environments. Some with only a few weeks knowledge are adding more to the community every week compared to members who have been with us for years. One of the members you’ve dissed in the past couple of weeks could in turn create the next piece of software that all of us use. We don’t know that. What we need to do is have a community that is absolutely inclusive for every single person on the planet no matter where they’ve come from, what their wealth is, what their nation state does, and to keep our elitism in check.”
“Ok, fair enough, I’m sorry, please don’t kick me out.” was the usual result.
These were very intelligent folks, however they had to be reminded that we are a single species moving through time and space together as one.
(This is where you’d see the screen go all watery-wavy as the scene morphs back to me figuring out this double-spending problem)
As you may tell, I don’t tolerate elitist attitudes very well.
Which also helped when I turned towards the elitist attitudes I read in some of these academic papers and crypto white papers ( some of which were more like notes than white papers ) and messages on the crypto forums and mailing lists.
“ ‘It’s impossible to solve the Byzantine Generals Problem’ they say ? Let’s see about that !”
Byzantine General’s Dilemma
The problem is written a little bit differently depending upon where you read it.
An occasional academic may be more well-read than others and becomes the “official” wording used by many others.
I’ll paraphrase it a wee bit just so you get a general idea of the problem (pun intended).
We go back to the time of the city-states.
This is before the notion of sovereign states - there’s just a bunch of individual city-states that control the surrounding nearby country side.
Every so often a bunch of these city-states would get together and form something called an empire.
Alliances would change and friends would become enemies and enemies friends on a month-to-month and year-to-year basis.
To expand the empire the bunch of city-states would send armies controlled by generals to take over an adjacent city-state.
These city-states are huge (for their time) walled cities with armies in strong fortifications.
Let’s say there are six generals from six empire city-states that surround an adjacent city-state - all generals and their armies are equidistant from each other.
They cannot trust one another because at any moment one of them may become an enemy. Or they could be an enemy pretending to be a friend.
Due to the defensive forces of the defending city-state, the six generals know that they could take the city if every one of them attacked at the same time from around the city.
But if only a few attacked and the others retreated then the attackers would be wiped out and the surviving city-states, with their generals and their armies intact, would end up over-powering and enslaving their previous friendly city-states.
No-one could trust any other.
(This has massive parallels with modern day sovereign nations and their playing of the game with weapons, armies/air forces/navies, economics, currency, trade agreements, banks, education, health, wealth, and so on)
The generals have to send a message to the other generals telling them if they’re going to attack or retreat.
The problem is that a general could send a message to the general to his left saying that he’ll attack and send a second message to the general to his right that he will retreat.
Some possible solutions said that there should be two lieutenants to receive the message from the general and that they could check each others message to confirm that they are indeed identical before passing the messages onto the left and right messengers.
However the messengers in turn could change the message from “attack” to “retreat” or vice versa or not deliver the message at all.
Plus the generals, once a message has been sent out as “attack” could turn around and retreat, or vice versa.
I thought to myself, “I bet the folks who thought up this problem are feeling pretty damn smug about themselves.”
However I was a moderator of an assembly community.
I’d translated the DirectX8 C++ COM headers into their x86 assembly equivalent (using techniques built by others far smarter than me, and with help for some files when DX8.1 was translated), built a PIC microcontroller assembler in x86 assembly language, and many other things.
And because I've done six impossible things this morning, why not round it off with creating a solution to the Byzantine Generals Dilemma !
Elitist ego ? What elitist ego ? They’re all amateurs !
Let us begin:
“Ok,” I thought to myself. “let’s start at the beginning. We need a network. What does that look like ?”
The Generals are going to be represented as computers. The servers in the network. The nodes.
The messages are going to be the data travelling between them.
Transactions will be used as the first example of data.
For those reading, hold your hands in front of you - touch the bottom of the palms together with the fingers far apart, thumbs touching each other, twist your elbow and wrists so that the fingers are pointing upwards - slightly curved.
Fingers as Nodes
These are the nodes in the network.
The node where the thumbs touch is your own node.
No node can trust each other.
For this network structure to work, it must work even with every single node actively hostile toward one another.
“Surely the network can trust my node. I’m good ! “, you may say to yourself.
But you would be wrong.
This network is not about you. It must exist even when you don’t.
If there were a hundred nodes then it’d be ninety-nine to one against you.
As far as the network is concerned, there’s ninety-nine nodes that cannot trust you compared to your one.
So accepting that all nodes cannot trust one another, plus they are actively hostile toward one another, we can …
“But hang on ! ”, you say. “What do you mean ‘actively hostile’ ? Surely they’re not all hostile ? ”
Even if most of the time nodes will play nice with one another, the rules of the game must be structured in such a way that they will work even if all participants were actively hostile toward one another .
Because if it still worked with everyone having a go at each other then you would’ve built something that could last for a very long time.
You could build something whereby sovereign nations could no-longer undermine other sovereign nations.
It would be the great equaliser that would allow stronger nations to stop screwing around with weaker nations.
It’s the ultimate golf handicapping system. Everyone could play this game.
Kind of like my moderating style from the assembly days.
So we have these hostile nodes.
It has to be able to work with any type of message or data package. Initially it will be built for electronic cash transactions.
I will type it as "messages (transactions)" below to indicate that the messages are the messages in the Byzantine Generals Dilemma and that the message could be any data whatsoever - "transactions" just being the first. Plus in a roundabout way a message is also a transaction whereby a transaction doesn't have to be only for electronic cash - it's just an indication of what items are being transacted.
We want to send messages (transactions) between them and make sure everyone agrees that the messages (transactions) are correct.
That implies that every single node would have to store an exact copy of all the messages (transactions) and be able to read through them and confirm that they are valid.
And whenever a node receives a message (transaction) it would check it for validity and if it’s ok then that message (transaction) would be passed onto the adjacent nodes.
But how to stop a node changing the message (transaction) contents and sending different results to two adjacent nodes ?
How about taking the possibility of messages (transactions) being able to be changed out of the problem completely ?
We could use private/public keys to sign the messages (transactions) so that they couldn’t be changed.
The owner could sign a message (transaction) with the owners private key and everyone could check its validity with the owners public key, but not be able to change it.
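A minimal sketch of that sign-and-verify idea, using Ed25519 via Python’s `cryptography` package (one convenient modern choice; the original design did not specify this particular library or curve):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The owner signs the transaction with their private key...
private_key = Ed25519PrivateKey.generate()
transaction = b"pay 10 units from A to B"
signature = private_key.sign(transaction)

# ...and any node can verify it with the owner's public key, but cannot forge or alter it.
public_key = private_key.public_key()
try:
    public_key.verify(signature, transaction)
    print("transaction is authentic and unmodified")
except InvalidSignature:
    print("transaction was tampered with or the signature is invalid")
```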
Right. The messaging ( transactions/ data/ etc ) part of the problem is partially solved.
Now how do I solve the generals problem so that they all play nicely with one another ?
If we can make sure all generals (nodes) can get the identical data and that they can all validate that the data is identical and unchanged then the Byzantine Generals Dilemma would be solved.
Data Chunks
It became apparent that every major node on a network would have to store an entire copy of all of the data so that they could verify that the data was correct and hadn’t been modified.
The data would probably end up looking like a list or stack, with each incoming valid message (transaction) placed on top of the previous messages (transactions).
What looks like a stack but hasn’t got the memory restrictions like a normal assembly stack ?
When I was reminiscing about the Amiga 500 days I recalled having to muck about with IFF.
That’s the Interchange File Format.
The basics of it are like this:
Inside an IFF file there are chunks of data.
Each chunk of data begins with a chunk identifier - four characters that indicate to a program what type of data resides within that chunk (example “WAVE”, “FORM”, “NAME”).
An IFF file can have many data chunks of differing types.
The .AVI (audio/video), .ILBM (bitmap) and .WAV (audio wave) file formats are based upon the IFF.
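A rough Python sketch of the chunk idea - a four-character identifier, a length, then the payload. Real IFF files also nest chunks inside a FORM container and pad odd-length chunks, which is skipped here:
    # A 4-character identifier, a big-endian length, then the payload.
    import struct

    def pack_chunk(chunk_id: bytes, payload: bytes) -> bytes:
        assert len(chunk_id) == 4
        return chunk_id + struct.pack(">I", len(payload)) + payload

    def unpack_chunks(blob: bytes):
        offset = 0
        while offset < len(blob):
            chunk_id = blob[offset:offset + 4]
            (length,) = struct.unpack(">I", blob[offset + 4:offset + 8])
            yield chunk_id, blob[offset + 8:offset + 8 + length]
            offset += 8 + length

    blob = pack_chunk(b"NAME", b"example") + pack_chunk(b"TSTN", b"A pays B 10")
    for cid, body in unpack_chunks(blob):
        print(cid, body)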
I thought, “What if one of these data chunks was called ‘MSG ’, ‘DATA’ or ‘TSTN’ (TranSacTioN) ? ”
That might work.
Where would the proof-of-work thing come into play ?
Let’s say we replace the four-character-identifier with a header so that the proof-of-work can be done on it ?
That means the header would now include an identifier for what type of data is included within the chunk, plus a value used to set the difficulty for generating a hash (the number of zeros the generated hash must begin with), plus a value which increments as hashes are attempted so that the header data is slightly different for each hash attempt, plus the data itself.
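Something like this little Python sketch, with SHA-256 as a stand-in hash and a made-up header layout, just to illustrate the difficulty / incrementing-value idea:
    # A made-up header: type identifier, difficulty, an incrementing value,
    # and a hash of the chunk's data.
    import hashlib

    def mine_header(chunk_type: bytes, difficulty: int, data: bytes):
        data_hash = hashlib.sha256(data).hexdigest().encode()
        nonce = 0
        while True:
            header = b"%s|%d|%d|%s" % (chunk_type, difficulty, nonce, data_hash)
            digest = hashlib.sha256(header).hexdigest()
            if digest.startswith("0" * difficulty):      # enough leading zeros ?
                return nonce, digest
            nonce += 1

    nonce, digest = mine_header(b"TSTN", 4, b"A pays B 10")
    print(nonce, digest)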
But once a correct hash is generated, that particular node would mint electronic cash to pay for the electricity used.
Remember: The electronic cash is supposed to cover the actual fiat currency costs involved in doing the proof-of-work computations.
As the owner of the node computer is paid by an employer in fiat currency and has paid personal tax on it, and they have used that fiat currency to pay their electricity provider (which in turn pays company, state and value-added or goods & services taxes), then the electronic cash is equivalent to swapping your own money for a soft drink can from a vending machine.
Except, due to the media of this system, you’d be able to go to another vending machine and return your soft drink can for a refund in fiat currency again ( minus a restocking fee ) and the vending machine could be anywhere on the planet.
That means an extra message (transaction) would have to be included within the chunk’s data for the minted electronic cash.
If there must be at least two messages (transactions) within a data chunk - the actual message (transaction) plus the message (transaction) for the node that generates the hash - then maybe there could be more messages (transactions) stored in each data chunk ? How would a bunch of messages (transactions) be stored inside a data chunk ?
I remembered learning about binary space partitioning around 2006.
BSP trees were used to store 3D graphic polygons that were able to be quickly traversed so that a game could decide which scenery to display to the game player.
Quake 3 Arena and Medal of Honour: Allied Assault ( which uses the Q3A codebase ) used BSP trees for storing the scenery. Wherever the player was looking the tree would be traversed and only the polygons (triangles) that were viewable would be rendered by the graphics chip. Try to think of the player’s view in a game as being like a searchlight beam - whatever the light touches is rendered onto a person’s computer screen and everything else is ignored - unseen and not rendered.
“I wonder if I could break the transactions up into a binary space partitioned tree ?”
For those interested, a wee bit of light reading is here: Binary Space Partitioning
A binary space partitioned tree begins at one polygon and uses its surface as a plane to cut through the rest of the scene.
This kind of plane: Geometry Plane
Each polygon the plane hits gets sliced in two.
Note: The ‘node’ word used below is used for talking about the nodes in a BSP tree - not nodes in a computer network. Think of nodes as where an actual tree branch splits into two smaller branches.
All the polygons in front of the plane go into the left branch (node) and all the polygons behind the plane go into the right branch (node).
Traversing each branch (node) in turn, a polygon is chosen closest to the middle of the remaining branch (node) scenery and another plane slices the branch (node) in two.
The traversal continues until the entire scenery has been sliced up into left/ right (or up/ down) branches (nodes) and they all end up at the leaves (nodes) which store the actual polygon geometry.
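A toy Python sketch of the splitting idea, with line segments standing in for polygons and the splitter just taken as the first segment ( a real builder picks a better splitter and also splits anything straddling the plane ):
    # Each "polygon" is a 2D line segment; the rest are classified as in
    # front of or behind the splitter's infinite line by their midpoints.
    def side_of(splitter, point):
        (x1, y1), (x2, y2) = splitter
        px, py = point
        return (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)   # sign = which side

    def midpoint(seg):
        (x1, y1), (x2, y2) = seg
        return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

    def build_bsp(segments):
        if not segments:
            return None
        splitter, rest = segments[0], segments[1:]
        front = [s for s in rest if side_of(splitter, midpoint(s)) >= 0]
        back = [s for s in rest if side_of(splitter, midpoint(s)) < 0]
        return {"splitter": splitter,
                "front": build_bsp(front),
                "back": build_bsp(back)}

    scene = [((0, 0), (0, 2)), ((1, 1), (2, 1)), ((-1, 1), (-2, 2))]
    print(build_bsp(scene))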
If we use the messages (transactions) as the equivalent of the polygon geometry then we could have a bunch of messages (transactions) in the leaf nodes at the bottom of a tree-like structure inside a data chunk.
Instead of a group of triangle vertices ( polygon geometry ) there would be a single message (transaction).
But how to connect them all up ?
A BSP tree is linked up by having a parent node pointing to the two child nodes, but that’s in memory.
The BSP file that’s stored on a disc drive can be easily modified ( easy as in it’s possible instead of impossible ).
The messages (transactions) within a chunk cannot be allowed to be changed.
What if, instead of memory pointers or offsets pointing parents to children, we use one of those crypto hashing functions ?
The bottom-most leaf nodes could use data specifically from their message (transaction) to generate a node hash, right ?
Parent branch nodes could create a hash from the hashes of their two children.
This would create a tree-like structure within a data chunk where the topmost parent hash could be included within the data chunk’s proof-of-work header.
This would allow all the messages (transactions) to be locked into a tree that doesn’t allow them to be modified, because all parent node hashes would have to be recalculated and the tree’s root hash would be different from the original generated hash.
And that would mean that the entire proof-of-work hash value would be changed.
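A small Python sketch of that hash-tree idea, with SHA-256 as a stand-in and made-up transactions, just to show that changing any message changes the root:
    # Leaves hash the messages (transactions), parents hash their two
    # children joined together, and the single root hash is what goes into
    # the chunk's proof-of-work header.
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def tree_root(transactions):
        level = [h(tx) for tx in transactions]
        while len(level) > 1:
            if len(level) % 2 == 1:      # odd count: duplicate the last hash
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    txs = [b"mint to the node that did the work", b"A pays B 10", b"B pays C 3"]
    root = tree_root(txs)
    print(root.hex())

    # Change any message (transaction) and the root - and therefore the
    # proof-of-work done over the header containing it - changes too.
    assert tree_root([b"mint to the node that did the work", b"A pays B 99", b"B pays C 3"]) != root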
The same mechanism used to transfer the transaction data around the network would also be used to send the chunks of data.
If a network node received a changed dataChunk and compared it with one they already held then they’d notice the proof-of-work is different and would know someone was attempting to modify the data.
Bloody [redacted] ! I think this might actually work.
I email (2) to inform him that I was again making progress on the issue.
I explained the idea of having a simplified BSP tree to store the messages (transactions) into a dataChunk and have them all hashed together into a tree with the proof-of-work plus parent hash at the top.
He said, “If I change the transaction stuff to use this method I’m going to have to throw out half my white paper and a third of my code”.
“Well, “ I replied. “You can keep using your current transaction stuff if you want. It can never work in a no-trust environment but if that makes you happy then stay with it. For me - I’m going to take the red pill and continue down this path and see where it gets me. I’m also working on solving the Byzantine Generals Dilemma.”
“Ok. ok”, he said. “I’ll go with what you’ve come up with. But what are you stuffing about with the Byzantine problem ? It’s an impossible crypto puzzle and has nothing to do with electronic cash.”
“It has everything to do with an actual working electronic cash system”, I said. “If it can be solved then we could use a peer-to-peer network for transferring all the data about the place ! Kinda like Napster.”
“Didn’t Napster get shut down because it used a central server ?”, (2) retorted.
“What’s another peer-to-peer network ? IRC ? Tor ? BitTorrent ?”
“I think we can use IRC to hold the initial node addresses until such time as the network is big enough for large permanent nodes to appear”, (2) suggested.
(2) asked, “What’s to stop nodes from sending different dataChunks to other nodes ? If they’re just stacked on top of one-another then they can be swapped in and out at any time. That’s why a third party server is needed for setting the official time on the network for the transactions. Someone could create different transactions and change the time to whatever they want if they can use whatever time they choose.”
I said I’ll think on it some more.
A Kronos Stamp Server
If a third party cannot be used for a time stamp server then we’d have to reevaluate what is meant by time in a computer network.
What if how people think about time is actually wrong and everyone is assuming it to be something that it really isn’t ?
If you hold one fist in front of you to represent time, you can call it ‘now’ time.
Now Time
If you hold another fist after the first fist you can call it ‘after now’ time.
After Now Time
If you hold another fist before the first fist you can call it ‘before now’ time.
Before Now Time
What we’re actually looking at is a chronological order stamp. The actual time itself is pretty much irrelevant except for when comparing two things in their chronological order.
It should work whether the ‘now’ time is the time shown on your clock/watch right now, or on a date two hundred years from now, or 1253BC ( Tuesday ).
The before/ now/ after can be adjusted accordingly:
after ( Wednesday )
now ( 1253BC Tuesday )
before ( Monday )
And if the time value used is the time shown on your clock, is it the same as the time value shown on your watch ? On the microwave ? DVD player ? Computer ? Phone ? You may find that all the time pieces inside your own home vary by a few seconds or even a few minutes !
In an office almost every single person has a timepiece that has a different time to everyone else - even if it’s only different by a few milliseconds.
Does that mean as you walk from your kitchen ( showing 2:02pm on the wall ) into the lounge ( showing 2:01 on the DVD player ) that you’ve just entered a time portal and been magically transported back in time by a minute ?
Of course not. They’re all equally valid time values that humans have made up to be roughly synchronised with one-another.
All that really matters is the range of valid time values used to indicate “This is Now”, “This is Next” or “This was Before”.
If the network nodes all agree on what range of time values should be valid to be “now” or “near now” then each node could use its own time value in any data messages (transactions or dataChunks) and no third party timestamp server would be required.
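A rough Python sketch of that “now or near now” rule - the two-hour window is a made-up illustrative number, not anything official:
    # Accept another node's stamp only if it is not too far ahead of our own
    # clock and keeps chronological order with the previous chunk.
    import time

    MAX_FUTURE_DRIFT = 2 * 60 * 60       # seconds a stamp may run ahead of us

    def stamp_is_acceptable(stamp, previous_stamp, local_clock=None):
        now = time.time() if local_clock is None else local_clock
        if stamp > now + MAX_FUTURE_DRIFT:
            return False                 # too far into "after now"
        if stamp <= previous_stamp:
            return False                 # would break the chronological order
        return True

    print(stamp_is_acceptable(time.time() + 60, time.time() - 600))        # True
    print(stamp_is_acceptable(time.time() + 10 * 60 * 60, time.time()))    # False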
I email (2) and let him know the time-stamp server issue has been resolved by having the nodes use a Kronos-Stamp.
“What the [redacted] is a ‘Kronos-Stamp’ ? ”, (2) asked.
I give him the explanation I gave to you ( the Reader ) above.
“But what’s this ‘Kronos’ word mean ?”, (2) asked.
“It’s short for ‘Chronological Order’. It’s a Chronological Order Stamp. We don’t need a Time-Stamp any more,” I replied.
“But what’s with the ‘K’ ?”
“To annoy all those folks who’d rather get furious about misspelt words than try and understand the concept that’s being explained. ”
“Well, the crypto community won’t like it spelt like that. We’re going to have to call it a Time-Stamp server because that’s what they understand,” (2) said.
I said, “Time-Stamps are for systems using third party servers. Chronological Order Stamps are for peer-to-peer networks.”
“Ok,” (2) said. “We can use this time thing for making sure the dataChunks are in a chronological order but what stops someone from just changing the time of their computer to be a little earlier than someone else and having their version of the data accepted by everyone else?”
I said I’ll think on it some more.
A Chain of Data Chunks
On another project I was rereading some information about rendering graphical data.
In 3D graphics triangles are used to create any object you see onscreen.
Example of Triangle types:
Triangle Types
Each numbered dot represents a vertex.
The data for the vertices are placed into arrays called buffers.
They’re just a long list of data points which are loaded onto a graphics card and told to be drawn.
Triangle Strip
A triangle strip is a strip of triangles which share the data points from the previous triangle.
Each triangle in the strip is drawn alternating between clockwise/counter-clockwise (indicated by the red and green arrows)
The very first triangle must have all of its vertices added (all three vertices 1,2,3)
Every other triangle in the strip only has to add one more vertex and reuse the previous two vertices.
The second triangle just adds the data for vertex (4) and reuses vertices 2 and 3 that are already embedded inside the strip.
This makes the strip incredibly compact in size for the data it’s meant to represent, plus it locks each triangle inside the strip so the triangles cannot be accidentally used elsewhere.
If the triangles were wanted in a different drawing order then an entirely new triangle strip would have to be created.
A key side effect is that a triangle strip can be set to start drawing at any vertex (except vertices 2 and 3) and the entire strip from that data point onwards will be drawn.
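A small Python sketch of how a strip’s shared vertices expand back into triangles, with the winding flipped on every second one:
    # After the first triangle every new vertex reuses the previous two.
    def strip_to_triangles(vertices):
        triangles = []
        for i in range(len(vertices) - 2):
            a, b, c = vertices[i], vertices[i + 1], vertices[i + 2]
            if i % 2 == 1:
                a, b = b, a              # flip so all triangles face the same way
            triangles.append((a, b, c))
        return triangles

    strip = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0)]   # 5 vertices -> 3 triangles
    for tri in strip_to_triangles(strip):
        print(tri)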
I was staring at this for a long time thinking “This could be used for the electronic cash project somehow, but how exactly ?”
I kept going through the explanation for the triangle strip again and again trying to understand what I was seeing.
Then it dawned on me.
The triangles were the data in a triangle strip.
The chunks were the data in the electronic cash project.
If the triangles were actually the dataChunks then that means the vertices were the proof-of-work header, with the embedded root hash for the messages/ transactions.
The lines in the triangle strip represented the reuse of previous vertex data.
So that means I could reuse the proof-of-work hash from a previous dataChunk and embed that into the next proof-of-work as well !
And just like a triangle strip the dataChunks couldn’t be moved elsewhere unless all the surrounding proof-of-work hashes were redone again.
It reinforces the Kronos Stamp by embedding the previous proof-of-work hash into it, so we know what came before now and what came next after that.
If the entire network was using their cpu power to generate these proof-of-work hashes then a hostile actor would need half the processing power to get a fifty percent chance of generating the proof-of-work hash for a block and modifying the data.
However, every second block on average would still be generated by an opposing actor, so whatever the fifty-percent hostile actor was attempting to do wouldn’t last for very long.
DataChunk Chain
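A small Python sketch of that chaining idea, with SHA-256 as a stand-in, a made-up difficulty, and a made-up ‘previous hash’ for the very first chunk:
    # Each header embeds the previous header's proof-of-work hash, so
    # editing or reordering any chunk forces all later proof-of-work to be
    # redone. The very first chunk just uses a made-up previous hash.
    import hashlib

    DIFFICULTY = 4                       # leading hex zeros, illustrative only

    def mine_chunk(prev_hash: str, payload: bytes):
        nonce = 0
        while True:
            header = ("%s|%d|%s" % (prev_hash, nonce,
                                    hashlib.sha256(payload).hexdigest())).encode()
            digest = hashlib.sha256(header).hexdigest()
            if digest.startswith("0" * DIFFICULTY):
                return {"prev": prev_hash, "nonce": nonce, "hash": digest}
            nonce += 1

    chain = [mine_chunk("f" * 64, b"the Genesis dataChunk")]   # made-up previous hash
    chain.append(mine_chunk(chain[-1]["hash"], b"A pays B 10"))
    chain.append(mine_chunk(chain[-1]["hash"], b"B pays C 3"))

    for chunk in chain:
        print(chunk["hash"], "<-", chunk["prev"][:8])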
I needed to have some of the math for this looked at to see if I was on the right track.
I email (2) and let him know about this idea of hooking together the dataChunks like a chain so that they couldn’t be modified without redoing the proof-of-work hashing.
He liked the idea of a chain.
I said, “You see how all the appended dataChunk headers reuse the hash from the previous dataChunk header ? Take a look at the very first dataChunk.”
“What’s so special about that ?”, (2) asks.
“Well,” I say. “The first dataChunk header hasn’t got any previous hashes it can use, so in the beginning it will have to use a made up ‘previous hash’ in its header. In the beginning it has to use a manually created hash. In the beginning… get it?”
“What ?”, (2) asks.
“The very first data chunk is the Genesis dataChunk. In the beginning there is the Genesis dataChunk”, I reply.
He said he likes that idea very much as he’d just started being involved in a church in the past year or so.
I ask him to get the other cryptos he’s in contact with to play around with the numbers and see if this would work.
(2) asked, “Hang on. How would this solve the double-spending problem ?”
I'll stop this story here for now and post a follow-up depending upon its reception.
I guess I've found reddit's posting character limit. 40,000 characters. There was going to be another 10,000 characters in this post however that will have to wait till next time.
Bitcoin Origins - part 3
This is a continuation from the previous reddit post:
Bitcoin Origins
Cheers,
Phil
(Scronty)
vu.hn
submitted by Scronty to Bitcoin [link] [comments]

Bitcoin Origins - part 2

Afternoon, All.
This is a continuation from the previous reddit post:
Bitcoin Origins
The following are a few notes I've been making on the original development of the tech behind Bitcoin.
This is still in early draft form so expect the layout and flow to be cleaned up over time.
Also be aware that the initial release of the Bitcoin white paper and code was a cut-down version of the earlier ideas.
This means that some of the ideas below will not correspond to what would end up being made public.
As I'm paraphrasing dim memories, some of the sections are out of order, with some things occurring earlier or later than described. As I recall more I'll be updating this story elsewhere and will upload it when it appears more substantial.
As noted on the first post ( link supplied above ):
There is no verification of truth here.
There is absolutely no evidential proof that I had any part in the project.
Take this as just a fictional story if you wish.
Bitcoin Logo
BitCoin Origins
Six Months In A Leaky Boat
continued ...
“You’re saying that we can use this proof-of-work thing to inject electronic cash into the network and have it tied to fiat currencies, but how would the network know what the local fiat currency is to figure out the correct fiat-currency-to-electronic-cash exchange rate ?”, (2) asked.
“Maybe we could have a server that keeps a record of what the various electricity companies charge and have the software get the values from there ?”, I suggested. “Some of these new mobile phones, the smart phones, the cellular network phones in folks’ pockets, have GPS chips incorporated into them, right ? And everyone has them or will be getting them as they become more popular. This means everyone will have a device on them which will allow the software to include a GPS location so that the network knows which exchange rate to use for that particular minted cash.”
“But how will the network know that the GPS coordinates haven’t been changed and set to another location ?”, (2) asked. “Wouldn’t that mean relying on a trusted third party again ? I thought you said we have to get away from that ? If we cannot trust a single computer for minting cash into the network then maybe we shouldn’t trust any at all ?”
“Uhh… dunno,” I replied. “I’ll get back to that later.”
“Ok, ” (2) said. “How are we going to have the transactions sent to other people on the network ? All the other white papers are expecting people to connect directly to one of the trusted computers to purchase the electronic cash and to transfer it to someone else through them. If we’re not going to use a trusted computer for this and will have the proof-of-work generate the cash, then how do people receive or pay the cash ? Also: How would the network trust that the cash is valid if no computer is being used for time-stamping and validating the cash ?”
I told him I’d have to think about it.
Multiple ideas were given and discarded. He consulted with (3) about every possible solution and every one was a failure.
They either resulted in having to rely on at least one server to hook everything together or would break if multiple transaction messages were sent at the same time to different computers.
After a week or so of this I’d finally burnt myself out and decided that it’s quite possible that everyone else was correct when they said that you couldn’t solve double-spending in a digital world without depending upon a trusted third party.
I stopped emailing (2) at that point, hoping it’d all go away.
After a week he emailed me asking if I’d come up with another solution for testing.
I told him that I don’t think there is a solution and maybe he should just use part of what he had in his original white paper and rely on a trusted third party like everyone else.
He said something along the lines of “Like [redacted] I will ! You’ve taken me down this path of not trusting a single computer and that’s what I want. No-one's done that before and if we break it, it will probably change everything ! ”
I told him I’m taking a break from it all for a while.
Another week passes and he emails me again.
He said, “How are you feeling ? Sorry to be so harsh on you but I really need this to work. I’ll leave you be if that’s what you want. Just let me know when you’re able to continue.”
Another week goes by and whenever I begin to think of the problem I just say to myself “To [redacted] with him and his electronic cash problem.”
For comfort I turn to perusing through some of my old Win32 Asm proggys (I called them “proggys” because I thought of them as small, incomplete computer programs - kind of like examples and tutorials).
I also began reminiscing about the Amiga 500 days and the proggys I made back then (late 1980’s through to mid 1990’s).
Knowing that one of the most difficult issues with electronic cash revolved around the networking architecture and how data would be propagated by the networked computers I began going through some of the discussions I had back in 2005 and 2006 with someone who was attempting to make a tank game.
I explained to him the main difference between TCP and UDP ( Transmission Control Protocol and User Datagram Protocol ).
If you need data packages to arrive in a particular order with confirmation that they’ve arrived then you’d use TCP.
If you need velocity of data packets you can throw all the protocol error checking out and use UDP.
That’s one of the reasons great online multi-player games use UDP. It reduces the latency of the data being transmitted around the network.
The main difficulty is in building the gaming system in such a way that the data the servers and clients transmit and receive still works when some data packets never arrive.
TCP guarantees delivery if the network is functioning, while with UDP you do not know if a particular packet ever arrived, or whether packets arrived in a different order to transmission due to separate packets traversing the internet via different pathways.
Many online games were built for single-player first and the multi-player code would be chucked into the codebase near the end of development.
This would mean that all of the game code objects and classes were made to use known values at any particular time and could not work in a UDP environment without re-architecting the entire code base from scratch.
You’d find many of the games that also included multi-player gameplay options ended up using TCP for the network communications, and this made all of these games slow over the network, with high latency and unplayable lag, as the gameplay would be faster than the network data packets telling your computer where your opponents were located.
The various tanks games around 2005 were built as above. I convinced this person to focus on the multi-player aspect of the game because he could always add in single-player later on.
Multiple players would have to drive and fire tanks around a field while being updated continuously about the complete state of the network.
This is usually accomplished by having a single server that receives all of the current data from all the player clients and dishes out the official game state back to all of those player clients so that everyone knows who went where, who fired at what and who has been hit.
However, even when using UDP, there is a bottleneck in the network: the server itself is only able to process a peak number of connections and so much data throughput every second. It could only scale so high.
We had talked about different ways to improve this by possibly having relay servers on some of the players’ computers, or having a more peer-to-peer like structure, so that each player client only had to get the latest data from its nearest neighbours in the network and only transmit to those peers - a fully server-less multi-player game could be created that way.
We couldn’t figure out how the data could be moved about without someone creating a hack that could change the data packages in their favour.
In the end he went with using a central server with both TCP and UDP depending upon what data packages were needed to be sent - general gameplay data (tank movements) via UDP and server state (for confirming who hit what) via TCP.
If a peer-to-peer network was to be used for electronic cash then to be scalable the data packages must be able to be transmitted with as high a velocity as possible. It must work with the majority of transmissions using UDP.
If two-way communication is required then a return ip/port can be included within a UDP data package or a TCP connection could be used.
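A small Python sketch of that UDP pattern - the datagram arrives with the sender’s ip/port attached, so the receiver can reply without any connection. The port number is made up:
    # No connection, no delivery guarantee, but every packet arrives
    # together with the sender's ip/port, so a reply can go straight back.
    import socket
    import threading
    import time

    ADDR = ("127.0.0.1", 47001)          # made-up local test address

    def listener():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(ADDR)
        data, return_addr = sock.recvfrom(4096)      # sender's ip/port comes with the data
        sock.sendto(b"got: " + data, return_addr)    # optional reply, still connectionless

    threading.Thread(target=listener, daemon=True).start()
    time.sleep(0.2)                      # crude wait for the listener to bind

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    sock.sendto(b"tank at (10, 4) heading north", ADDR)   # fire and forget
    try:
        reply, _ = sock.recvfrom(4096)
        print(reply)
    except socket.timeout:
        print("no reply - that's UDP for you")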
I had also read and reread this thing that has been going around the crypto community for ages called the Byzantine Generals Dilemma (or worded in a similar way).
It’s supposed to be impossible to solve and at least a couple of well-known academics and crypto folks had “proven” it was impossible to solve only a few years previously. They had pretty much staked their reputations on the fact that it was unsolvable.
I thought “Wouldn’t it be absolutely hilarious if the solution to this double-spending problem is also the solution to the impossible Byzantine Generals Dilemma and could be found using ideas from the Amiga days and 3D programming and uses multi-player gaming techniques ? That would annoy the [redacted] out of the crypto community and take those elitists down a peg or two !”
(This is where you’d see the screen go all watery-wavy as the scene morphs to a time in the past when I was a moderator of the Win32 Asm community)
The assembly community and the crypto community share a lot in common.
They’re made up of some of the most brilliant folks in the computing industry where huge egos do battle against one-another.
You’d also find folks in one community existing within the other.
Both communities are made up of both light and dark actors.
The light actors are those who are very public.
They are academics, researchers, security professionals, and so on.
The dark actors are … (and that’s all I’ll say about them).
Except to say that the light crypto actors are usually doing work to undo what the dark assembly actors are doing.
It’s one [redacted] of a game !
To have a message board that was able to accommodate all actors required a few tough rules and stiff execution of them if the forum was to continue to exist.
Many of the other assembly boards were being snuffed out by government actors forcing the hosting service to shut them down.
This was mainly due to the assembly forums’ insistence on allowing threads to exist which showed exactly how to break and crack various websites/ networks/ software/ etc.
Whenever one of these sites was shut down the members would disperse to the various remaining assembly boards.
So we received an influx of new members every few months whenever their previous venue went up in smoke.
However they never learned from the experience ( or, at least, some of them never learned ) and they would continue to openly chat about dark subjects on our board, which put our board in danger as well.
The moderators had to be strong but fair against these new-comers, especially knowing that they (the moderators) could be actively attacked (digitally) at any time.
Occasionally one of these new members would decide to DDOS ( Distributed Denial Of Service ) us, however they apparently forgot what message board they were attempting to DDOS, and it always ended very badly for them.
We would also occasionally get someone with quite a bit of knowledge in various subjects - some of it very rare and hard-to-come-by. It would be terrible if that member left and took their knowledge with them.
They would complain that there were too many noobs asking questions on the message board and it would be better if there was a higher level of knowledge and experience needed before the noobs could enter the message board or post a question.
Once I told one of these members, “Ok then. Let’s say that thing you’ve been talking about for the past two weeks, and calling everyone else a noob for not understanding it, is the knowledge limit. I know that you only first read about it two and a half weeks ago. Let’s say I make that the limit and backdate it to three weeks ago and kick your butt out of this community ?”
“That’s not very fair”, he protested.
I told him, “None of us know where the next genius is coming from. The main members of this community, the ones that input more than everyone else, have come from incredibly varied environments. Some with only a few weeks knowledge are adding more to the community every week compared to members who have been with us for years. One of the members you’ve dissed in the past couple of weeks could in turn create the next piece of software that all of us use. We don’t know that. What we need to do is have a community that is absolutely inclusive for every single person on the planet no matter where they’ve come from, what their wealth is, what their nation state does, and to keep our elitism in check.”
“Ok, fair enough, I’m sorry, please don’t kick me out.” was the usual result.
These were very intelligent folks, however they had to be reminded that we are a single species moving through time and space together as one.
(This is where you’d see the screen go all watery-wavy as the scene morphs back to me figuring out this double-spending problem)
As you may tell, I don’t tolerate elitist attitudes very well.
Which also helped when I turned towards the elitist attitudes I read in some of these academic papers and crypto white papers ( some of which were more like notes than white papers ) and messages on the crypto forums and mailing lists.
“ ‘It’s impossible to solve the Byzantine Generals Problem’ they say ? Let’s see about that !”
Byzantine Generals Dilemma
The problem is written a little bit differently depending upon where you read it.
An occasional academic may be more well-read than others and becomes the “official” wording used by many others.
I’ll paraphrase it a wee bit just so you get a general idea of the problem (pun intended).
We go back to the time of the city-states.
This is before the notion of sovereign states - there’s just a bunch of individual city-states that control the surrounding countryside.
Every so often a bunch of these city-states would get together and form something called an empire.
Alliances would change and friends would become enemies and enemies friends on a month-to-month and year-to-year basis.
To expand the empire the bunch of city-states would send armies controlled by generals to take over an adjacent city-state.
These city-states are huge (for their time) walled cities with armies in strong fortifications.
Let’s say there are six generals from six empire city-states that surround an adjacent city-state - all generals and their armies are equidistant from each other.
They cannot trust one another because at any moment one of them may become an enemy. Or they could be an enemy pretending to be a friend.
Due to the defensive forces of the defending city-state, the six generals know that they could take the city if every one of them attacked at the same time from around the city.
But if only a few attacked and the others retreated then the attackers would be wiped out and the surviving city-states, with their generals and their armies intact, would end up over-powering and enslaving their previously friendly city-states.
No-one could trust any other.
(This has massive parallels with modern day sovereign nations and their playing of the game with weapons, armies/air forces/navies, economics, currency, trade agreements, banks, education, health, wealth, and so on)
The generals have to send a message to the other generals telling them if they’re going to attack or retreat.
The problem is that a general could send a message to the general to his left saying that he’ll attack and send a second message to the general to his right that he will retreat.
Some possible solutions said that there should be two lieutenants to receive the message from the general and that they could check each other’s messages to confirm that they are indeed identical before passing the messages on to the left and right messengers.
However the messengers in turn could change the message from “attack” to “retreat” or vice versa or not deliver the message at all.
Plus the generals, once a message has been sent out as “attack” could turn around and retreat, or vice versa.
I thought to myself, “I bet the folks who thought up this problem are feeling pretty damn smug about themselves.”
However I was a moderator of an assembly community.
I’d translated the DirectX8 C++ COM headers into their x86 assembly equivalent (using techniques built by others far smarter than me, and with help for some files when DX8.1 was translated), built a PIC micro controller assembler in x86 assembly language, and many other things.
And because I've done six impossible things this morning, why not round it off with creating a solution to the Byzantine Generals Dilemma !
Elitist ego ? What elitist ego ? They’re all amateurs !
Let us begin:
submitted by Scronty to btc [link] [comments]
