Short and simple answer to my initial problem regarding downloading the blockchain: You might need better hardware! 🙂
In summary, bitcoin-qt runs fine out of the box.
So don’t fumble around with the settings. Unless you really need to.
Pruning saves disk space but increases disk I/O.
Increasing dbcache can help, but not so much when pruning.
Both can be set in the options window, so there is no need to edit bitcoin.conf.
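For completeness, the equivalent bitcoin.conf entries would look roughly like this; the values are illustrative examples, not recommendations:

```ini
# Keep only ~2 GB of block files on disk (the minimum allowed is 550 MiB).
prune=2000
# Database/UTXO cache size in MiB (default is 450).
dbcache=1000
```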
Plus, … I haven’t mentioned it lately, have I? … don’t use USB-Sticks! 🙂
With that out of the way, let’s dive into my “real” question.
If we look at it from a queueing-theory point of view, we have three fields to consider.
- Input: the nodes you are feasting on.
- Processing: where your hardware and configuration come into play.
- Output: writing to disk, in our case.
Input side
At first glance it looked like I had problems connecting to responsive nodes.
Looking deeper into it, I found the sending side was never really an issue.
Averaging the sending behaviour of nodes over a long period, I can distinguish three main types:
- A few send many MB.
- Some send a few KB.
- A lot send just ~150 bytes and then drop out.
Over time responsive nodes lower their data rate.
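To make the three types concrete, here is a toy classifier over per-peer byte totals. The bucket names and thresholds are my own invention, not anything from bitcoin-qt:

```python
def classify_peer(total_bytes: int) -> str:
    """Rough buckets for how much a peer sent over the whole download."""
    if total_bytes >= 1_000_000:   # a few send many MB
        return "bulk sender"
    if total_bytes > 1_000:        # some send a few KB
        return "trickler"
    return "hello-only"            # ~150 bytes, then gone

peers = {"A": 42_000_000, "B": 3_500, "C": 150}
print({name: classify_peer(b) for name, b in peers.items()})
# → {'A': 'bulk sender', 'B': 'trickler', 'C': 'hello-only'}
```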
Data usually arrives in bursts: all nodes send many MB/s in parallel and then stop for several minutes, while my CPU and disk stay constantly busy. So it looks like they are all filling one input queue.
This is normal behaviour for queued systems; they pump. That’s why you have buffers.
Looks like increasing buffers will not help on my machine, since there is already plenty of headroom.
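The pumping behaviour can be illustrated with a toy single-queue model: bursty arrivals, a constant-rate consumer, and a buffer that absorbs the difference. All numbers here are invented for illustration:

```python
def simulate(bursts, drain_per_tick):
    """Track peak queue depth when bursty input meets a constant-rate consumer."""
    depth, peak = 0, 0
    for arriving in bursts:               # MB arriving this tick
        depth += arriving                 # burst lands in the buffer
        depth = max(0, depth - drain_per_tick)  # disk drains at a fixed rate
        peak = max(peak, depth)
    return peak

# Several MB at once, then silence for a while, repeated.
bursts = [8, 0, 0, 0, 8, 0, 0, 0]         # MB per tick
print(simulate(bursts, drain_per_tick=2))  # → 6 (MB of peak backlog)
```

As long as the peak backlog stays well below the buffer size, bigger buffers change nothing, which matches what I see on my machine.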
Sending at high data rates and slowing down over time makes sense for the nodes, as it spreads the load across other nodes. From a client’s view this is preferable behaviour too.
Still, my view as a user is that bitcoin-qt could improve its dropping strategy: don’t bother nodes that have already done their fair share, and focus more on nodes that are still very responsive.
Sending a few KB, somewhat erratically and at low data rates, isn’t really helpful.
Since in a free network you can’t tell a node what to do, clients need a strategy to drop those nodes early.
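A dropping strategy along those lines could be as simple as this sketch; the thresholds and the function itself are hypothetical, not bitcoin-qt’s actual logic:

```python
def should_drop(recent_rate_kbs: float, total_kb: float) -> bool:
    """Sketch: release peers that already did their fair share and have
    slowed down, drop low-rate tricklers early, keep fast peers."""
    still_fast = recent_rate_kbs >= 100        # >= 100 KB/s right now
    did_fair_share = total_kb >= 10_000        # delivered ~10 MB already
    if still_fast:
        return False                           # focus on responsive peers
    return did_fair_share or total_kb < 1_000  # slowed veteran or trickler

print(should_drop(recent_rate_kbs=5, total_kb=200))      # trickler → True
print(should_drop(recent_rate_kbs=500, total_kb=50))     # fast newcomer → False
print(should_drop(recent_rate_kbs=5, total_kb=20_000))   # slowed veteran → True
```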
Why do some nodes just show up to say hello, leave 150 bytes behind, and disappear?
Probably some kind of handshake. But do we actually need to hold on to them for a while?
In short: Yes, technically there is room for improvement. But is it worth the effort?
Processing side
I’d say everything is fine here. Neither memory nor CPU is a problem.
Output side
Disk I/O is an issue, at least for me.
As Pieter pointed out, pruning prevents optimal caching.
I’m reluctant to judge this topic without a thorough understanding.
But my first approach would be to reduce the number of files involved.
Many thanks to Pieter and Murch for their quick response. Helped a lot!
Feedback and corrections highly appreciated!