Here is the code:
<pre>import concurrent.futures as cf
from cryptotools.BTC.HD import check, WORDS

N_THREADS = 192
result = []

def doWork(data):
    for line in data:
        stripped_line = line.strip()
        for word in WORDS:
            mnemonic = stripped_line.format(x=word)
            if check(mnemonic):
                result.append(mnemonic)

with open("input.txt", "r") as m_input:
    lines = m_input.readlines()

m_data = {i: [] for i in range(N_THREADS)}
for n, l in enumerate(lines):
    m_data[n % N_THREADS].append(l)

'''
If you have to trim the number of threads, uncomment these lines:
m_data = {k: v for k, v in m_data.items() if len(v) != 0}
N_THREADS = min(N_THREADS, len(m_data))
if N_THREADS == 0:
    exit()
'''

with cf.ThreadPoolExecutor(max_workers=N_THREADS) as tp:
    for d in m_data:
        tp.submit(doWork, m_data[d])

with open("print.txt", "w") as output:
    for item in result:
        output.write(f"{item}\n")
</pre>
The text file being processed looks something like this, but with 100k-300k lines:
gloom document {x} stomach uncover peasant sock minor decide special roast rural
happy seven {x} gown rally tennis yard patrol confirm actress pledge luggage
tattoo time {x} other horn motor symbol dice update outer fiction sign
govern wire {x} pill valid matter tomato scheme girl garbage action pulp
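To make the scale of the work concrete: each template line expands into one candidate mnemonic per word in the list (a two-word stand-in list is used below; the real BIP-39 list has 2048 words), so 300k lines means on the order of hundreds of millions of `check()` calls.

```python
# One template line expands to one candidate mnemonic per word.
template = "gloom document {x} stomach uncover peasant sock minor decide special roast rural"
words = ["abandon", "ability"]  # stand-in; the real WORDS list has 2048 entries
candidates = [template.format(x=word) for word in words]
print(len(candidates))         # one candidate per word
print(candidates[0].split()[2])  # the substituted word sits in slot {x}
```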
What I have tried:
The code above is reasonably fast, but I need it to run faster, and I have no idea how to apply numba or numpy to these functions.
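One thing worth noting before reaching for numba/numpy: the work here is CPU-bound, and CPython threads all share one GIL, so 192 threads mostly run one at a time. A process pool sidesteps that. Below is a minimal sketch of the same fan-out with `ProcessPoolExecutor`; since `cryptotools` is not available here, `check_stub` is a hypothetical stand-in for the real `check(mnemonic) -> bool`, and the word list is shortened.

```python
import concurrent.futures as cf

# Stand-ins for cryptotools.BTC.HD.WORDS and check (assumptions for this sketch).
WORDS = ["abandon", "ability", "able"]

def check_stub(mnemonic):
    # Hypothetical predicate replacing the real checksum test.
    return mnemonic.endswith("able ok")

def do_work(chunk):
    # Return matches instead of appending to a shared global:
    # processes do not share memory, so a global `result` would stay empty.
    found = []
    for line in chunk:
        template = line.strip()
        for word in WORDS:
            mnemonic = template.format(x=word)
            if check_stub(mnemonic):
                found.append(mnemonic)
    return found

def run(lines, workers=4):
    # Deal lines round-robin into one chunk per worker, then fan out.
    chunks = [lines[i::workers] for i in range(workers)]
    results = []
    with cf.ProcessPoolExecutor(max_workers=workers) as pool:
        for found in pool.map(do_work, chunks):
            results.extend(found)
    return results

if __name__ == "__main__":
    sample = ["gloom {x} ok", "happy {x} ok"]
    print(run(sample))
```

A good rule of thumb is `workers = os.cpu_count()` rather than 192; beyond the core count, extra processes only add scheduling overhead.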