Learn how to build your own modules
For example, a module could first run httpx to find web servers returning a 403 response. Next, it could brute-force these servers for paths using feroxbuster to find any paths that return a 200 status code. You might also include a batching pattern (file-splitter, generate-line-batches, and batch-output) to distribute the feroxbuster execution across multiple machines.
Enable the Module Input Configuration toggle.
Inputs can be required or optional with default values. For this example, make the list of hosts a required input and the wordlist optional. Be sure to give them appropriate names and descriptions.
Find the module in the Modules section or search for it using the search bar.
Drag it into the workflow, connect the input(s) and outputs as needed, and run it.
The batching pattern uses file-splitter, batch-output, and generate-line-batches (or its variant, generate-number-of-batches), as shown in the example workflow and in the documentation.
Here’s a quick reminder of how these nodes work:
- generate-line-batches: This node calculates line ranges to extract from the input file for each iteration based on the batch size you set. For instance, if you set a batch size of 2 for a file with 10 lines, it will create 5 batches of 2 lines each.
- generate-number-of-batches: This node works similarly but allows you to specify the number of batches to create rather than the length of each batch.
- batch-output: This node extracts the actual lines corresponding to the calculated ranges from the input file.
- file-splitter: This node processes each chunk of the file created by batch-output, spinning up new iterations of the connected nodes. These iterations are then distributed among the available machines to run in parallel.

The drawback of this setup is that the batch size is static. If you set up a generate-line-batches node with a batch size of 500 and run it on a file with 1000 lines using 50 machines, the workflow will split the input into only 2 chunks of 500 lines each. This means only 2 machines will be used, while the other 48 sit idle. To fix this, you would need to manually adjust the batch size to a smaller number, like 20, to use all your available machines effectively.
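To make the idle-machine problem concrete, here is a minimal sketch of the range-splitting logic described above. The function name and signature are illustrative, not the actual node's code:

```python
def generate_line_batches(total_lines: int, batch_size: int) -> list[tuple[int, int]]:
    """Return inclusive (start, end) line ranges, one range per batch."""
    return [
        (start, min(start + batch_size - 1, total_lines))
        for start in range(1, total_lines + 1, batch_size)
    ]

# A 1000-line file with a batch size of 500 yields only 2 ranges,
# so at most 2 machines get work regardless of how many are available.
print(generate_line_batches(1000, 500))      # [(1, 500), (501, 1000)]

# Lowering the batch size to 20 yields 50 ranges, enough for 50 machines.
print(len(generate_line_batches(1000, 20)))  # 50
```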
To solve this problem, we’ve introduced a new script, called batch-ranges, that calculates batch sizes automatically. Instead of you manually setting the batch size, the script considers the number of machines and the number of inputs to determine the optimal batch size for you.
By default, batch-ranges calculates the batch size by simply dividing the number of inputs by the number of machines, aiming to distribute the workload evenly. However, this method might not always work perfectly: if there are too few machines or too many inputs, the batch size could become too large, leading to performance issues or out-of-memory errors.
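The default sizing amounts to a ceiling division, sketched below. This is an illustration of the described behavior, not the script's actual internals:

```python
import math

def default_batch_size(num_inputs: int, num_machines: int) -> int:
    """One batch per machine: divide inputs evenly, rounding up."""
    return math.ceil(num_inputs / num_machines)

print(default_batch_size(1000, 50))      # 20 lines per batch, one batch per machine
print(default_batch_size(1_000_000, 3))  # 333334 lines per batch, likely too large
```

The second call shows the failure mode described above: with many inputs and few machines, the computed batch size can grow beyond what downstream tools handle comfortably.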
To handle this, you can set two limits for batch-ranges using Python constants at the start of the file:
- MAX_BATCH_SIZE: The maximum number of lines per batch. Set this number based on what’s safe and manageable for the tools and scripts in your workflow to handle without performance or memory problems.
- MIN_BATCH_SIZE: The minimum number of lines per batch. The default value of 1 is generally safe, but if a node has significant overhead (like pulling large amounts of data from an external source or querying an API that supports batching), you might want to increase this number to make sure each iteration has enough inputs and avoid unnecessary iterations.
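Putting the two limits together, the sizing logic becomes a clamp around the even division. The constant values below are assumed for illustration; tune them to your workflow:

```python
import math

MAX_BATCH_SIZE = 1000  # assumed value; cap based on what your tools can handle
MIN_BATCH_SIZE = 1     # raise this for nodes with significant per-iteration overhead

def clamped_batch_size(num_inputs: int, num_machines: int) -> int:
    """Divide inputs evenly across machines, then clamp to the configured limits."""
    size = math.ceil(num_inputs / num_machines)
    return max(MIN_BATCH_SIZE, min(MAX_BATCH_SIZE, size))

print(clamped_batch_size(1_000_000, 3))  # capped at 1000 instead of 333334
print(clamped_batch_size(10, 50))        # floored at 1 despite more machines than inputs
```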