Testing

  • Scalability indicators
    • Number of hashes crawled per second per peer versus number of peers
    • Number of downloaded bytes per second versus number of peers (a sketch for collecting both indicators follows)
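
  A minimal sketch of how the scalability sweep could be driven, assuming
  hypothetical helpers (spawn_peers, teardown_peers, total_hashes_crawled)
  that would have to be wired to the real deployment and metrics API:

      import csv
      import time

      def spawn_peers(n):
          """Hypothetical: start n crawler peers and return handles to them."""
          raise NotImplementedError("wire this to the deployment tooling")

      def teardown_peers(peers):
          """Hypothetical: stop the given peers."""
          raise NotImplementedError

      def total_hashes_crawled(peers):
          """Hypothetical: hashes crawled so far, summed over all peers."""
          raise NotImplementedError

      def crawl_rate_per_peer(peers, duration_s=60.0):
          """Hashes crawled per second per peer over a fixed window."""
          start = total_hashes_crawled(peers)
          time.sleep(duration_s)
          end = total_hashes_crawled(peers)
          return (end - start) / duration_s / len(peers)

      def scalability_sweep(peer_counts=(2, 4, 8, 16, 32),
                            out_path="scalability.csv"):
          """Record hashes/s/peer for each peer count as one CSV row."""
          with open(out_path, "w", newline="") as f:
              writer = csv.writer(f)
              writer.writerow(["peers", "hashes_per_sec_per_peer"])
              for n in peer_counts:
                  peers = spawn_peers(n)
                  try:
                      writer.writerow([n, crawl_rate_per_peer(peers)])
                  finally:
                      teardown_peers(peers)

  The same loop yields the bytes-per-second indicator by swapping in a
  byte-count metric helper.
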
  • Performance indicators
    • Number of hashes crawled per second versus different CPU loads/platforms
    • Throughput of a peer versus the number of crawl job queues, per platform (differentiated using agent attributes), to determine the optimal number of crawl job queues; a sweep sketch follows.
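
  The queue sweep could be sketched as follows, with configure_agent and
  run_crawl as hypothetical stand-ins for the real agent API; the platform
  attributes tag each result so runs on different platforms can be compared:

      import platform

      def configure_agent(crawl_job_queues):
          """Hypothetical: start an agent with this many crawl job queues."""
          raise NotImplementedError

      def run_crawl(agent, duration_s):
          """Hypothetical: crawl for duration_s seconds, return bytes downloaded."""
          raise NotImplementedError

      def agent_attributes():
          """Attributes used to differentiate results per platform."""
          return {"platform": platform.platform(),
                  "machine": platform.machine()}

      def queue_sweep(queue_counts=(1, 2, 4, 8, 16), duration_s=60.0):
          """Measure throughput for each queue count; return the best config."""
          attrs = agent_attributes()
          results = []
          for n in queue_counts:
              agent = configure_agent(crawl_job_queues=n)
              throughput = run_crawl(agent, duration_s) / duration_s
              results.append({**attrs, "queues": n,
                              "throughput_Bps": throughput})
          return max(results, key=lambda r: r["throughput_Bps"]), results
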
  • Node failure
    • Automating this may require adding data entry points to the API that are only used for testing.
    • Add test data, then check that it has been added and has propagated throughout the neighbourhood.
    • Take an agent offline (confirming that it has actually gone down and is inaccessible) and verify that all of the data is still accessible.
    • Pull data manually from each data store on the agent (checking that no errors result), and verify that the data is still retrievable from the system.
    • Bring the downed node back online; the data that belongs on this node should begin to flow back into it.
    • After a while, pull the data from the agent to check that the data sent to its neighbourhood while it was down has been stored correctly. (A sketch automating this procedure follows.)
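
  The whole node-failure procedure could be automated along these lines;
  every helper below (put_test_data, has_copy, fetch, fetch_from_store,
  take_offline, bring_online, is_reachable, neighbourhood) is a
  hypothetical test-only hook, not an existing API:

      import time

      # Hypothetical test-only hooks; all of these would have to be
      # added to the API before this test could run.
      def put_test_data(cluster): raise NotImplementedError
      def has_copy(peer, key): raise NotImplementedError
      def fetch(cluster, key): raise NotImplementedError
      def fetch_from_store(peer, key): raise NotImplementedError
      def take_offline(peer): raise NotImplementedError
      def bring_online(peer): raise NotImplementedError
      def is_reachable(peer): raise NotImplementedError
      def neighbourhood(peer): raise NotImplementedError

      def wait_until(predicate, timeout_s=120.0, poll_s=2.0):
          """Poll until predicate() holds or the timeout expires."""
          deadline = time.monotonic() + timeout_s
          while time.monotonic() < deadline:
              if predicate():
                  return True
              time.sleep(poll_s)
          return False

      def test_node_failure(cluster, victim):
          # Add test data and wait for it to propagate through the
          # victim's neighbourhood.
          key, value = put_test_data(cluster)
          assert wait_until(
              lambda: all(has_copy(p, key) for p in neighbourhood(victim)))

          # Take the agent offline and confirm it is actually unreachable.
          take_offline(victim)
          assert not is_reachable(victim)

          # The data must still be retrievable from the system as a whole,
          # and pulling directly from each remaining store must not error.
          assert fetch(cluster, key) == value
          for peer in neighbourhood(victim):
              fetch_from_store(peer, key)

          # Bring the node back online and wait for its data to flow back.
          bring_online(victim)
          assert wait_until(lambda: has_copy(victim, key), timeout_s=600.0)
          assert fetch(cluster, key) == value
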
  • Predictive analysis
    • Measure the false-negative and false-positive rates of the various classifiers on unlabelled traffic data (an evaluation sketch follows)
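
  Measuring these rates presumably requires hand-labelling a sample of the
  otherwise unlabelled traffic first. A minimal sketch, assuming a
  classifier with a single-sample predict method returning a boolean (both
  the interface and the labelled-sample format are assumptions):

      def error_rates(classifier, labelled_sample):
          """labelled_sample: iterable of (features, true_label) pairs with
          boolean labels; returns false-positive and false-negative rates."""
          tp = tn = fp = fn = 0
          for features, truth in labelled_sample:
              pred = classifier.predict(features)  # assumed interface
              if pred and truth:
                  tp += 1
              elif pred and not truth:
                  fp += 1
              elif not pred and truth:
                  fn += 1
              else:
                  tn += 1
          return {
              "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
              "false_negative_rate": fn / (fn + tp) if fn + tp else 0.0,
          }
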
 
 