A web-based platform that facilitates homework distribution and management between teachers and parents in schools. The system provides secure role-based access, efficient file management, and streamlined communication channels.
Explore the docs »
View Demo · Report Bug · Request Feature
The Homework Portal is a case study project for Software Metrics that demonstrates the application of various software quality metrics and best practices. This web-based platform facilitates homework distribution and management between teachers and parents in schools, providing secure role-based access, efficient file management, and streamlined communication channels.
To get a local copy up and running, follow these steps:
- Node.js (v14.0.0 or higher)
- PostgreSQL (v12.0 or higher)
- Git
**Windows**

- Clone the repository

  ```bash
  git clone https://github.com/LiamBolt/software-metrics-g8.git
  cd software-metrics-g8
  ```

- Install dependencies

  ```bash
  npm install
  ```

- Install PostgreSQL from postgresql.org
- Create a new database named `homework_portal`
- Copy `.env.example` to `.env` and update the values
- Start the application

  ```bash
  npm start
  ```

**macOS**

- Clone the repository

  ```bash
  git clone https://github.com/LiamBolt/software-metrics-g8.git
  cd software-metrics-g8
  ```

- Install dependencies

  ```bash
  npm install
  ```

- Install PostgreSQL using Homebrew

  ```bash
  brew install postgresql
  brew services start postgresql
  ```

- Create a new database named `homework_portal`
- Copy `.env.example` to `.env` and update the values
- Start the application

  ```bash
  npm start
  ```

**Linux (Debian/Ubuntu)**

- Clone the repository

  ```bash
  git clone https://github.com/LiamBolt/software-metrics-g8.git
  cd software-metrics-g8
  ```

- Install dependencies

  ```bash
  npm install
  ```

- Install PostgreSQL

  ```bash
  sudo apt update
  sudo apt install postgresql postgresql-contrib
  sudo systemctl start postgresql
  ```

- Create a new database named `homework_portal`
- Copy `.env.example` to `.env` and update the values
- Start the application

  ```bash
  npm start
  ```
Create a .env file in the root directory with the following variables:
```
DB_USER=your_postgres_username
DB_PASSWORD=your_postgres_password
DB_HOST=localhost
DB_PORT=5432
DB_DATABASE=homework_portal
SESSION_SECRET=your_session_secret
PORT=3000
```
- Three distinct user roles: Admin, Teachers, and Parents
- Secure authentication using bcrypt password hashing
- Protected routes based on user roles
- Admin dashboard for user management
- Teachers can upload homework assignments (PDF format)
- Parents can download assignments for their children
- Organized by grade level (Primary 1-4) and subjects
- Automatic weekly reset of assignments
- Dedicated sections for each subject:
- Mathematics
- English
- Science
- Social Studies
- Grade-specific resource libraries
- Easy navigation between different subjects and grades
- Session-based authentication
- Password encryption
- Input validation
- Protected file uploads
- Role-based middleware
- PostgreSQL for data persistence
- Efficient query optimization
- Structured tables for:
- User management
- Homework uploads
- Subject resources
- Responsive web design
- Intuitive navigation
- Clear user feedback
- Mobile-friendly layout
This project implements various software metrics to track and enhance code quality. Each team member has focused on specific metrics from different lectures.
Goal-Question-Indicator-Metric (Lecture 3) - @Nysonn and @LiamBolt
We applied the GQ(I)M methodology to the Homework Portal to build a data-driven improvement framework for student engagement and grading efficiency:
- **Identify Business Goals**: Defined the primary goal: improve assignment submission/grading efficiency and user satisfaction.
- **Derive Questions**: Brainstormed key questions: average grading turnaround, student login frequency, on-time submission rate, and feature usage patterns.
- **Form Subgoals**: Mapped questions into subgoals:
  - MG1: Minimize grading turnaround
  - MG2: Increase feedback interaction
  - MG3: Maximize on-time submissions
- **Entities & Attributes**: Selected measurable entities and attributes (e.g., submission timestamps, login events, module usage, due dates).
- **Formalize Measurement Goals**: Converted subgoals into formal MG1–MG3 entries with a clear purpose, focus, viewpoint, and environment.
- **Define Indicators**: Established indicators:
  - I1: Average grading turnaround (hours)
  - I2: Average daily logins per student
  - I3: Feature usage distribution
  - I4: Percentage of on-time submissions
- **Specify Data Elements & Measures**: Mapped each indicator to raw data elements and operational measures (e.g., `graded_timestamp - submission_timestamp` for turnaround in hours); a sketch of this calculation follows the list.
- **Plan Actions**: Outlined ETL and logging tasks: instrument event capture, schedule weekly extracts, and configure reporting queries.
- **Implementation Roadmap**: Created a 4-week timeline with responsibilities:
  - Week 1: Logging instrumentation
  - Weeks 2–3: ETL jobs & initial dashboards
  - Week 4: Stakeholder review & metric refinement
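To make the data-element mapping concrete, here is a minimal sketch of how indicator I1 (average grading turnaround in hours) could be computed from raw submission records. The record layout and field names (`submitted_at`, `graded_at`) are illustrative assumptions, not the project's actual schema.

```python
from datetime import datetime

# Hypothetical raw data elements: one record per graded submission.
# Field names are illustrative; the real schema may differ.
submissions = [
    {"submitted_at": datetime(2025, 3, 3, 9, 0),  "graded_at": datetime(2025, 3, 4, 15, 30)},
    {"submitted_at": datetime(2025, 3, 3, 11, 0), "graded_at": datetime(2025, 3, 5, 8, 0)},
]

def average_grading_turnaround_hours(records):
    """I1: mean of (graded_timestamp - submission_timestamp) in hours."""
    hours = [
        (r["graded_at"] - r["submitted_at"]).total_seconds() / 3600
        for r in records
        if r.get("graded_at") is not None
    ]
    return sum(hours) / len(hours) if hours else 0.0

print(f"I1 average grading turnaround: {average_grading_turnaround_hours(submissions):.1f} h")
```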
Measurement Theory (Lecture 2) - @Kashb-shielah (KAKYO BRIDGET)

We applied measurement theory by defining the problem, identifying scales, forming the empirical relational system, modelling, defining a formal relational system, and verifying the results of the system.

- **Problem definition** (what we want to measure):
  - Code size (lines of code)
  - Code complexity (cyclomatic complexity)
- **Identify scales** (the scales we are going to use):
  - Ratio scale for cyclomatic complexity
  - Ratio scale for lines of code
- **Empirical relational system**: use radon (a Python package) to collect cyclomatic complexity data, and a Python script to count lines of code for code size. Building the relational system: Empirical Relational System (E) = {Entities (A), Relations (R), Operations (O)}, where:
  - Entities (A): the different files in the Homework Portal project
  - Properties (attributes): cyclomatic complexity (CC), lines of code (LOC)
  - Relations (R): "more complex than" (CC comparison), "larger in size than" (LOC comparison)
  - Operations (O): compare two modules, aggregate averages across the system
- **Modelling** (building a formal model): Formal Relational System (F) = {mathematical representation of E}
  - Cyclomatic complexity: CC = number of decision points + 1
  - Lines of code: LOC = number of physical lines
  - Hypothesis: a larger LOC implies more complexity and therefore more bugs
- **Example** (mapping lines of code):
  - Entity: Python script
  - Attribute: size
  - Scale: ratio
  - Unit: lines of code (number)
  - Direct/indirect: direct
  - Validation: check that total project LOC = sum of module LOCs
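As a concrete illustration of the data collection described above, the following sketch uses radon to obtain cyclomatic complexity and a plain line count for physical LOC. It assumes radon is installed (`pip install radon`) and is only an approximation of the team's own script.

```python
# Sketch of the Lecture 2 data collection: CC via radon, LOC via a line count.
# Assumes `pip install radon`; the project's actual script may differ.
from radon.complexity import cc_visit

def measure_file(path):
    with open(path, encoding="utf-8") as f:
        source = f.read()
    loc = len(source.splitlines())               # physical lines of code
    blocks = cc_visit(source)                    # per-function/class CC blocks
    max_cc = max((b.complexity for b in blocks), default=1)
    return {"file": path, "loc": loc, "max_cc": max_cc}

if __name__ == "__main__":
    print(measure_file("software-metrics/metrics.py"))
```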
We've implemented code size metrics using a custom Python script (metrics.py) that calculates:
- **Lines of Code (LOC)**: measures different aspects of code size:
  - Physical LOC: total number of lines in the file
  - Logical LOC: number of statements (excluding comments and blank lines)
  - Comment LOC: number of comment lines
  - Blank LOC: number of blank lines
- **Halstead Complexity Metrics**: calculates software complexity based on:
  - Number of distinct operators (n1)
  - Number of distinct operands (n2)
  - Total occurrences of operators (N1)
  - Total occurrences of operands (N2)
From these basic counts, we derive:
- Program length (N): N1 + N2
- Program vocabulary (n): n1 + n2
- Volume (V): N × log2(n)
- Difficulty (D): (n1/2) × (N2/n2)
- Effort (E): D × V
- Time to implement (T): E/18 seconds
- Number of bugs (B): V/3000
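The derived quantities listed above map directly to a few lines of code. This is a minimal sketch of the Halstead derivation given the four basic counts; it is not the actual `metrics.py` implementation, and the example counts are invented.

```python
import math

def halstead_derived(n1, n2, N1, N2):
    """Derive Halstead metrics from distinct/total operator and operand counts."""
    N = N1 + N2                      # program length
    n = n1 + n2                      # program vocabulary
    V = N * math.log2(n)             # volume
    D = (n1 / 2) * (N2 / n2)         # difficulty
    E = D * V                        # effort
    T = E / 18                       # estimated time to implement (seconds)
    B = V / 3000                     # estimated number of delivered bugs
    return {"N": N, "n": n, "V": V, "D": D, "E": E, "T": T, "B": B}

# Example with illustrative counts (not measured from the project):
print(halstead_derived(n1=12, n2=20, N1=45, N2=60))
```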
To run the metrics analysis on the project:
```bash
python software-metrics/metrics.py --path ./path/to/source/files --format json
```

Options:

- `--path`: Directory containing source files to analyze
- `--format`: Output format (json, csv, or terminal)
- `--output`: Output file path (optional)
- `--exclude`: Directories to exclude (comma-separated)
Demonstrating Software Reliability through Unit Testing (Lecture 9) - @Nysonn
To assess the reliability of our application's key views, admin-dashboard.ejs and login.ejs, we wrote a suite of Jest/jsdom unit tests that exercise DOM structure, form functionality, and error handling. These tests correspond to the "feature tests" and "regression tests" described in Software Reliability Engineering, where feature tests validate individual units and regression tests ensure fixes remain effective over time.
- **Admin Dashboard**
  - Scope: navigation links, form fields, password generation, and table headers.
  - Outcome: all but one assertion passed, confirming that the dashboard meets its functional requirements and that failure intensity (i.e., the rate of unexpected behaviors) is low under development conditions.
- **Login Page**
  - Scope: (placeholder for login-specific behaviors)
  - Outcome: all tests failed, highlighting critical gaps in form validation and event wiring. This elevated failure count in controlled tests predicts higher failure intensity in production, signaling the need for focused corrective action.
- **Failure Intensity (λ)**
  - Passing tests correlate with a lower λ (failures per unit test), indicating that the component is less likely to fail in operation.
  - Failing tests for login.ejs reveal a spike in λ, guiding us to the areas that most erode reliability.
- **Reliability Growth**
  - As defects uncovered by failing tests are fixed and the tests subsequently pass, we observe "reliability growth": a reduction in observed failure frequency over successive test runs.
- **Test-Driven Feedback Loop**
  - By integrating unit tests into continuous integration, we capture regressions early, preventing defect injection and ensuring that reliability objectives are met before each release.
Unit tests serve not only as documentation of expected behavior but also as quantitative indicators of software reliability. The contrasting outcomes for admin-dashboard.ejs (high pass rate) versus login.ejs (high failure rate) directly map to reliability metrics, enabling data-driven decisions on where to allocate development and testing effort.
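To show how these outcomes become numbers, the sketch below computes failure intensity per test run and checks whether it decreases across runs (reliability growth). The run counts are illustrative, not the project's actual Jest results.

```python
# Illustrative failure-intensity calculation; the run counts are made up,
# not taken from the project's actual Jest output.
runs = [
    {"run": 1, "tests": 20, "failures": 9},
    {"run": 2, "tests": 20, "failures": 5},
    {"run": 3, "tests": 22, "failures": 2},
]

def failure_intensity(run):
    """lambda: failures per executed test in a given run."""
    return run["failures"] / run["tests"]

intensities = [failure_intensity(r) for r in runs]
growth = all(a >= b for a, b in zip(intensities, intensities[1:]))

for r, lam in zip(runs, intensities):
    print(f"run {r['run']}: lambda = {lam:.2f}")
print("reliability growth observed:", growth)
```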
Measuring Structural Complexity Metrics (Lecture 6) - @Hotchapu13 @Precious187
The primary objectives are:
- Enhance code maintainability.
- Minimize technical debt.
- Support sustainable, efficient future development.
To achieve these goals, the following critical questions are defined:
- Q1: Which modules exhibit the highest structural complexity and risk?
- Q2: Where is coupling (interdependencies) most concentrated in the system?
- Q3: Which modules should be prioritized for refactoring?
- Q4: How is structural complexity evolving over time?
The questions are mapped to more specific subgoals:
- SG1: Reduce module-level structural complexity.
- SG2: Decrease excessive coupling between source modules.
- SG3: Identify, monitor, and prioritize complex modules for targeted maintenance.
The system entities and attributes selected for measurement:
| Entity | Attribute |
|---|---|
| JavaScript files (.js) | Module dependencies, LOC |
| EJS templates (.ejs) | Template includes, linked CSS |
| CSS files (.css) | Links from templates, LOC |
Measurement goals formalized for operational tracking:
- MG1: Quantitatively assess and monitor module complexity using objective metrics.
- MG2: Track and manage module coupling to ensure system modularity.
- MG3: Establish baselines and monitor trends in module risk and complexity.
To monitor progress, the following indicators are defined:
| Indicator | Description |
|---|---|
| I1: Information Flow Complexity (IFC) | Composite score based on size and dependency interactions. |
| I2: Fan-in | Number of modules that depend on the current module. |
| I3: Fan-out | Number of modules the current module depends on. |
| I4: Lines of Code (LOC) | Non-comment, non-blank source lines per module. |
| I5: Complexity Threshold Violations | Percentage of modules exceeding IFC thresholds. |
Each indicator is operationalized as follows:
| Metric | Formula/Definition |
|---|---|
| IFC | $\text{IFC} = \text{LOC} \times (\text{Fan-in} \times \text{Fan-out})^2$ |
| Fan-in | Count of modules importing/including the module. |
| Fan-out | Count of modules the module imports/includes. |
| LOC | Count of non-empty, non-comment lines per module. |
IFC follows the Henry and Kafura information flow complexity metric.
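For clarity, here is a minimal sketch of the IFC calculation from a dependency map. The module names, edges, and LOC values are invented for illustration; the real extraction is done by `info_flow_complexity.py`.

```python
# Minimal IFC sketch: fan-in/fan-out from a dependency map, then
# IFC = LOC * (fan_in * fan_out)^2 (Henry and Kafura).
# Module names, edges, and LOC values below are illustrative only.
deps = {                      # module -> modules it imports/includes (fan-out edges)
    "routes/homework.js": ["db/pool.js", "middleware/auth.js"],
    "routes/admin.js": ["db/pool.js", "middleware/auth.js"],
    "middleware/auth.js": ["db/pool.js"],
    "db/pool.js": [],
}
loc = {"routes/homework.js": 180, "routes/admin.js": 150,
       "middleware/auth.js": 60, "db/pool.js": 40}

def ifc_scores(deps, loc):
    fan_out = {m: len(targets) for m, targets in deps.items()}
    fan_in = {m: 0 for m in deps}
    for targets in deps.values():
        for t in targets:
            fan_in[t] = fan_in.get(t, 0) + 1
    return {m: loc[m] * (fan_in[m] * fan_out[m]) ** 2 for m in deps}

for module, score in sorted(ifc_scores(deps, loc).items(), key=lambda kv: -kv[1]):
    print(f"{module}: IFC = {score}")
```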
The ETL (Extract, Transform, Load) and analysis flow:
- Step 1: Use `info_flow_complexity.py` to scan the project directory.
- Step 2: Extract dependencies and calculate LOC for `.js`, `.ejs`, and `.css` files.
- Step 3: Calculate fan-in, fan-out, and IFC values for each module.
- Step 4: Save results into:
  - CSV (`information_flow_metrics.csv`)
  - JSON (`information_flow_detailed.json`)
- Step 5: Analyze distribution of complexity across modules.
- Step 6: Visualize hotspots and identify critical modules.
| Week | Activity |
|---|---|
| Week 7 | Develop and validate extraction script (info_flow_complexity.py). |
| Week 11 | Analyze complexity trends, set actionable thresholds for IFC. |
This framework provides a repeatable, data-driven method to systematically measure, track, and improve structural complexity and maintainability using Information Flow Complexity and related metrics.
We track these metrics weekly to monitor code quality and complexity over time.
COCOMO II Application Composition Model (Lecture 7) - @LiamBolt
We leveraged the COCOMO II Application Composition model to estimate development effort for the Homework Portal:
- **Object-Point Counting**: Inventoried all 63 EJS view screens (no reports or 3GL components) per standard OP procedure.
- **Complexity Classification**: Classified every screen as Simple (single view, minimal table interactions) with weight = 1.
- **Object-Point Calculation**:
  - Unadjusted OP = 63 × 1 = 63
  - Adjusted for 0% reuse: NOP = 63
- **Productivity & Effort Estimation** (see the sketch after this list):
  - Chose a nominal productivity rate (13 NOP/PM)
  - Estimated effort ≈ 63 / 13 ≈ 4.85 person-months
- **Assumptions & Notes**:
  - Screens only (no reports/3GL)
  - 0% reuse assumed
  - Productivity rate based on team/tool maturity
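The estimate can be reproduced with a short calculation. This sketch applies the Application Composition arithmetic under the assumptions stated above (63 Simple screens, weight 1, 0% reuse, 13 NOP/PM); the productivity-table lookup is simplified to a single constant.

```python
# COCOMO II Application Composition estimate under the stated assumptions:
# 63 Simple screens (weight 1), 0% reuse, nominal productivity 13 NOP/PM.
screens = 63
weight_simple = 1
reuse_fraction = 0.0
productivity_nop_per_pm = 13

object_points = screens * weight_simple                 # unadjusted OP
nop = object_points * (1 - reuse_fraction)              # new object points
effort_pm = nop / productivity_nop_per_pm               # person-months

print(f"OP = {object_points}, NOP = {nop:.0f}, effort = {effort_pm:.2f} PM")
```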
This project includes a performance testing script (Performance.py) that uses the Locust library to simulate user behavior and measure key performance metrics of the web application.
**Key Features**

The Locust script is designed to:

- Simulate user interactions with the following endpoints:
  - `/` (Home page)
  - `/about` (About page)
  - `/contact` (Contact page)
- Measure the following performance metrics:
  - Failure Count: number of failed requests
  - Response Time Percentiles and Averages: includes the 50th, 90th, and 95th percentiles
  - Min and Max Response Time: fastest and slowest response times
  - Throughput: requests per second (RPS)
  - Error Rate: percentage of failed requests

**Prerequisites**

- Python 3.7 or higher
- Locust installed

**How to Run the Test**

Ensure your web application is running and accessible at the host URL specified in the script.
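For reference, a minimal sketch of a Locust user class covering the three endpoints is shown below; the actual Performance.py may differ in task weights, wait times, and reporting.

```python
# Minimal Locust sketch for the three endpoints; the project's actual
# Performance.py may differ in tasks, weights, and reporting.
from locust import HttpUser, task, between

class PortalUser(HttpUser):
    wait_time = between(1, 3)   # seconds between simulated user actions

    @task(3)
    def home(self):
        self.client.get("/")

    @task(1)
    def about(self):
        self.client.get("/about")

    @task(1)
    def contact(self):
        self.client.get("/contact")
```

A class like this can be run with, for example, `locust -f Performance.py --host http://localhost:3000` (the port matching the PORT value in the .env example), after which the Locust web UI reports throughput, response-time percentiles, and error rate.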
As part of Software Test Metrics (Lecture 10), we applied black box testing with test case definition to the Homework Portal’s Role-Based Access Control feature, specifically the login functionality. We:
- Defined three test cases to verify the login process for Teachers, testing valid credentials, invalid passwords, and empty fields.
- Conducted manual black box testing to ensure the feature meets requirements without inspecting internal code.
- Documented the test cases, results, and methodology in `TESTING.md`.
This contribution enhances the project’s quality assurance by validating a critical feature. See TESTING.md for details.
- Cohesion & Coupling (Lecture 2) - @Kashb-shielah
- Goal-Question-Indicator-Metric (Lecture 3) - @Nysonn and @LiamBolt
- Cyclomatic Complexity (Lecture 4) - @Catherine-Arinaitwe722 and @enockgeek
- Object-Oriented Metrics (Lecture 6) - @Precious187
- COCOMO II Application Composition Model (Lecture 7) - @LiamBolt
- Testing Metrics (Lecture 8) - @Catherine-Arinaitwe722
- Reliability Metrics Using Unit Tests (Lecture 9) - @Nysonn
- Black Box Testing for Login Functionality (Lecture 10) - @enockgeek
- Quality Models (Lecture 11) - @Hotchapu13
- Implement role-based authentication
- Create homework upload/download functionality
- Develop subject resource libraries
- Implement software metrics tracking
- Add real-time notification system
- Implement gradebook feature
- Add parent-teacher messaging system
See the open issues for a full list of proposed features and known issues.
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement".
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the MIT License. See LICENSE.txt for more information.
- @LiamBolt - Lectures 3 & 7
- @Kashb-shielah - Lecture 2
- @Nysonn - Lectures 3 & 9
- @Catherine-Arinaitwe722 - Lectures 4 & 8
- @enockgeek - Lectures 4 & 10
- @thefr3spirit - Lecture 5
- @Precious187 - Lecture 6
- @Hotchapu13 - Lecture 11
- Our Software Metrics lecturer, Dr. @kimrichies, for providing the materials and mentorship