COMP11017 Knowledge Base
Comprehensive reference for the Research Design & Methods module.
Module Overview
Module Identity
- Code: COMP11017 Research Design & Methods
- School: Computing, Engineering, and Physical Sciences (CEPS)
- Coordinator: Dr Daune West (PhD in Information Systems, specialist in Action Research and Appreciative Inquiry)
- University: University of the West of Scotland (UWS)
Module Purpose
- Introduces research methods: knowledge and some hands-on experience, including critical evaluation skills
- Supports students in undertaking an MSc-level project
- Assessment brings research methods and project specification together
- The module and its assessment underpin the MSc Project (COMP11024)
- Students produce a detailed research proposal suitable for an MSc-level project, which provides the basis for the MSc Project
Learning Outcomes
- LO1: Critically evaluate, identify and consider the practical use of approaches to research appropriate to their subject discipline
- LO2: Review and evaluate critically arguments, research approaches, evidence and conclusions in the academic and research literature of their subject discipline
- LO3: Propose, construct, plan and defend a suitable research proposal for an MSc-level postgraduate research project
Assessment Structure
- Assessment 1: Verbal Presentation (20%)
- 5-minute recorded presentation (MP4 via Turnitin)
- Deadline: 31 October 2025 (by 4pm)
- Feedback: 21 November 2025
- Formative purpose: gets you thinking and planning so that feedback can inform the Written Specification
- Record using ScreenPal or similar; use PowerPoint slides; do NOT merely read slides
- Follow marking scheme headings to structure presentation
- Generative AI policy: Type 1 (Restricted) - only spell checkers, grammar check permitted
- Assessment 2: Written Specification (80%)
- Extended research specification for MSc-level project
- Word count: 2500-3500 words
- Deadline: 10 December 2025 (by 4pm)
- Feedback: 19 January 2026
- Must use provided template (.docx format, NOT PDF)
- Generative AI policy: Type 1 (Restricted)
- Late penalty: 10 marks if up to 1 week late; 0 if more than 1 week late
Assessment Quick Help Guide (Which Lectures Map to Which Sections)
| Assessment Section | Relevant Lectures |
|---|---|
| Title | Week 3 |
| Abstract | Week 11 |
| Aims | Weeks 3, 4 |
| Objectives | Weeks 3, 4 |
| Justification | Weeks 1, 3, 4 |
| Literature review | Weeks 5, 9 |
| Methodology and Ethics | Weeks 7, 8, 10 |
| Work plan | Week 5 |
| References and presentation | Moodle referencing guides |
12-Week Module Structure
- Weeks 1-2: Introduction, research topic exploration, project ideas
- Week 3: General principles of research, method/methodology/technique
- Week 4: Project specification, coursework overview
- Week 5: Project planning, literature review
- Week 6: Presentation guidance, planning details, literature overview
- Week 7: Qualitative research (Action Research, Hermeneutics, Phenomenology)
- Weeks 8-10: Further methodology, ethics (to be covered)
- Week 11: Abstract writing (to be covered)
- Week 12: Final preparations
Core Textbook
- Oates, B.J. (2006) Researching Information Systems and Computing. London: Sage - the primary text
- Supplementary: Cornford & Smithson (2006), Wisker (2008, 2009), Creswell (2014), Thiel (2014)
- SAGE Research Methods repository
Key Module Philosophies (from Lecturer)
- "To research is to fumble" - through uncertainty, complexity, conflicting ideas
- Module is "front-heavy" - depends on student responsibility and action
- NOT about the research topic itself but about HOW to undertake research
- Focus on research DESIGN not just the topic
- "We are guides to help you along your journey"
- Importance of evidence-based computing (Oates, 2012)
- Must be critical/evaluative, systemic, systematic, aware, rigorous
- Talking and debate are essential learning tools
Linked Module: MSc Project (COMP11024)
- Credits: 60 (SCQF Level 11)
- Assessment: Report 60% + Process 20% + Viva 20%
- Duration: 600 learning hours; FT = 15 weeks, PT = 30 weeks
- Prerequisite: COMP11017
- Coordinator: Daune West
- BCS Accredited
Critical Warnings from Lecturer
- Project must include technical practical work - not sufficient to only do literature review
- Project must be appropriate to programme of study (e.g., AI student must do AI project)
- Module is about Research Design & Methods - marks are for DESIGN, not topic knowledge
- If speaking/writing only about the topic (technology, background) - stop and refocus on research design
- Do not use the word "leverage" (lecturer dislikes it)
- Avoid "AI voice" in writing
- Generative AI restricted to Type 1 only (spell check, grammar check)
Assessment 1: Verbal Presentation
Key Details
- Weight: 20% of module mark
- Format: Recorded MP4 presentation (not live)
- Duration: Maximum 5 minutes
- Deadline: 31 October 2025, 4pm
- Submission: Turnitin via Aula (check file size limits)
- Late penalty: -10 marks up to 1 week; 0 after 1 week
- AI Policy: Type 1 Restricted (only spell/grammar check)
Purpose
Formative - gets students thinking and planning so feedback can improve the Written Specification. Covers LO1, LO2, LO3 in simplified form.
Technical Requirements
- Use ScreenPal or similar free recording software
- Save as .mp4 file
- Use PowerPoint slides - talk TO them, do NOT read them
- Check sound is clear before submitting
- Must fit within Turnitin maximum file size
Required Sections and Marking Scheme
1. Title (10%)
What to do: Explain what you propose to do. Include a verb (investigate, identify, design, develop, compare, evaluate). May include research question. Must NOT look like an essay title.
| Grade | Criteria |
|---|---|
| Poor (0-2) | Topic rather than research project title; insufficient scoping |
| Insufficient (3-4) | Some scope indication but missing process/activity; lacks output |
| Good (5-6) | Clear scope and process indicated |
| Excellent (7-10) | Clear scope; Research Question; verb indicating process; indication of methodology; output declared |
2. Problem (10%)
What to do: State the issue/problem within the current situation that you are addressing. What is the technical challenge? Provide evidence.
| Grade | Criteria |
|---|---|
| Poor (0-2) | No problem/issue identified; no indication of research project |
| Insufficient (3-4) | Poorly articulated problem; lack of evidence; not carried through rest of proposal |
| Good (5-6) | Problem/issue identified with some supporting evidence |
| Excellent (7-10) | Problem with supporting evidence; clear use to drive research process/design |
3. Context: Literature (20%)
What to do: Provide references and brief overview demonstrating awareness of study context. Harvard style. About 6 well-chosen, up-to-date references.
| Grade | Criteria |
|---|---|
| Poor (0-2) | Lack of relevant literature; inaccurate/fabricated refs; only URLs |
| Insufficient (3-4) | Some refs but incorrectly presented (not Harvard); no mention of how they inform study |
| Good (5-6) | Refs correctly presented (some errors); some indication of how they inform study |
| Excellent (7-10) | Well-selected refs, correctly presented; clear use to support development of proposed study |
4. Data (10%)
What to do: Explain what data YOU will generate/collect as the output of primary research (NOT secondary/input data). This is the unique data YOUR work produces. Explain how this data enables you to answer the research question.
CRITICAL DISTINCTION: Data here means OUTPUT data generated BY your research process, NOT input datasets you use. If using existing datasets as input, that is NOT what this section is about.
| Grade | Criteria |
|---|---|
| Poor (0-2) | No/little indication of data to be collected |
| Insufficient (3-4) | Describes input/secondary data instead of data generated by project; or only requirements data |
| Good (5-6) | Data generated by primary research described; fairly simple; limited link to RQ |
| Excellent (7-10) | Detailed description of data generated by primary research; clear link to RQ/problem |
5. Approach to Data Collection (10%)
What to do: Explain research process. Include tasks needed to generate the data. Present clear, logical, connected series of steps/activities. Not too much detail.
| Grade | Criteria |
|---|---|
| Poor (0-2) | No/little description |
| Insufficient (3-4) | Discussed but not appropriate to primary data |
| Good (5-6) | Described in general/simplistic way; implementation somewhat vague |
| Excellent (7-10) | Detailed, specific to primary data; clear how approach will be implemented |
6. Ethical and Practical Considerations (10%)
What to do: Consider if activities require human participants or use sensitive/private data. Consider practical issues: software/hardware availability, skill development, experimental design problems, accuracy, subjective variables.
| Grade | Criteria |
|---|---|
| Poor (0-2) | No consideration; states no ethical approval needed when study clearly involves human participants |
| Insufficient (3-4) | Ethics mentioned but not explained in context; practical considerations not specific |
| Good (5-6) | Discussed but lacking detail; not clearly linked to project process |
| Excellent (7-10) | Clearly defined and linked to project process |
7. Evaluation (10%)
What to do: How will you evaluate project results/output? What criteria/standards/benchmarks? How will you measure improvement? Criteria often found in literature.
| Grade | Criteria |
|---|---|
| Poor (0-2) | No/little mention of evaluation |
| Insufficient (3-4) | Mentioned but not targeted to project process |
| Good (5-6) | Some appropriate evaluation explored; lack of detail on criteria |
| Excellent (7-10) | Clear project-specific evaluation process; details of criteria to be used |
8. Planning (10%)
What to do: Present a 15-week plan (FT) or 30-week plan (PT). MSc project = 600 hours. Use weeks not dates. Do NOT use a generic plan from Google Images. Make it PROJECT-SPECIFIC using activities from the approach section.
| Grade | Criteria |
|---|---|
| Poor (0-2) | No attempt; generic plan with no relationship to project activities |
| Insufficient (3-4) | Limited description; generic headings with small attempt at specificity |
| Good (5-6) | Project-specific activities against timeline; limited detail, high-level |
| Excellent (7-10) | Detailed breakdown of project-specific activities against 15-week timeline |
9. Communication (10%)
What to do: Markers assess delivery - clear speech, good pacing, good structure, and staying within 5 minutes.
| Grade | Criteria |
|---|---|
| Poor (0-2) | Difficult to hear; lack of slides or voice |
| Insufficient (3-4) | Over/under time by 2+ mins; reading slides; poor flow/structure |
| Good (5-6) | Clearly spoken, well structured; slightly over/under time; some reading of slides |
| Excellent (7-10) | Clearly spoken, well paced, to time; easy to follow; well-presented slides |
Critical Reminders
- Must include TECHNICAL PRACTICAL work relevant to programme
- Must be appropriate to programme of study (AI student = AI project)
- Focus on RESEARCH DESIGN not just the topic
- Follow marking scheme headings for structure
- Do NOT copy the plan from the example presentation (was poorly presented)
- Harvard referencing style required
Assessment 2: Written Specification
Key Details
- Weight: 80% of module mark
- Format: Written document using provided .docx template
- Word count: 2500-3500 words
- Deadline: 10 December 2025, 4pm
- Submission: Turnitin via Aula as .docx (NOT PDF - to allow marker annotation)
- Late penalty: -10 marks up to 1 week; not accepted after 1 week
- AI Policy: Type 1 Restricted (only spell/grammar check)
- Referencing: Harvard (Cite Them Right - CTR)
Purpose
Extended research specification = "blueprint" for the MSc project. Like an architect's drawing - it details structure and process so that someone else could pick it up and implement it. Includes a preliminary literature review. The purpose is to demonstrate the ability to conceive and specify an MSc-level research project - NOT to produce an essay about the topic.
Required Structure and Mark Allocation
1. Title (5 marks)
A SHORT description indicating clearly the question the project will investigate.
2. Abstract (5 marks)
- Summary of the problem, its core issues, and how they will be addressed
- Include up to 5 keywords/phrases at bottom
- Maximum 300 words (include word count in document)
Rubric (Title + Abstract combined = 10%):
| Grade | Criteria |
|---|---|
| Poor (0-2) | Topic not research title; abstract lacks purpose/method/results overview; no practical IT work indication |
| Insufficient (3-4) | Some scope in title but no clear process/RQ; abstract lacks design detail; standard approach without contextualisation |
| Good (5-6) | Clear scope and process in title; abstract summarises purpose, process, output; keywords |
| Excellent (7-10) | Detailed scope/process; abstract with clear research design description, output/results, evaluation criteria; keywords |
3. Aims (5 marks)
General statements on intent and direction of research.
4. Objectives (10 marks)
Clear, measurable statements of intended outcomes. What you will DO to answer the question and HOW (in relative detail).
5. Justification (5 marks)
Rationale showing gaps in current knowledge and how results might be used.
Aims + Objectives + Justification together = about 1 page
Rubric (Aims + Objectives + Justification = 20%):
| Grade | Criteria |
|---|---|
| Poor (0-5) | Lack of joined-up aims/objectives; no practical IT work indication |
| Insufficient (6-9) | Aims/objectives provided but lack structure/hierarchy; justification for topic not project |
| Good (10-13) | Clearly stated aims/objectives; justification provided; some RQ reference |
| Excellent (14-20) | Well-structured relationship between aims/objectives; evidence-based justification; clear direction for research design; easy to see how work satisfies RQ |
6. Review of Literature (20 marks)
- History of problem with key sources and critical appraisal
- Maximum 2000 words (include word count in document)
Rubric (20%):
| Grade | Criteria |
|---|---|
| Poor (0-5) | Little/no discussion; inappropriate/fabricated/URL-only refs |
| Insufficient (6-9) | Some discussion but weak/outdated refs, not Harvard; descriptive not critical; "dead list" approach |
| Good (10-13) | Relevant literature with mostly correct refs; weak critical evaluation (descriptive); some indication of how it informs study |
| Excellent (14-20) | Well presented and critically evaluated; well-selected correctly-presented refs; clear indication of how literature underpins/informs study |
7. Methodology (25 marks)
- Explanation and justification of approach/methodology proposed
- Nature of the data you expect to collect
- Who will be involved and how you will collect the data
- How organisations/groups/individuals will be selected
- Analytical tools to be used (brief discussion)
- Ethical issues identification
- Maximum 1000 words (include word count in document)
Rubric (25%):
| Grade | Criteria |
|---|---|
| Poor (0-6) | No/little research design/methodology; no practical IT work |
| Insufficient (7-12) | Minimal; standard approach (e.g., ML process) without contextualisation; no link to aims/objectives; no ethics consideration |
| Good (13-18) | Clearly structured, corresponds to aims/objectives; ethics considered; some comprehensive joined-up approach |
| Excellent (19-25) | Well-structured, well-contextualised, clearly operationalising aims/objectives; all research process aspects addressed including lit review role; evaluation/validation clearly stated; comprehensive, joined-up approach |
8. Work Plan (10 marks)
- Timetable for completion: 15 weeks (FT) or 30 weeks (PT)
- Any appropriate method but must be legible and integral to report
Rubric (10%):
| Grade | Criteria |
|---|---|
| Poor (0-2) | No attempt; generic plan unrelated to project |
| Insufficient (3-4) | Limited; generic headings; no clear link to aims/objectives/methodology |
| Good (5-6) | Project-specific activities; limited detail; some link to aims/objectives/methodology |
| Excellent (7-10) | Detailed breakdown; clear relationship to aims/objectives and methodology |
9. References (10 marks)
Harvard style citation and referencing.
Rubric (10%):
| Grade | Criteria |
|---|---|
| Poor (0-2) | No list or incorrect/incomplete (URL lists); fabricated refs; poor figures; poor formatting; no template use |
| Insufficient (3-4) | Careless referencing; citation-reference mismatch; poor signposting; untidy; poor template use |
| Good (5-6) | Mostly complete, appropriate, correct; well-presented figures; template used |
| Excellent (7-10) | All Harvard-correct; excellent presentation using template |
10. Overall Presentation (5 marks)
Quality of writing, argument, continuity, and contextualisation; evidence that the research could be run as specified.
Rubric (5%):
| Grade | Criteria |
|---|---|
| Poor (1) | English difficult/impossible to follow |
| Insufficient (2) | Poorly written; logical jumps; insufficient for someone else to implement |
| Good (3) | Generally well-structured; mostly sufficient for someone else to implement (may need clarification) |
| Excellent (4-5) | Well written, clearly structured; comprehensive connectivity; clear logical flow; implementable without further information |
Template Structure (from provided template)
The official template includes these sections with marker comment boxes:
- Title, Abstract → /10
- Aims, Objectives, Justification → /20
- Literature Review → /20
- Research Design/Methodology → /25
- Work Plan → /10
- References → /10
- General English/Logic → /5
- Overall → /100
Critical Success Factors
- Blueprint test: Could someone else pick up this specification and implement the project?
- Not an essay: Demonstrate research DESIGN ability, not topic knowledge
- Joined-up approach: All sections must connect logically (title → problem → aims → objectives → methodology → plan)
- Critical literature review: NOT a "dead list" - critical evaluation, themes, arguments
- Contextualised methodology: Don't just state "ML approach" - explain specifically how YOUR project implements it
- Project-specific plan: Derived from YOUR objectives and methodology activities
- Harvard referencing: Consistent, complete, accurate throughout
- Word counts: Include word counts where specified (abstract, lit review, methodology)
- Use the template: Marks allocated for proper template use
Research Principles & Concepts
Why Research?
- Find out why/how/when
- Natural curiosity and better understanding
- Test ideas/products
- Financial gain
- Diagnose problems → invent and test solutions
- Get a qualification
- Intellectual challenge
- Key: Be clear on reasons for research and maintain focus throughout
Research Direction
- Phrase as a research question (e.g., "What use are SMEs making of Web 2.0 technologies?")
- Consider: definitions, scope, access, data collection, sample size, validity of outcomes
Core Research Principles
1. Rigour
- Careful planning, execution, and analysis
- Decisions must be clear and well-argued
2. Evidence
- Must be provided to illustrate and support all arguments and contentions
3. Transparency
- Allows inspection of research process and outputs by others
- Enables others to assess the value of the research
4. Repeatability
- Some research: vital for others to verify (scientific experimentation)
- Other research: involves unique situations that cannot be repeated (e.g., action research)
5. Measurement
- Need to "measure" research output (success, problem-solving, comparisons)
- Closely linked to repeatability
6. Ethics
- Follow established ethical guidelines (university committees and guidelines)
- Professional reporting (avoid plagiarism)
7. Critical Reflection and Evaluation
- Critique of existing research AND your own research
- Learning from experience
- Interpreting data and linking discoveries to existing knowledge
8. Context
- All research must be placed within existing body of knowledge
- Literature review provides this context
Summary: Research must be justifiable and defensible in its conception, planning, execution, and presentation.
Method vs Methodology vs Technique
Method
- Systematic process, step-by-step, known in advance
- Like a "recipe"
Methodology
- Bigger concept than method
- Involves the theoretical underpinning of the approach to research
- Examples: Science (quantitative), Action Research (qualitative)
Technique
- The working "tools" within methods/methodologies
- Examples: interviews, surveys, questionnaires, experiments, observations, case studies
Quantitative vs Qualitative (Simplified)
Quantitative (Objective worldview)
- Methodology and tools: scientific, statistical, logical
- Associated ideas: positivism, realism, empiricism, nomothetic, determinism
- Example: Science - hypothesis testing, repeatability, measurement, objective observation, control experiments
- Clear, well-documented agreed theoretical foundation
Qualitative (Subjective worldview)
- Methodology and tools: phenomenology, ethnography, action research
- Associated ideas: anti-positivism, constructivism, interpretivism, phenomenology, idiographic, non-determinism
- Example: Action Research - reality as social construction, negotiation, context importance, researcher as participant, unique situations, non-repeatability
Key Insight
- Same tools may be used in both approaches but in very different ways
- Start by asking: "How do I think about the World?" - Objectively or Subjectively?
Research Topics as Puzzles (from Sage chapter)
Five types of research puzzles:
- Developmental puzzles: How did X develop/change over time?
- Mechanical puzzles: How does X work? What are its components?
- Correlational puzzles: What is the relationship between X and Y?
- Causal puzzles: What causes X? Why does X happen?
- Essence puzzles: What is X? What does it mean?
Topic Selection Advice (from Sage)
- Start early - general to particular
- Avoid overly politicized topics
- Be cautious with personal issues
- Avoid the "line of least resistance"
- "Air" the topic - discuss with others
- Consider: initial literature search viability, methodology feasibility, validity/reliability
- Features of good topics: interest, feasibility, researchability, worthwhile contribution
Research Paradigms (from Fossey et al.)
Three main paradigms:
1. Empirico-Analytical (Positivist)
- Objective reality exists independently of human perception
- Quantitative methods predominate
- Seeks to explain, predict, and control
2. Interpretive
- Reality is socially constructed
- Understanding through meaning and interpretation
- Qualitative methods predominate
3. Critical
- Reality shaped by social, political, cultural, economic, ethnic, gender values
- Research as instrument of change
- Mixed methods common
Quality Evaluation Criteria
Quantitative Criteria
- Validity: Does it measure what it claims to?
- Generalizability: Can results apply to wider population?
- Reliability: Would same results occur if repeated?
- Objectivity: Is researcher bias minimised?
Qualitative Criteria (equivalent terms)
- Credibility (replaces validity)
- Transferability (replaces generalizability)
- Dependability (replaces reliability)
- Reflexivity (replaces objectivity)
Methodological Rigour (Fossey et al.)
- Congruence: Logical consistency between all research elements
- Responsiveness: Sensitivity to context and participants
- Appropriateness: Suitable methods for research questions
- Adequacy: Sufficient depth and breadth
- Transparency: Clear documentation of process
Interpretive Rigour (Fossey et al.)
- Authenticity: Faithfulness to participants' perspectives
- Coherence: Logical narrative from data to conclusions
- Reciprocity: Mutual relationship between researcher and participants
- Typicality: Representativeness of findings
- Permeability: Openness of researcher's positioning
Evidence-Based Computing
- "Empirical assessment and evaluation of computer products and development processes, so that we can have evidence-based computing" (Oates, 2012, p3)
- Need proper evidence to support proposals
- Ideas must be based on more than opinion
- Ideas must be practical in real situations
The Scholar-Practitioner Model
- Research should bridge theory and practice
- MSc projects need both academic underpinning AND practical output
- Balance of theory and practice in any chosen project
Project Specification Guide
Specification Headings (from Week 4 lectures)
1. Title
- Must encompass what you are doing (used by search engines, on report cover)
- Be exact but not too long
- Can include the research question or be phrased as a question
- Must include: topic(s) AND action words (investigate, compare, design, evaluate, identify, develop)
- Do NOT make it look like an essay title
Good title characteristics:
- States topics involved and outlines the task
- Includes a verb indicating process
- Scopes the work appropriately
Bad title examples:
- "ICT and Education" (no indication of what will be done)
- "Online system for resource allocation" (no process - design? evaluation? critique?)
- Too broad without sub-title scoping
Good title examples:
- "A critical analysis of action research as a method of undertaking information systems research"
- "Performance comparison of different image matching methods for an AR mobile application"
- "Assessing the effectiveness of Unsupervised Machine Learning to identify wireless network attacks by comparing Unsupervised Machine Learning and Supervised Machine Learning"
2. Problem/Opportunity to be Addressed
- Expansion/explanation of the title
- What issue/problem within current situation are you addressing?
- If you cannot answer "what is the problem?" - rethink the project
- A question gives direction for investigation
- Provide evidence to back up statements
3. Justification
- Does NOT require world-changing importance
- Acceptable justifications include:
- Study has not been done before (show evidence from literature)
- Previous work is outdated, things have changed
- Existing work could be applied to a different area
- Commentators suggest this is a fruitful area (with references)
- Personal development and career relevance
4. Aims
- General, overall expectations of what the project will produce
- High-level statement of intent and direction
- Examples:
- A designed/implemented artefact that is well evaluated
- A comparison between approaches with argument for "best"
- A set of guidelines
- Results that can be compared to existing work
5. Objectives
- Set of activities to achieve the research problem / answer the research question
- Clear steps (not always linear)
- Written as if giving instructions to someone else to follow
- Balance between too much and too little detail
- Test: Could someone else follow your instructions and complete the project?
Example objectives pattern:
- Identify current practice and trends via literature and surveys
- Elicit requirements from users using [method]
- Design and implement prototype to [purpose]
- Test prototype to evaluate whether requirements met
- Compare results to those in existing literature
- Report results
6. Resources Required
- Technology (available in labs? need personal devices?)
- Software (University access? Open source alternatives?)
- People (access feasible AND possible?)
- Financial costs (transport, materials)
- Data access considerations:
- WARNING: Sensitive data (security, crime) or valuable datasets may be problematic
- Do NOT assume access - find out
- Laboratory/infrastructure/analysis equipment - can you access it?
7. Timing / Work Plan
- Plan against available time: FT = 15 weeks, PT = 30 weeks
- MSc project = 600 hours
- Use WEEKS not specific dates
- Understand task ordering and dependencies
- Expect iteration and continuous tasks (esp. literature review)
- Include key markers/deliverables
- Consider contingency plans
- Use Gantt chart (NOT MS Project) - Excel-like format
- Include non-obvious tasks: interim reports, presentations, writing up, supervisor feedback, addressing feedback
- Do NOT use generic plans from Google - must be PROJECT-SPECIFIC
- Derive activities from your objectives and methodology section
Planning characteristics:
- Not "set in stone" - firm enough for direction, flexible for changes
- Don't fixate on detail too early
- Keep simple but well-thought-through
- Remember life continues around the project
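The weeks-based, project-specific plan described above can be sketched as a simple text Gantt chart (Excel-like rows, weeks rather than dates). This is an illustrative sketch only: the activity names and week ranges below are hypothetical examples, not a recommended plan.

```python
# Minimal sketch of a project-specific, weeks-based Gantt chart rendered as text.
# All activities and week ranges are hypothetical examples for illustration.
TOTAL_WEEKS = 15  # full-time MSc project; use 30 for part-time

activities = [
    # (activity, start week, end week) - weeks are 1-indexed and inclusive
    ("Literature review (ongoing)",   1, 15),
    ("Elicit requirements",           2, 4),
    ("Design prototype",              4, 6),
    ("Implement prototype",           6, 10),
    ("Evaluate against criteria",     10, 12),
    ("Write up and address feedback", 11, 15),
]

def gantt_rows(acts, total_weeks=TOTAL_WEEKS):
    """Return one text row per activity: padded name, then one cell per week,
    '#' where the activity is scheduled and '.' where it is not."""
    width = max(len(name) for name, _, _ in acts)
    rows = []
    for name, start, end in acts:
        cells = "".join("#" if start <= w <= end else "."
                        for w in range(1, total_weeks + 1))
        rows.append(f"{name.ljust(width)} | {cells}")
    return rows

for row in gantt_rows(activities):
    print(row)
```

Note how the literature review runs for the full 15 weeks, reflecting the module's advice that it starts at the beginning and continues throughout, and how activities overlap rather than forming a strictly linear sequence.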
8. References
- Harvard referencing system (UWS standard - Cite Them Right)
- All work properly referenced to avoid plagiarism
- Turnitin plagiarism checking will be applied
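As a quick illustration of the basic Harvard (Cite Them Right) pattern for a book - Author, Initial. (Year) Title. Place: Publisher. - here is a minimal sketch. The function name is hypothetical, and a real reference list would also italicise the title, which a plain string cannot show.

```python
# Illustrative sketch: assembling a Harvard-style (Cite Them Right) book
# reference from its parts. The function name is hypothetical.
def harvard_book(author: str, year: int, title: str,
                 place: str, publisher: str) -> str:
    """Return a book reference in the basic Harvard pattern:
    Author, Initial. (Year) Title. Place: Publisher."""
    return f"{author} ({year}) {title}. {place}: {publisher}."

print(harvard_book("Oates, B.J.", 2006,
                   "Researching Information Systems and Computing",
                   "London", "Sage"))
# -> Oates, B.J. (2006) Researching Information Systems and Computing. London: Sage.
```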
9. Literature Review
- Setting the scene, providing context
- NOT just finding, reading, and reporting articles
- Process: Find → Read → Assimilate → Consider → Offer critical description
- Structure into themes, debates, arguments
- "Build an argument, not a library"
- Should be approximately 20% of the specification
- Maximum 2000 words in Assessment 2
10. Primary Research (Methodology)
- Plans for action: YOUR personal input
- Overview of how you will generate data to answer research question
- Covers: who to talk to, why, how many, where, what questions, what tools
- What does your experiment comprise?
- Evaluation criteria and testing mechanisms
- Design principles and their justification
The "Joined-Up" Approach
The specification must show clear logical flow:
Title → Problem → Justification → Aims → Objectives → Literature Review → Methodology → Plan → Evaluation
Each section should reference and build upon the previous ones. The plan activities should derive from objectives and methodology. The evaluation should close the loop back to the research question.
Planning Tools and Techniques
Activity Diagrams
- Diagram-based on separate activities
- Arrows show dependencies
- Allows showing iteration and back-tracking
- Cluster tasks together showing how they feed into each other
- Can add references/sources to each activity
Mind Maps / Spray Diagrams
- Useful for early planning stages
- Encourage exploration and thinking through problems
- Dynamic - can add ideas over time
- Can be incorporated into project report
- Good for showing project overview at a glance
Gantt Charts
- Help once you know what needs to be done
- Good for showing activities against timeline
- Tend to encourage linear thinking (beware)
- Should show project-specific activities, NOT generic headings
The Non-Linear Nature of Projects
- Activities feed into each other non-linearly
- Literature review: starts at beginning, continues throughout
- Client/user interactions may open new areas of reading
- New publications may emerge during the project
- Plan must account for iteration and back-tracking
Template Sections (Assessment 2 template)
The official template includes marker comment boxes for:
- Title and Abstract (with word count) → /10
- Aims (up to 150 words), Objectives (up to 300 words), Justification (up to 150 words) → /20 combined
- Literature Review (max 2000 words, with word count) → /20
- Research Design/Methodology (up to 500 words) → /25
- Work Plan (insert as picture or appendix) → /10
- References → /10
- General English/Logic → /5
- Overall → /100
Literature Review Guide
Primary vs Secondary Research
- Primary data: That which you gather yourself (not in prior existence)
- Secondary data: That which already exists, usually in the literature
- Secondary data provides material for the literature review
- You must collect, evaluate critically, and discuss secondary data
Purpose of Literature Review
- Your work is only meaningful/significant in relation to existing body of knowledge
- Must consider how your work and results are similar/different to others' work
- Increases personal understanding of subject domain
- Provides academic underpinning for the project
- Helps structure the project by describing and analysing what others have said
- Starts wide/general then focuses on specifics
- Defines terms and concepts
- Allows you to build on others' work ("standing on the shoulders of giants" - Isaac Newton)
- Provides basis against which to compare your results
- Provides clues/guidance for primary research
What a Literature Review is NOT
- NOT a "dead list with annotated comments about texts" (Wisker, 2008)
- NOT summarising everything you have read
- NOT a list of sources with brief summaries (that's a precis)
- NOT a case of finding articles, reading them, and reporting them
- NOT something you do once at the start and then move on
- NOT an essay about your topic
What a Good Literature Review IS
- "An ongoing dialogue with the experts, theories and theorists underpinning your research" (Wisker, 2008)
- A critical analysis of what already exists
- YOUR treatment and analysis of the literature
- Organised into themes, debates, arguments, "schools of thought"
- YOUR "voice" coming through - you demonstrate control of the literature
- Evidence that you can build up and explain the direction you are taking
Structure and Organisation
How to Structure
- Use headings and sub-headings to structure and signpost direction
- Cluster parts under headings relating to different themes/issues found in literature
- Tell your own "story" - the version important to you and your project
- Do NOT provide a linear list explaining each text
- Do NOT provide a list of quotes with connecting sentences
Recommended Flow
- Part 1: What is the past? (Current state, history, existing approaches)
- Part 2: Where are you inclined to go from here? (Direction for your work)
Using Tables
- Very useful for summarising/comparing/contrasting others' work
- Hard work to construct from narrative but:
- Saves space
- Makes work "tighter" and more interesting
- Encourages you to consider/present work in your own way
Using Diagrams
- Welcome break in narrative
- Excellent for explaining complex relationships
- Can summarise processes/approaches effectively
Summaries
- Must summarise at appropriate intervals (end of sections, end of chapters)
- Tables are useful for providing summaries
- Use last sentence to point to direction of next section
- Do NOT leave the reader to draw their own conclusions
Finding Literature
Sources
- Journal articles (preferred - peer-reviewed, rigorous)
- Books (not always peer-reviewed)
- Conference papers
- Online databases
- SAGE Research Methods repository
- Web sources (use critically - anything can be published online)
Journal Quality Indicators
- Impact factor: a value above 1 is a good sign; the absence of any impact factor is suspicious
- Use author's name, institution, and journal name as quality indicators
- Use title, keywords, abstract as guides
- Skim read: abstract, introduction, conclusions first
Search Techniques
Backward Searching:
- Find one article → use its reference list for further articles
- All will be older than the original article
Forward Searching:
- Use citation indexes (ACM, Emerald, etc.)
- Track who has cited a specific paper
- Google Scholar makes this easy
- Online journals offer "related documents" features
Filtering Process
- Work through the literature systematically
- Read abstract and conclusion first
- If interesting, read the full paper
- You are responsible for filtering what is appropriate
- You may read much more than you eventually refer to - this is normal
Critical Evaluation
- Be critical in reading and assessing articles
- Journal articles usually safe due to refereeing process
- Web content needs extra scrutiny
- Do NOT manipulate the story to fit your purpose
- The reader must feel you offer a good, solid, thorough argument
- The literature cannot speak for you - YOU analyse and present
Dynamic Nature
- Literature review is time-sensitive (represents current situation at submission)
- May be months after initial literature search
- Keep your eye on literature for new publications
- Fast-moving technology makes this challenging
- Must be up-to-date
Practical Tips
- Run literature review through Turnitin to check for plagiarism "slips"
- Literature review is where plagiarism most commonly occurs
- Cut out material not directly relevant - even if it cost time to read
- Writing is a learning experience - not everything needs to go in the final report
- Use your plan to record sources: add authors, dates, quotes to each activity/bubble
- Treat each plan element in isolation to pull together ideas - this creates section drafts
Assessment Criteria (for Written Specification)
The literature review is worth 20% of Assessment 2 (20 marks):
| Level | Description |
|---|---|
| Excellent (14-20) | Well presented, critically evaluated, correctly referenced, clear underpinning of proposed study |
| Good (10-13) | Relevant literature, mostly correct refs, weak critical evaluation (descriptive), some link to study |
| Insufficient (6-9) | Some relevant discussion, weak/outdated/incorrect refs, descriptive not critical, "dead list" |
| Poor (0-5) | Little/no discussion, inappropriate/fabricated/URL-only refs |
Key Reading
- Oates (2006): Chapter 6 - Reviewing the Literature
- Wisker (2009): Chapter 7 - Carrying out a Literature Review
- Cornford and Smithson (2006): Chapter 6 - Using Research Literature
- Creswell (2014): Chapter 2 - Review of the Literature
Qualitative Research Methods
Overview
Qualitative research uses a subjective worldview. Methods include phenomenology, ethnography, action research. Associated ideas: anti-positivism, constructivism, interpretivism. Seeks to understand meaning, context, and human experience rather than measure and predict.
Three Research Paradigms (Fossey et al.)
1. Empirico-Analytical (Positivist)
- Objective reality exists independently
- Quantitative methods predominate
- Seeks to explain, predict, control
2. Interpretive
- Reality is socially constructed
- Understanding through meaning and interpretation
- Qualitative methods predominate
3. Critical
- Reality shaped by social, political, cultural, economic factors
- Research as instrument of change
Action Research (AR)
Origins
- Kurt Lewin, MIT, USA (1944 onwards) - studied group dynamics and organisational development
- Tavistock Institute, London (1946 onwards) - group and organisational behaviour
Definition
"AR refers to the conjunction of three elements: research, action and participation. Unless all three elements are present, the process cannot be called AR." (Greenwood and Levin, 1998, p6)
Core Characteristics
- Participatory, cyclic approach
- Collaboration between insiders and outsiders to solve problems
- All participants experience the learning cycle:
- Having ideas
- Putting them into practice
- Learning about their value/usefulness
- Changing/adjusting them
- Reapplying them
- Reflecting back → having new ideas
AR Challenges to Positivist Empiricism
- Hypotheses and repeatability: Can any two human situations be the same?
- Measurement: Assumption that human action/thought can be measured quantifiably
- Observer/observed roles: Observer neutrality? One true reality?
- Role of history: Organisations are products of their past; each study is unique
- Value-neutrality: "Methods embody particular visions of the world" (Mumford & MacDonald, 1989)
- Language and models: Tendency to rely on models to describe what "is"
Susman and Evered's AR Cycle
Cyclic process: Diagnosing → Action Planning → Action Taking → Evaluating → Specifying Learning → (repeat)
Checkland's FMA Model of Research
A model to explain the process of action research:
- F = Framework of Ideas: Collection of ideas/beliefs the researcher begins with (changes over time through experience and reflection)
- M = Methodology: The way to apply the framework of ideas (e.g., Soft Systems Methodology)
- A = Area of Concern: The real-world situation/problem to which ideas are applied
AR Process (Checkland & Holwell, 1998):
1. Enter the problem situation (A)
2. Establish roles
3. Declare M and F
4. Take part in the change process
5. Exit
6. Reflect on the experience and record learning in relation to F, M, and A
7. Rethink steps 2-4 and repeat
Reflection covers three areas:
- A: What was learned about the real-world situation? What changes? Would you do it the same?
- M: How well did the approach work? Could it be improved?
- F: Were you consistent with theoretical foundations? How have your ideas changed?
Difficulties of AR
- (Relatively) easy to do action alone (consultancy) or research alone (ivory tower)
- Hard to do both simultaneously
- Difficult to recognise, control, record, and use reflection
- Implementing theory is challenging - rich literature on what needs doing, less on how
- Writing up is difficult - how to describe an iterative process linearly?
- This is why few good practical AR stories exist in literature
Hermeneutics
- Theory and methodology seeking to explain interpretation and meaning
- Originally for study of biblical texts
- Offers useful way of considering the process of interpretation
- Important element in any qualitative research approach
Phenomenology
- Way of thinking about how we make sense of the world through personal experience
- Focuses on lived experience and subjective perception
- Contrasts with hermeneutics (interpretation of texts vs lived experience)
Ethnography
- Study of people and cultures through immersion
- Researcher becomes participant-observer
- Extended engagement in the field
- Listed as a qualitative methodology tool alongside phenomenology and action research
Appreciative Inquiry Method (AIM)
Dr Daune West's specialty - interpretive systems-based approach to knowledge elicitation.
Origin
- Developed in late 1980s/early 1990s to address problems in expert systems development
- Traditional Knowledge Engineering (KE) could elicit formal rules but not tacit knowledge
- AIM conceived to facilitate communication between investigator and knowledge expert
- Uses SSM (Soft Systems Methodology) models
Three Phases
Phase I: Map and Discussion
- Expert creates a "Systems Map" to structure the domain
- Area of focus identified and scoped
Phase II: CATWOE Questions, Root Definition, Conceptual Model
- Questions from SSM modelling structure the exploration
- Different parts of the map modelled systemically
- Root Definitions and Conceptual Models created
Phase III: Discussion and Agenda
- Using systemic model as agenda for further interview
- Models facilitate deeper discussion
Key Principles
- Expert offers personal description of expertise without researcher bias
- Subject/domain independent
- Encourages expert to form knowledge in their own language
- Not technology-driven or method-driven
- Models should reflect only the expert's understanding
- AIM is NOT the same as SSM (no intention to seek/enable change)
Published Studies
- West & Thomas (2005): Application in RCVS (voluntary services) - wireless technology
- West & Braganca (2012): Classical Dressage knowledge guardian study
Qualitative Data Analysis Techniques (ASCE paper - Spearing et al.)
1. Deductive Content Analysis
- Uses predefined coding scheme from existing theory/framework
- Quantifies what data communicates
- Useful for answering focused "how" questions
- Strength: Links to existing framework; allows quantification
- Weakness: May miss emergent ideas; may introduce framework bias
- Validated through intercoder reliability (Cohen's kappa, Krippendorff's alpha, Mezzich's kappa)
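Intercoder reliability statistics such as Cohen's kappa can be computed directly from two coders' label sequences. A minimal Python sketch (the coding categories and segment labels here are hypothetical, not from the module materials): kappa corrects observed agreement for the agreement the coders would reach by chance given their marginal label frequencies.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labelling the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of items both coders labelled identically
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum((freq_a[lbl] / n) * (freq_b[lbl] / n) for lbl in freq_a)
    return (observed - expected) / (1 - expected)

# Two coders applying a predefined scheme to ten interview segments
a = ["risk", "risk", "cost", "cost", "risk", "cost", "risk", "risk", "cost", "risk"]
b = ["risk", "cost", "cost", "cost", "risk", "cost", "risk", "risk", "risk", "risk"]
print(round(cohens_kappa(a, b), 3))  # → 0.583
```

Values near 1 indicate strong agreement beyond chance; values near 0 indicate agreement no better than chance.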
2. Hybrid Content Analysis (Deductive + Inductive)
- Uses deductive framework but also generates new/emergent themes
- Data-driven inductive coding alongside deductive codes
- Strength: Finds new insights within existing framework; allows quantification
- Weakness: Framework bias still possible; doesn't uncover relationships; more time-intensive
3. Constant Comparative Analysis (Grounded Theory)
- Systematic inductive approach
- Open coding → focused coding → core categories → axial coding
- Aims for theory development
- No predefined framework
- Strength: Bottom-up; discovers relationships between categories; can develop new theories
- Weakness: Not driven by quantification; more time-intensive; loses detail in collapsing process
When to Use Each
- Deductive: Goal is to quantify and compare with existing framework
- Hybrid: Reveal new insights within existing framework; quantify emergent themes
- Constant Comparative: Understand large-scale emergent challenges and relationships; develop new theories
Qualitative Quality Criteria (Fossey et al.)
Methodological Rigour
- Congruence - logical consistency
- Responsiveness - sensitivity to context
- Appropriateness - suitable methods
- Adequacy - sufficient depth/breadth
- Transparency - clear process documentation
Interpretive Rigour
- Authenticity - faithful to participants
- Coherence - logical data-to-conclusions narrative
- Reciprocity - mutual researcher-participant relationship
- Typicality - representativeness
- Permeability - researcher's open positioning
Mixed Methods (Borrego et al.)
Four Mixed-Methods Designs
- Triangulation: Qualitative and quantitative data collected simultaneously; results compared
- Embedded: One method embedded within a larger study using the other method
- Explanatory: Quantitative first, then qualitative to explain results
- Exploratory: Qualitative first, then quantitative to test/generalise findings
Key Principle
Choice of method should be driven by research question, not researcher preference or method popularity.
Quantitative & Mixed Methods
Quantitative Research Overview
Quantitative research adopts an objective worldview. It uses scientific, statistical, and logical tools. Associated ideas: positivism, realism, empiricism, nomothetic approaches, determinism.
Quantitative Methods (from Borrego et al.)
Descriptive Statistics
- Summarise and describe data characteristics
- Measures of central tendency (mean, median, mode)
- Measures of spread (standard deviation, variance, range)
- Frequency distributions
- Used to characterise the sample and present basic findings
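The measures above are available in Python's standard `statistics` module; a short sketch with a hypothetical pilot-study sample (the data are invented for illustration):

```python
import statistics as stats

# Hypothetical sample: response times (ms) from a small pilot study
sample = [120, 135, 128, 150, 142, 135, 160, 125, 138, 147]

mean = stats.mean(sample)        # central tendency
median = stats.median(sample)
mode = stats.mode(sample)
stdev = stats.stdev(sample)      # sample standard deviation (spread)
rng = max(sample) - min(sample)  # range

print(f"mean={mean}, median={median}, mode={mode}, sd={stdev:.2f}, range={rng}")
```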
Hypothesis Testing
- Formulation of null and alternative hypotheses
- Statistical tests to determine significance
- p-values and confidence intervals
- Types of errors (Type I, Type II)
Correlational Analysis
- Examining relationships between variables
- Correlation coefficients
- Does NOT imply causation
- Useful for identifying patterns and associations
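A correlation coefficient can be computed by hand to see what it measures. A minimal Pearson's r sketch (the revision-hours and exam-score data are hypothetical): a value near +1 indicates a strong positive association, but, as above, says nothing about causation.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Covariance numerator and the two standard-deviation denominators
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: hours of revision vs exam score
hours = [2, 4, 6, 8, 10]
scores = [50, 55, 65, 70, 80]
print(round(pearson_r(hours, scores), 3))  # strong positive association
```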
Theory in Quantitative Research
- Often begins with existing theory
- Hypotheses derived from theory
- Data collected to test hypotheses
- Results used to support, refine, or reject theory
Quantitative Quality Criteria
| Criterion | Description |
|---|---|
| Validity | Does it measure what it claims to measure? |
| Generalizability | Can results apply to wider population? |
| Reliability | Would same results occur if repeated? |
| Objectivity | Is researcher bias minimised? |
Variables in Experimental Research
From the example specification (ML network attacks project):
- Independent variables: The manipulated/input variables (e.g., network data)
- Dependent variables: The measured outcome (e.g., accuracy of algorithm)
- Control variables: Kept constant (e.g., algorithm implementations themselves)
Data Types
- Ratio/Interval: Mathematical operations possible; continuous scales
- Nominal: Categorical labels (e.g., attack/legitimate, labelled data)
- Ordinal: Ranked categories
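The example specification's evaluation metrics (Classification Accuracy and a Confusion Matrix over nominal attack/legitimate labels) can be sketched in a few lines of Python. The labels and capture data below are hypothetical, not taken from the example project:

```python
def confusion_matrix(actual, predicted, labels=("attack", "legitimate")):
    """Counts of (actual, predicted) pairs over nominal labels."""
    matrix = {(a, p): 0 for a in labels for p in labels}
    for a, p in zip(actual, predicted):
        matrix[(a, p)] += 1
    return matrix

def classification_accuracy(actual, predicted):
    """Proportion of predictions matching the labelled ground truth."""
    return sum(a == p for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical labelled capture: ground truth vs an algorithm's output
actual    = ["attack", "attack", "legitimate", "legitimate", "attack", "legitimate"]
predicted = ["attack", "legitimate", "legitimate", "legitimate", "attack", "attack"]

print(classification_accuracy(actual, predicted))  # 4 of 6 correct
print(confusion_matrix(actual, predicted))
```

The confusion matrix shows not just how often the algorithm was right, but which kinds of errors it made (attacks missed vs legitimate traffic flagged), which accuracy alone hides.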
Mixed Methods Research (from Borrego et al.)
Definition
Research that combines qualitative and quantitative approaches in a single study to provide a more comprehensive understanding.
Four Mixed-Methods Designs
1. Triangulation Design
- Qualitative and quantitative data collected simultaneously
- Results compared and contrasted
- Purpose: Convergence, corroboration, cross-validation
- Equal priority to both methods
2. Embedded Design
- One method embedded within a larger study using the other method
- One method plays a supplemental role
- Purpose: Enhance or support the primary method
- Example: Qualitative interviews embedded within a large quantitative survey
3. Explanatory Design
- Quantitative first, then qualitative
- Qualitative phase explains or elaborates on quantitative results
- Purpose: Explain statistical results with participant perspectives
- Sequential: Phase 2 builds on Phase 1 findings
4. Exploratory Design
- Qualitative first, then quantitative
- Qualitative phase explores, then quantitative tests/generalises
- Purpose: Develop instruments, identify variables, generate hypotheses
- Sequential: Phase 2 tests Phase 1 findings at scale
When to Choose Mixed Methods
- When a single approach is insufficient to answer the research question
- When qualitative data can explain quantitative findings (or vice versa)
- When triangulation strengthens validity
- When research question has both exploratory and confirmatory components
Comparison of Evaluation Criteria Across Paradigms
| Quantitative | Qualitative | Purpose |
|---|---|---|
| Validity | Credibility | Does it measure/capture what it claims? |
| Generalizability | Transferability | Can findings apply elsewhere? |
| Reliability | Dependability | Are findings consistent? |
| Objectivity | Reflexivity | Is researcher influence accounted for? |
Statistics and Graphical Displays
From Week 1 lecture - a warning:
- Statistics and graphical displays can be manipulated
- Researchers must be honest in data presentation
- Critical evaluation of statistical claims is essential
- Understanding limitations of statistical methods is important
Key Principle
The choice of quantitative, qualitative, or mixed methods should be driven by the research question, not by researcher preference or method popularity. The method must "fit" the research proposal.
Academic Writing & Referencing
Harvard Referencing (UWS Standard)
- UWS uses Cite Them Right (CTR) Harvard style
- Reference guide: https://uws-uk.libguides.com/referencing
- All assessments must use Harvard referencing consistently
In-Text Citations
- Format: (Author, Year) or Author (Year)
- Multiple authors: (Author1, Author2 and Author3, Year)
- Direct quotes: (Author, Year, p. XX)
Reference List
- Alphabetical by author surname
- All cited sources must appear in reference list
- All reference list entries must be cited in text
- Mismatch between citations and reference list is penalised
Common Reference Formats (from example spec)
Journal article:
Author, Initials. (Year) 'Title of article', Journal Name, Volume(Issue), pp. X-X. doi: XX
Book:
Author, Initials. (Year) Title of Book. Edition. Place: Publisher.
Website:
Title (Year/no date). Available at: URL (Accessed: Day Month Year).
What NOT to Do
- Do NOT use only URLs as references
- Do NOT fabricate references or cite non-existent sources
- Do NOT present references inconsistently
- Do NOT omit the reference list
- Do NOT have figures/diagrams without headings or source attribution
Writing Standards for RDM Assessments
General Principles
- Formal academic English
- Clear, well-structured arguments
- Logical flow between sections
- Professional presentation
- Avoid spelling mistakes and formatting issues
Module-Specific Lecturer Preferences (Dr Daune West)
- Do NOT use the word "leverage" - lecturer explicitly dislikes it
- Avoid "AI voice" in writing - must sound like genuine student work
- Generative AI: Type 1 Restricted only (spell check, grammar check)
- Do not write about the topic - write about the research DESIGN
- Build an argument, not a library - critical analysis, not description
- "Dead list" is unacceptable - do not summarise sources linearly
The "Blueprint" Test
The specification should be like an architect's drawing - detailed enough that someone else could pick it up and implement the project. This is the key quality criterion for the overall specification.
Continuity and Connectivity
- All sections must connect logically
- Title drives problem; problem drives aims; aims drive objectives; objectives drive methodology; methodology drives plan
- Cross-referencing between sections is expected
- Evaluation should "close the loop" back to the research question
Word Counts
Assessment 2 specifies word counts for several sections:
- Abstract: maximum 300 words (state word count)
- Literature Review: no more than 2000 words (state word count)
- Methodology: no more than 1000 words (state word count)
- Total specification: 2500-3500 words
Figures and Diagrams
- Must be labelled with captions
- Must be referenced if taken from another source
- Must be discussed in the text
- Very useful for breaking up narrative and explaining relationships
Plagiarism
- Turnitin used for plagiarism checking
- All work subject to checking at markers' discretion
- Plagiarism and falsification of results = serious academic misconduct
- Literature review is most common place plagiarism occurs
- Tip: Run lit review through Turnitin before submission
- Excessive use of quotations is not acceptable
Assessment Submission Requirements
Assessment 1 (Verbal Presentation)
- Format: .mp4 file
- Via Turnitin on Aula
- Within file size limits
- Clear audio
Assessment 2 (Written Specification)
- Format: .docx (NOT PDF) - to allow marker annotation
- Use the provided template
- Via Turnitin on Aula
- Include word counts where specified
- Student Declaration as first page
Key Textbooks for Writing Skills
- Oates, B.J. (2006) Researching Information Systems and Computing, Sage
- Wisker, G. (2008) The Postgraduate Research Handbook, Palgrave Macmillan
- Wisker, G. (2009) The Undergraduate Research Handbook, Palgrave Macmillan
- Cornford, T. and Smithson, S. (2006) Project Research in Information Systems, Palgrave Macmillan
- Creswell (2014) - Chapter 2: Review of the Literature
- Thiel (2014) - Chapter 3: Developing a Research Plan
Example Specifications
Example 1: AR Mobile Application (PDF - "good specification")
Title: "Performance comparison of different image matching methods for an AR mobile application"
What Makes It Good
- Clear title stating topic AND process (comparison) AND context (AR mobile app)
- Well-structured: Abstract, Aims, Objectives, Justification, Literature Review, Methodology, Work Plan, References
- Demonstrates proper Gantt chart usage
- Shows how all sections connect logically
Example 2: ML Network Security (docx - "good project spec")
Title: "Assessing the effectiveness of Unsupervised Machine Learning to identify wireless network attacks by comparing Unsupervised Machine Learning and Supervised Machine Learning"
Structural Analysis
Abstract (good features):
- States the problem (zero-day attacks)
- Identifies the gap (unsupervised ML for anomaly detection)
- Outlines the approach (comparative study)
- Mentions data collection method (home network capture)
- States evaluation approach (accuracy comparison)
Aims (good features):
- Measure effectiveness of Unsupervised ML in detecting network attacks
- Build infrastructure for data collection
- Understand data preparation process
- Understand result interpretation and comparison
Objectives (good features):
- Specific evaluation metrics stated (Classification Accuracy, Confusion Matrix)
- Infrastructure details (specialised router, attack machine)
- Clear activities (run attacks, process data, compare results)
Literature Review (good features):
- Structured around questions (What is ML? What types? Which algorithms? How employed? Existing comparisons?)
- Author's "journey" through the literature is visible
- Narrows focus progressively (all ML → supervised/unsupervised → specific algorithms)
- Shows decision-making process (why K-Means over Neural Networks)
- Acknowledges limitations and gaps
- Uses comparison tables
Literature Review (lessons learned):
- Started too broad (all neural networks) and had to refocus
- Acknowledged reliance on websites over papers for concise information
- Identified gap: no studies comparing SVM with K-Means for network attacks
- This gap became foundation for further research
Methodology (good features):
- States research will replicate/extend previous work
- Declares objectivity: no bias toward either algorithm
- Identifies as experimental research
- Clear variable identification (independent, dependent, control)
- Data type classification (ratio/interval, nominal)
- Two-phase data collection plan
- Explicit "no ethical considerations" with justification
- Analysis approach stated (Classification Accuracy, Confusion Matrix)
References (good features):
- Harvard format consistently used
- Mix of journal articles, conference proceedings, and web sources
- Accessed dates included for web sources
Example 3: M-Learning Research Methods (Class Exercise)
Title: "The development and evaluation of a prototype m-learning multimedia package to support the teaching/learning of Research Methods"
Why This Title Works
- States topics involved (m-learning, multimedia, Research Methods)
- Outlines the task (development AND evaluation)
- "Prototype" is important - clearly not commercial standard, emphasises it's a test application
- Scopes expectations appropriately
Specification Development Process (from class notes)
This example demonstrates the questioning process for each section:
Problem: Research Methods is complex; students find it difficult; sources from many areas; practical skills hard to learn. Two sides: teaching AND learning = 2 different "customers".
Justification: Teaching needs practical examples; multimedia could help; m-learning is growing in practical application.
Aim: Explore to what extent teaching might be supported using m-learning platforms.
Objectives (developed through questions):
- What sort of multimedia would be useful? (research needed)
- What m-learning platforms are available? (comparison)
- How successful has m-learning been? (literature review)
- What constitutes learning/teaching? (theoretical models)
- What are issues surrounding teaching research methods? (context)
- How to evaluate the platform? (evaluation criteria)
Key Insight: To develop objectives, think about what you would NEED TO DO to answer these questions. The set of tasks = your objectives.
Resources: People, technology, knowledge, skills, costs
Literature Review Questions:
- What exists "out there" relevant to my project?
- Need to set the scene, avoid re-inventing the wheel
- Look at journals for recent research (not just books)
- Need to bring together different areas (m-learning + research methods)
Primary Research:
- Must produce something (prototype) - stated in specification
- Design using good design principles (not "out of my head")
- Evaluate: used by teachers AND learners; how many; how long; how to record use
- Evaluation criteria: found in existing literature?
- Must be rigorous, believable, justifiable, explicit, open to inspection
Example 4: Engineering Mortar Characterisation (Hughes version)
Title: "Optical Petrographic Characterisation of 16th-Century Mortars from Gylen Castle, Isle of Kerrera, Argyll and Bute, Scotland"
Why This Title Works
- States methodology (optical petrographic characterisation)
- States subject (16th-century mortars)
- States location (specific castle)
- BUT: offers no clue to potential results or reasons for research
Lessons
- Even a clear title can be improved by hinting at purpose
- Resources were clearly available (existing samples, microscopes)
- Time estimation was realistic (2 weeks for study)
- Literature review was feasible due to researcher's existing knowledge
- Included publication goal (conference paper submission)
Common Patterns in Successful Specifications
Title Formula
[Action verb] + [specific topic/technology] + [context/application] + [optional: method/approach]
Examples:
- "Assessing the effectiveness of [X] by comparing [X] and [Y]"
- "Performance comparison of [methods] for [application]"
- "Development and evaluation of [prototype] to support [purpose]"
The Questioning Process
For each specification section, ask:
- What do I need to know?
- Where will I find this information?
- What decisions do I need to make?
- What are the implications of each decision?
- How does this connect to the other sections?
Common Pitfalls to Avoid
- Topic vs research title - "ICT and Education" tells nothing about the research
- Input data vs output data - Describing datasets you USE, not data you GENERATE
- Generic plans - Using Google Images Gantt charts unrelated to YOUR project
- Dead list literature - Summarising papers one by one without critical analysis
- Disconnected sections - Aims that don't connect to methodology or plan
- Topic essay - Writing about the technology instead of the research design
- No evaluation criteria - Failing to explain how success will be measured
Dressage Simulator Exercise (Research Question Formulation)
The class exercise using a dressage simulator (racewood.com) demonstrates how to generate research questions from different angles:
- Software mimicry: How could software mimic a horse's unpredictability?
- Immersion: Could surround sound/vision increase immersion? How important is immersion for technology-based learning?
- Relationship simulation: Could software incorporate the horse reacting to rider's voice?
- Machine learning: Could software learn - start as untrained horse, rider trains it?
- Effectiveness evaluation: Are simulators actually useful for coaching?
- Cross-domain comparison: What other sports use simulators? What benefits?
Key lesson: A single topic can generate many different research questions depending on perspective (software engineer, sports scientist, game designer, educator).
MSc Project Topic Candidates
Context and Constraints
Stakeholders
- Dr Daune West — RDM module coordinator. Specialist in Appreciative Inquiry Method (AIM) and Soft Systems Methodology (SSM). Wants the project to incorporate AIM and ideally produce a GenAI application.
- Professor Jacob — MSc Project supervisor. Interested in deep analysis of the cc-mpc-extended-rlm project (extended RLM/MCP knowledge base agent, based on MIT's RLM research).
- Programme: MSc Artificial Intelligence at UWS — the topic must sit firmly within AI/ML.
Practical Boundaries
- Timeline: 15 weeks full-time (600 hours total)
- Existing asset: A working MCP-based knowledge base system (cc-mpc-extended-rlm) with self-learning protocol, already deployed and tested
- Methodology: Must be justified within recognised research paradigms (action research, mixed methods, etc.)
Student's Original Vision (Reference)
A multi-agent deliberation system where:
- A subject knowledge base is built (zero-confabulation, source-verified)
- User queries spawn SSM Root Definitions representing different perspectives
- Each perspective becomes an autonomous agent with its own knowledge base
- Agents engage in structured discussion until consensus or voting
- Meeting notes and alternative viewpoints are preserved
- Optional: real-time meeting assistant with voice-to-text integration
This vision is intellectually rich but exceeds MSc scope — it is better suited as a PhD programme or post-MSc research direction. The topics below extract feasible subsets of this vision.
---
Topic 1: Multi-Perspective Knowledge Elicitation Using AI-Augmented Appreciative Inquiry
Description: Design and evaluate a system where an LLM, backed by the extended-RLM knowledge base, guides users through AIM's three phases (Systems Map, CATWOE/Root Definitions, Conceptual Models). The AI generates candidate Root Definitions from a domain knowledge base and presents them for expert validation, reducing cognitive load on the human expert.
Justification: AIM was developed in the late 1980s to address limitations in expert systems; modern LLMs offer a fundamentally new capability for this process. No published work applies GenAI to AIM-structured knowledge elicitation.
| Dimension | Rating |
|---|---|
| Prof. Daune (AIM/SSM) | Strong |
| Prof. Jacob (RLM) | Medium |
| Novelty | High |
| Feasibility (15 weeks) | High |
| Implementation effort | Medium |
---
Topic 2: Evaluating Retrieval-Augmented Long-Term Memory for LLM Agents — An Extended RLM Architecture
Description: A formal, rigorous evaluation of the cc-mpc-extended-rlm system. Define metrics (retrieval accuracy, knowledge retention over time, token efficiency, hallucination reduction), benchmark against baseline approaches (vanilla RAG, fine-tuning, standard context stuffing), and publish findings. AIM is used as the methodology for eliciting expert evaluation criteria from domain specialists.
Justification: Memory architectures for LLM agents are a rapidly growing research area. The extended-RLM system is already implemented but lacks formal academic evaluation against established baselines.
| Dimension | Rating |
|---|---|
| Prof. Daune (AIM/SSM) | Medium |
| Prof. Jacob (RLM) | Strong |
| Novelty | Medium |
| Feasibility (15 weeks) | Very High |
| Implementation effort | Light |
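The retrieval-accuracy metric named in Topic 2 could start from a standard precision/recall-at-k computation against hand-labelled gold relevance sets. This is an illustrative sketch, not part of the existing cc-mpc-extended-rlm system; the function name and chunk IDs are assumptions:

```python
def retrieval_metrics(retrieved, relevant, k=5):
    """Precision@k and recall for one query.

    `retrieved` is the ranked list of chunk IDs the memory system
    returned; `relevant` is the hand-labelled gold set for the query.
    """
    top_k = retrieved[:k]
    hits = sum(1 for doc_id in top_k if doc_id in relevant)
    precision = hits / k if k else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Two of the top five retrieved chunks are in the gold set of three:
p, r = retrieval_metrics(["a", "b", "c", "d", "e"], {"a", "c", "x"}, k=5)
# p = 0.4, r ≈ 0.67
```

Averaging these per-query scores across a query set gives one of the baseline comparisons (vanilla RAG vs. extended RLM) the evaluation would report.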
---
Topic 3: Zero-Hallucination Knowledge Bases — A Self-Correcting Architecture for Domain-Specific LLM Applications
Description: Investigate and implement mechanisms for building knowledge bases that actively resist LLM confabulation. Extend the RLM system with source verification, confidence scoring, and automatic fact-checking against stored evidence. Evaluate hallucination rates with and without these mechanisms across multiple domains.
Justification: LLM hallucination is one of the most significant barriers to enterprise AI adoption. A system that demonstrably reduces hallucination through architectural design (rather than prompt engineering alone) addresses a critical gap. SSM-based analysis can frame what "truth" and "accuracy" mean across different stakeholder perspectives.
| Dimension | Rating |
|---|---|
| Prof. Daune (AIM/SSM) | Medium |
| Prof. Jacob (RLM) | Strong |
| Novelty | High |
| Feasibility (15 weeks) | High |
| Implementation effort | Medium |
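One possible shape for the source-verification mechanism in Topic 3, sketched under the assumption that each stored claim carries a verbatim evidence excerpt plus a digest taken at ingestion time. All names here are hypothetical, not the actual RLM storage schema:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class KBEntry:
    claim: str
    source_excerpt: str   # verbatim evidence the claim was derived from
    source_digest: str    # fingerprint of the excerpt at ingestion time
    confidence: float     # 0.0-1.0, assigned by the verification pipeline

def ingest(claim: str, source_excerpt: str, confidence: float) -> KBEntry:
    """Store a claim only together with its evidence and a digest of it."""
    digest = hashlib.sha256(source_excerpt.encode()).hexdigest()
    return KBEntry(claim, source_excerpt, digest, confidence)

def verify(entry: KBEntry) -> bool:
    """Re-hash the stored excerpt; a mismatch means the evidence was
    altered after ingestion and the claim must be re-checked."""
    return hashlib.sha256(entry.source_excerpt.encode()).hexdigest() == entry.source_digest
```

Claims whose evidence fails verification, or whose confidence falls below a threshold, would be excluded from the context handed to the LLM, which is the architectural (rather than prompt-level) lever the topic proposes to evaluate.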
---
Topic 4: AI-Facilitated Soft Systems Methodology — Automating Rich Picture and Root Definition Generation
Description: Build an application that takes a problem statement, uses an LLM with a domain knowledge base to generate candidate Rich Pictures (structured representations) and CATWOE-based Root Definitions, then facilitates iterative dialogue with the user to refine them. Evaluate whether AI-generated SSM artefacts are comparable in quality to those produced by trained SSM practitioners.
Justification: SSM is widely taught but rarely automated. No published work applies modern LLMs to SSM artefact generation. This would be a genuinely original contribution bridging AI and systems thinking.
| Dimension | Rating |
|---|---|
| Prof. Daune (AIM/SSM) | Strong |
| Prof. Jacob (RLM) | Medium |
| Novelty | Very High |
| Feasibility (15 weeks) | High |
| Implementation effort | Medium |
---
Topic 5: A Dual-Agent Deliberation Framework for Balanced Decision Support
Description: Implement a two-agent deliberation architecture: one agent argues for a proposition, one argues against, both drawing from a shared knowledge base. The system produces a structured report with arguments, counter-arguments, evidence, and a confidence-weighted conclusion. SSM Root Definitions frame the two perspectives.
Justification: This captures the core intellectual contribution of the student's grand vision (structured AI deliberation) in a feasible scope. Multi-agent debate systems are an emerging research area, but grounding them in SSM-defined perspectives is novel.
| Dimension | Rating |
|---|---|
| Prof. Daune (AIM/SSM) | Medium |
| Prof. Jacob (RLM) | Medium |
| Novelty | High |
| Feasibility (15 weeks) | Moderate–High |
| Implementation effort | Heavy |
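A minimal skeleton of the two-agent loop in Topic 5, assuming each agent is a callable that returns an argument string and a self-reported confidence. In the real system, LLM calls grounded in the shared knowledge base and the SSM Root Definitions would sit behind this interface; everything shown is an assumption about one feasible design:

```python
from typing import Callable, List, Tuple

# An agent maps (topic, transcript so far) -> (argument, confidence in [0, 1]).
Agent = Callable[[str, List[str]], Tuple[str, float]]

def deliberate(topic: str, pro: Agent, con: Agent, rounds: int = 3) -> dict:
    """Alternate pro/con turns, keep the full transcript, and draw a
    confidence-weighted conclusion from the final round."""
    transcript: List[str] = []
    pro_conf = con_conf = 0.0
    for _ in range(rounds):
        for side, agent in (("PRO", pro), ("CON", con)):
            argument, confidence = agent(topic, transcript)
            transcript.append(f"{side}: {argument}")
            if side == "PRO":
                pro_conf = confidence
            else:
                con_conf = confidence
    if pro_conf > con_conf:
        verdict = "for"
    elif con_conf > pro_conf:
        verdict = "against"
    else:
        verdict = "undecided"
    return {"transcript": transcript, "verdict": verdict,
            "confidence": {"pro": pro_conf, "con": con_conf}}
```

Preserving the transcript alongside the verdict matches the topic's requirement that arguments, counter-arguments, and evidence survive into the structured report rather than being collapsed into a single answer.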
---
Topic 6: Knowledge Base Evolution Tracking — Measuring How AI Agent Understanding Changes Over Time
Description: Using cc-mpc-extended-rlm as the experimental platform, study how a knowledge base evolves across sessions: what is added, what is modified, and what becomes obsolete. Develop metrics for "knowledge maturity" and "understanding depth." Apply these longitudinally to a real domain (e.g., this MSc project workspace as the case study). Use AIM to structure the analysis of what constitutes meaningful knowledge evolution.
Justification: LLM memory is typically studied as a static retrieval problem. Studying the dynamics of knowledge accumulation and decay in an agent's persistent memory is a novel angle with implications for long-running AI assistants.
| Dimension | Rating |
|---|---|
| Prof. Daune (AIM/SSM) | Medium |
| Prof. Jacob (RLM) | Strong |
| Novelty | High |
| Feasibility (15 weeks) | High |
| Implementation effort | Light |
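The session-to-session evolution measures in Topic 6 could start from a simple snapshot diff. This sketch assumes each snapshot is a mapping of entry ID to entry text; the representation and the churn metric are assumptions, not the actual RLM storage format:

```python
def kb_delta(before: dict, after: dict) -> dict:
    """Summarise one evolution step between two knowledge-base snapshots,
    each given as a mapping of entry ID -> entry text."""
    added = [k for k in after if k not in before]
    removed = [k for k in before if k not in after]
    modified = [k for k in after if k in before and after[k] != before[k]]
    stable = len(before) - len(removed) - len(modified)
    # Churn: fraction of the previous snapshot touched by this step.
    churn = (len(added) + len(removed) + len(modified)) / max(len(before), 1)
    return {"added": added, "removed": removed, "modified": modified,
            "stable": stable, "churn": round(churn, 3)}
```

Plotting churn over successive sessions gives a first-pass "knowledge maturity" signal: a maturing knowledge base should show churn falling as additions and corrections taper off.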
---
Topic 7: Structured Stakeholder Analysis Using GenAI — An SSM-Informed Approach to Requirements Elicitation
Description: Build a GenAI tool that, given a project or situation description, automatically identifies stakeholders, generates CATWOE analyses for each, produces Root Definitions, and highlights conflicts between stakeholder perspectives. The tool uses the RLM knowledge base architecture for domain grounding. Evaluate with real-world case studies.
Justification: Requirements elicitation is a well-established SE/IS research area. Adding GenAI to SSM-based stakeholder analysis is novel, practical, and directly useful. The tool has clear real-world applicability.
| Dimension | Rating |
|---|---|
| Prof. Daune (AIM/SSM) | Strong |
| Prof. Jacob (RLM) | Medium |
| Novelty | Medium–High |
| Feasibility (15 weeks) | High |
| Implementation effort | Medium |
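A possible data structure for the per-stakeholder output of Topic 7's tool, with a deliberately naive conflict check standing in for the richer perspective-conflict analysis the tool would need. Field names and the owner-mismatch heuristic are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class CATWOE:
    stakeholder: str
    customers: str        # C: beneficiaries or victims of the transformation
    actors: str           # A: who carries out the transformation
    transformation: str   # T: the input-to-output change the system performs
    weltanschauung: str   # W: the worldview that makes T meaningful
    owner: str            # O: who could stop the system
    environment: str      # E: constraints taken as given

def conflicting_owners(analyses: list) -> list:
    """Flag stakeholder pairs that name different system owners, one
    simple and common source of perspective conflict to surface."""
    pairs = []
    for i, a in enumerate(analyses):
        for b in analyses[i + 1:]:
            if a.owner != b.owner:
                pairs.append((a.stakeholder, b.stakeholder))
    return pairs
```

In practice the LLM would populate these fields from the problem description and knowledge base, and conflict detection would compare transformations and worldviews semantically rather than by exact owner mismatch.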
---
Topic 8: Investigating the Effectiveness of Persistent Memory Architectures in Reducing LLM Context Degradation
Description: A controlled experimental study comparing LLM performance on complex, multi-session tasks with and without persistent memory (the RLM extension). Measure quality degradation curves, factual consistency, and task completion rates across conversation lengths. Mixed methods: quantitative metrics plus qualitative analysis using AIM to elicit user perceptions of AI reliability.
Justification: Context window limitations and quality degradation over long conversations are well-known LLM weaknesses. Empirical evidence of how persistent memory architectures mitigate this would be a valuable contribution to the field.
| Dimension | Rating |
|---|---|
| Prof. Daune (AIM/SSM) | Medium |
| Prof. Jacob (RLM) | Strong |
| Novelty | Medium |
| Feasibility (15 weeks) | Very High |
| Implementation effort | Light |
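The degradation curves in Topic 8 could be summarised per experimental condition by a least-squares slope of quality score against conversation length. This helper is a sketch; the metric names and score scale are assumptions about the study design:

```python
def degradation_slope(lengths, scores):
    """Least-squares slope of task-quality score vs. conversation length;
    a more negative slope means faster context degradation."""
    n = len(lengths)
    mean_x = sum(lengths) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(lengths, scores))
    den = sum((x - mean_x) ** 2 for x in lengths)
    return num / den
```

Comparing the slope for the with-memory condition against the without-memory baseline gives a single headline number for how much the persistent memory architecture flattens the degradation curve.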
---
Topic 9: An Appreciative Inquiry-Based Framework for Collaborative Human-AI Knowledge Construction
Description: Design a framework where human domain experts and an AI agent collaboratively build a knowledge base using AIM's three phases as the structuring principle. The AI proposes, the human validates and enriches, and the system learns from corrections. Evaluate whether AIM-structured interaction produces higher-quality knowledge bases than unstructured interaction.
Justification: AIM was designed for human-to-human knowledge elicitation. Adapting it for human-to-AI collaboration is a natural and novel extension. No published work applies AIM to human-AI knowledge co-construction. This topic uniquely satisfies both supervisors at a fundamental level.
| Dimension | Rating |
|---|---|
| Prof. Daune (AIM/SSM) | Strong |
| Prof. Jacob (RLM) | Strong |
| Novelty | Very High |
| Feasibility (15 weeks) | High |
| Implementation effort | Medium |
---
Topic 10: Real-Time Meeting Intelligence — An AI Observer Using Domain Knowledge Bases for Contextual Analysis
Description: Build a system that processes meeting transcripts (text input), identifies key discussion points, maps them to a pre-built domain knowledge base, and generates contextual notes, related evidence, and suggested considerations. SSM-inspired perspective analysis flags viewpoints the participants may not have considered.
Justification: Meeting intelligence is a growing commercial sector, but existing tools focus on summarisation. Adding SSM-based perspective analysis and domain-grounded knowledge injection is a novel differentiation. Scoped to text input (not live audio) for feasibility.
| Dimension | Rating |
|---|---|
| Prof. Daune (AIM/SSM) | Medium |
| Prof. Jacob (RLM) | Medium |
| Novelty | Medium–High |
| Feasibility (15 weeks) | Moderate |
| Implementation effort | Heavy |
---
Comparative Summary
| # | Topic | Daune | Jacob | Novelty | Feasibility | Effort |
|---|---|---|---|---|---|---|
| 1 | AI-Augmented AIM | Strong | Medium | High | High | Medium |
| 2 | RLM Evaluation | Medium | Strong | Medium | Very High | Light |
| 3 | Zero-Hallucination KB | Medium | Strong | High | High | Medium |
| 4 | Automated SSM | Strong | Medium | Very High | High | Medium |
| 5 | Dual-Agent Deliberation | Medium | Medium | High | Moderate–High | Heavy |
| 6 | KB Evolution Tracking | Medium | Strong | High | High | Light |
| 7 | Stakeholder Analysis Tool | Strong | Medium | Medium–High | High | Medium |
| 8 | Context Degradation Study | Medium | Strong | Medium | Very High | Light |
| 9 | AIM for Human-AI KB | Strong | Strong | Very High | High | Medium |
| 10 | Meeting Intelligence | Medium | Medium | Medium–High | Moderate | Heavy |
Top 3 Recommendations
- Topic 9 (AIM + Human-AI Knowledge Construction) — strongest overlap between both supervisors' interests, genuinely novel, and the existing cc-mpc-extended-rlm system serves as the foundation.
- Topic 4 (Automated SSM with LLMs) — no published work applies modern LLMs to SSM artefact generation; a genuinely original contribution that Professor Daune would likely champion.
- Topic 2 (Rigorous RLM Evaluation) — the safest choice; the artefact already exists, the work is primarily evaluation, and a high-quality dissertation can be completed with confidence within 15 weeks.
---
Generated: 2026-03-08. Based on RDM module materials, cc-mpc-extended-rlm project context, and supervisor requirements.