Chapter 1: Fundamentals of Testing

1. Objectives of Testing
Testing is more than just finding defects. Its primary goals are to:

  • Provide stakeholders with information to make informed decisions.
  • Find defects and failures to enhance quality.
  • Verify that specified requirements have been fulfilled.
  • Build confidence in the quality level of the test object.

2. (★★★★★) Causality Chain: Error → Defect → Failure
This is the most fundamental concept.

  • Error (Mistake): A human action that produces an incorrect result. It's the action of making a mistake.
    • Example: A developer misunderstands a requirement and writes a > b instead of a >= b.
  • Defect (Bug, Fault): An imperfection or deficiency in a work product where it does not meet its requirements or specifications. It's the result of the error, sitting in the code or document.
    • Example: The line of code that contains a > b is the defect.
  • Failure: An event in which a component or system does not perform a required function within specified limits. It's the observable manifestation of a defect when the code is executed.
    • Example: When a user whose age equals the discount threshold (a = 20 with threshold b = 20) requests the discount, the system wrongly denies it, because the a > b defect evaluates to false where a >= b would be true. This observable incorrect outcome is a failure (see the sketch below).
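
In code, the chain might look like the following minimal sketch (the function name is_discount_eligible and the threshold value 20 are illustrative assumptions, not from the syllabus):

```python
# Requirement (illustrative): the discount applies from age 20 upward, i.e. a >= b.
DISCOUNT_AGE = 20  # assumed threshold b

def is_discount_eligible(age: int) -> bool:
    # DEFECT: the developer's ERROR (misreading ">=" as ">")
    # left this faulty comparison in the code.
    return age > DISCOUNT_AGE

# FAILURE: executing the defective code with a boundary value
# makes the wrong behavior observable.
print(is_discount_eligible(21))  # True  -> the defect stays hidden
print(is_discount_eligible(20))  # False -> expected True; this visible wrong result is the failure
```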

3. (★★★★) The 7 Testing Principles

  1. Testing shows the presence of defects, not their absence: testing can show that defects are present, but it cannot prove that there are none.
  2. Exhaustive testing is impossible: Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases.
  3. Early testing saves time and money: Finding and fixing defects early in the lifecycle is much cheaper.
  4. Defects cluster together: A small number of modules usually contain most of the defects discovered.
  5. Beware of the pesticide paradox: If the same tests are repeated over and over again, they eventually no longer find any new defects. Test cases need to be regularly reviewed and revised.
  6. Testing is context-dependent: The way you test an e-commerce site is different from how you test safety-critical aviation software.
  7. Absence-of-errors fallacy: If the system built is unusable and does not fulfill the users’ needs and expectations, then finding and fixing defects does not help.

Chapter 2: Testing Throughout the Software Development Lifecycle

1. (★★★★) Test Levels
Testing is not a single activity. It's performed at different levels.

  • Component Testing: Focuses on individual software components (modules, units). It is typically done by developers, in isolation from the rest of the system (a minimal sketch follows this list).
  • Integration Testing: Focuses on the interaction and interfaces between integrated components.
  • System Testing: Focuses on the behavior of the whole, integrated system. It evaluates both functional and non-functional requirements.
  • Acceptance Testing: Focuses on verifying the fitness for use of the system from the user's or customer's perspective.
    • User Acceptance Testing (UAT): Verifies if the system is acceptable to the end-users.
    • Alpha Testing: Performed at the developing organization's site.
    • Beta Testing: Performed by customers or potential customers at their own locations.
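
A minimal sketch of a component (unit) test, run in isolation from the rest of the system; the cart_total function and its behavior are invented for illustration:

```python
import unittest

def cart_total(prices: list[float]) -> float:
    """Hypothetical unit under test: sums the item prices in a cart."""
    return round(sum(prices), 2)

class CartComponentTest(unittest.TestCase):
    # Component (unit) testing: the smallest separately testable part,
    # exercised in isolation from the rest of the system (no UI, no database).
    def test_empty_cart_totals_zero(self):
        self.assertEqual(cart_total([]), 0)

    def test_two_items_are_summed(self):
        self.assertEqual(cart_total([10.00, 2.50]), 12.50)

if __name__ == "__main__":
    unittest.main()
```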

2. (★★★★★) Test Types

  • Functional Testing: Tests what the system does. Based on requirements and specifications.
  • Non-functional Testing: Tests how well the system works. This includes:
    • Performance Testing: Measures response times, throughput, etc.
    • Load Testing: Tests the system under normal and peak load conditions.
    • Stress Testing: Tests the system beyond its limits to see how it fails.
    • Usability Testing: Checks how easy the system is to use.
    • Security Testing: Checks for vulnerabilities to threats.
  • Structural Testing (White-box Testing): Based on the internal structure of the system (e.g., code). Measured by coverage.
  • Change-related Testing: Performed after a change.
    • Confirmation Testing (Re-testing): Verifies that a previously reported defect has been fixed.
    • Regression Testing: Verifies that a change (like a bug fix or new feature) has not introduced any adverse side-effects in existing, unchanged functionality.
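
A rough sketch of how the two change-related types might look for the Chapter 1 discount fix; the function and test names are assumptions:

```python
import unittest

DISCOUNT_AGE = 20

def is_discount_eligible(age: int) -> bool:
    return age >= DISCOUNT_AGE   # fix applied: '>' was corrected to '>='

class ChangeRelatedTests(unittest.TestCase):
    def test_confirmation_reported_defect_is_fixed(self):
        # Confirmation (re-)testing: re-run the exact scenario that failed
        # before, to confirm the reported defect is really gone.
        self.assertTrue(is_discount_eligible(20))

    def test_regression_unchanged_behaviour_still_works(self):
        # Regression testing: check that the change did not break
        # previously working, unchanged behavior.
        self.assertTrue(is_discount_eligible(35))
        self.assertFalse(is_discount_eligible(17))

if __name__ == "__main__":
    unittest.main()
```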

Chapter 3: Static Testing

Static testing analyzes work products without executing them. It includes reviews and static analysis.

1. (★★★★) Review Types - In order of increasing formality

  • Informal Review: No formal process; it can be as simple as a pair review. Its main advantage is that it is cheap and fast.
  • Walkthrough: The author of the work product leads the session, explaining it to peers and gathering feedback. Good for learning and knowledge sharing.
  • Technical Review: A discussion meeting led by a trained moderator, focused on achieving consensus on the technical content.
  • Inspection: The most formal review type. Led by a trained facilitator/moderator, it follows a strict process with defined roles. The main purpose is to find defects, and metrics are collected.

2. Roles in a Formal Review

  • Author: Creates the work product being reviewed.
  • Facilitator (Moderator): Leads the review, ensures the process is followed.
  • Reviewer: Checks the work product for potential defects.
  • Scribe (Recorder): Documents all issues and decisions made during the meeting.

Chapter 4: Test Design Techniques

1. (★★★★★) Black-box Techniques - "Specification-based"

  • Equivalence Partitioning (EP): Divides data into partitions (classes) where all elements are expected to be processed similarly. Test one value from each partition.
    • Example (Age 18-65): Invalid partition (< 18), Valid partition (18-65), Invalid partition (> 65). Test cases: 17, 30, 66.
  • Boundary Value Analysis (BVA): Focuses on the boundaries (edges) of partitions, as this is where errors often occur.
    • Example (Age 18-65): Test values 17, 18, 19 and 64, 65, 66 (3-value BVA); see the sketch after this list.
  • Decision Table Testing: Used for complex business rules. It maps conditions to actions, creating test cases for each rule (column).
  • State Transition Testing: Used for systems that have different states. It models the states, the events that trigger transitions, and the actions that result. Aims to cover all valid transitions.
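
The EP and 3-value BVA picks for the age 18-65 example above could be encoded as data-driven checks roughly like this (the is_valid_age implementation is assumed for illustration):

```python
import unittest

def is_valid_age(age: int) -> bool:
    """Assumed implementation of the 'age must be 18-65' rule."""
    return 18 <= age <= 65

class AgeRangeBlackBoxTest(unittest.TestCase):
    def test_equivalence_partitions(self):
        # One representative value per partition: invalid-low, valid, invalid-high.
        for age, expected in [(17, False), (30, True), (66, False)]:
            with self.subTest(age=age):
                self.assertEqual(is_valid_age(age), expected)

    def test_boundary_values(self):
        # 3-value BVA around each boundary: 17/18/19 and 64/65/66.
        for age, expected in [(17, False), (18, True), (19, True),
                              (64, True), (65, True), (66, False)]:
            with self.subTest(age=age):
                self.assertEqual(is_valid_age(age), expected)

if __name__ == "__main__":
    unittest.main()
```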

2. (★★★★) White-box Techniques - "Structure-based"
The goal is to measure and achieve a certain level of code coverage.

  • Statement Coverage: Measures the percentage of executable statements that have been exercised.
    • Coverage = (Number of executed statements / Total number of statements) * 100%
  • Decision Coverage (Branch Coverage): Measures the percentage of decision outcomes (e.g., the True and False branches of an IF statement) that have been exercised.
  • Coverage Strength: 100% Decision Coverage always guarantees 100% Statement Coverage. The reverse is not true.
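
A small sketch of why decision coverage is the stronger criterion; the apply_discount function is a made-up example:

```python
def apply_discount(price: float, is_member: bool) -> float:
    discount = 0.0
    if is_member:              # one decision with two outcomes: True / False
        discount = 0.5
    return price * (1 - discount)

# A single test with is_member=True executes every statement
# (100% statement coverage) but exercises only the True outcome
# of the decision (50% decision coverage).
assert apply_discount(100.0, True) == 50.0

# Covering the False outcome as well is what 100% decision coverage
# requires, and achieving it also implies 100% statement coverage.
assert apply_discount(100.0, False) == 100.0
```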

3. (★★★) Experience-based Techniques

  • Error Guessing: The tester uses their experience and intuition to anticipate where defects might occur.
  • Exploratory Testing: Test design and execution occur at the same time. The tester is "exploring" the software. It is often structured into time-boxed sessions with a specific charter (mission).
  • Checklist-based Testing: Testers use a high-level list of items to check or conditions to verify.

Chapter 5: Managing the Test Activities

1. Test Planning & Estimation

  • Test Plan: The master document for testing. Contains scope, objectives, approach, entry/exit criteria, schedule, resources, and risks.
  • Entry Criteria: Conditions to be met before testing can start (e.g., "test environment is ready").
  • Exit Criteria: Conditions to be met before testing can be considered complete (e.g., "95% of test cases executed").
  • Estimation Techniques:
    • Metrics-based: Uses data from past projects.
    • Expert-based: Uses the intuition of experts.
      • (★★★) Three-Point Estimation (PERT): E = (Optimistic + 4*Most Likely + Pessimistic) / 6. A higher Standard Deviation ((P - O) / 6) means higher uncertainty.
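
A quick worked example of the three-point formula, with invented effort figures:

```python
# Three-point (PERT) estimate with assumed values (in person-days):
optimistic, most_likely, pessimistic = 4, 6, 14

estimate = (optimistic + 4 * most_likely + pessimistic) / 6   # (4 + 24 + 14) / 6 = 7.0
std_dev = (pessimistic - optimistic) / 6                      # (14 - 4) / 6 ≈ 1.67

print(estimate, round(std_dev, 2))   # 7.0 1.67 -> larger std_dev means more uncertainty
```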

2. Test Monitoring and Control

  • Monitoring: The activity of comparing actual progress against the plan. Uses metrics like "test cases executed %" or "requirements coverage %".
  • Control: The activity of taking action when monitoring shows a deviation from the plan (e.g., re-prioritizing tests, adding resources).

3. (★★★★) Risk Management

  • Project Risk: Risks to the project's success (e.g., lack of staff, tight schedule, budget cuts).
  • Product Risk (Quality Risk): Risks of failure in the product itself (e.g., incorrect calculations, poor performance, security vulnerabilities).
  • Risk-based Testing: Uses the identified product risks to prioritize and direct testing efforts.
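
A rough sketch of how product risks might be scored to drive risk-based prioritization; the 1-5 scales and the risk items are invented for illustration:

```python
# Risk level = likelihood x impact (1-5 scales), used to order the test effort.
product_risks = [
    {"risk": "Incorrect discount calculation", "likelihood": 4, "impact": 5},
    {"risk": "Slow search response under load", "likelihood": 3, "impact": 3},
    {"risk": "Typo in footer text",             "likelihood": 2, "impact": 1},
]

# Highest-scoring risks are tested first and most thoroughly.
for r in sorted(product_risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    print(r["likelihood"] * r["impact"], r["risk"])
```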

4. (★★★★★) Defect Management

  • Defect Report: A document reporting any flaw in a component or system that can cause it to fail to perform its required function. Key contents include ID, title, steps to reproduce, actual vs. expected results, severity, and priority.
  • Severity vs. Priority:
    • Severity: The degree of technical impact the defect has on the system. (Assigned by tester).
    • Priority: The level of business urgency for fixing the defect. (Assigned by project manager/product owner).
    • Classic Example: A spelling mistake of the company name on the homepage is Low Severity but High Priority.
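
A rough sketch of the key fields of a defect report, using the classic example above; the field names and values are illustrative, not a mandated template:

```python
# Illustrative defect report with the key contents listed above.
defect_report = {
    "id": "DEF-1023",                                   # made-up identifier
    "title": "Company name misspelled on homepage banner",
    "steps_to_reproduce": ["Open the homepage", "Read the main banner text"],
    "actual_result": "Banner shows 'Acem Corp.'",
    "expected_result": "Banner shows 'Acme Corp.'",
    "severity": "Low",    # technical impact - assigned by the tester
    "priority": "High",   # business urgency - assigned by the PM / product owner
}
```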

5. Configuration Management

  • The process of establishing and maintaining the integrity of all test-related items (testware) and the test object. It ensures you know exactly which version of the code was tested with which version of the test cases in which environment.

Chapter 6: Tool Support for Testing

1. Test Tool Classification

  • Test Management Tools (e.g., Jira, Quality Center): Manage test cases, requirements, defects, and report on progress.
  • Static Analysis Tools (e.g., SonarQube): Analyze code for defects without executing it.
  • Test Execution Tools (e.g., Selenium, Appium): Run automated test scripts.
  • Performance Testing Tools (e.g., JMeter, LoadRunner): Measure system performance under load.
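
As a rough illustration of a test execution tool, a minimal Selenium (Python) script might look like this; the target URL and the title check are placeholders, and driver setup varies by environment:

```python
from selenium import webdriver

# Minimal illustrative Selenium script: open a page and check its title.
driver = webdriver.Chrome()              # assumes a local Chrome/driver setup
try:
    driver.get("https://example.com")    # placeholder URL
    assert "Example" in driver.title     # placeholder check
finally:
    driver.quit()
```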

2. Benefits and Risks of Test Automation

  • Benefits: Fast, reliable, repeatable. Good for regression testing. Frees up testers for more creative tasks like exploratory testing.
  • Risks:
    • (★★★) Unrealistic expectations: The belief that automation will solve all problems is the biggest risk.
    • High initial investment and maintenance cost.
    • Automated scripts are also software and can have defects.
    • Frequent changes in the system under test can make script maintenance very difficult.

 

Conquering K1 - Memorization List of 'Actual Exam Option' Sentences

⭐ Chapter 1: Fundamentals of Testing

  • Testing Definition: Testing is the process of evaluating a work product to find defects.
  • Testing Objective: A common objective of testing is building confidence in the quality of the test object.
  • 7 Principles (Exhaustive): Exhaustive testing (testing everything) is impractical except for trivial cases.
  • 7 Principles (Clustering): A small number of modules usually contains most of the defects discovered.
  • 7 Principles (Pesticide Paradox): If the same tests are repeated over and over again, eventually they will no longer find new defects.
  • Test Process Activities: The main activities of the test process include planning, analysis, design, implementation, execution, and completion.
  • Test Basis: The test basis is the body of knowledge used as the basis for test analysis and design.
  • Test Object: The test object is the component or system to be tested.

⭐ Chapter 2: Testing Throughout the SDLC

  • V-model: In the V-model, testing is integrated throughout the lifecycle, with test levels corresponding to development phases.
  • Shift Left: The "shift left" approach involves starting test activities as early as possible.
  • Component Testing: Component testing is often performed by the developer who wrote the code.
  • Acceptance Testing (Alpha/Beta): Alpha testing is performed at the developing organization's site, while beta testing is performed by customers at their own sites.
  • Test Types: Functional testing evaluates what the system does, while non-functional testing evaluates how well it does it.
  • Maintenance Testing: Maintenance testing is triggered by modifications, migrations, or retirement of a system.

⭐ Chapter 3: Static Testing

  • Static vs. Dynamic: Static testing does not execute the code, whereas dynamic testing does.
  • Review Process: The phases of a formal review are: planning, kick-off, individual preparation, review meeting, and follow-up.
  • Review Roles (Scribe): The scribe (or recorder) documents all issues and decisions made during the review meeting.

⭐ Chapter 4: Test Techniques

  • Categories: Test techniques can be categorized as black-box, white-box, and experience-based.
  • Use Case Testing: Use case testing helps to identify test cases that exercise the whole system on a transaction-by-transaction basis from start to finish.
  • Error Guessing: Error guessing is a technique that relies on the tester's experience to anticipate the occurrence of mistakes.

⭐ Chapter 5: Test Management

  • Test Planning: Test planning involves defining the objectives of testing and the approach for meeting them.
  • Test Monitoring vs. Control: Test monitoring is about collecting data, while test control involves taking action based on that data.
  • Risk: Risk can be defined by its likelihood of occurrence and its impact.
  • Defect Report: A defect report should contain enough information to allow the developer to reproduce the defect.

Conquering K2 - Concept Comparison/Understanding

  • Testing vs. Debugging (Goal): Testing reveals the presence of defects (show presence); debugging finds the cause and fixes it. Testing can trigger debugging.
  • Quality Assurance (QA) vs. Quality Control (QC) (Focus): QA is process-oriented and aims to prevent defects; QC is product-oriented and aims to detect them. QA is the broader concept that includes QC.
  • Verification vs. Validation (Question): Verification asks "Are we building the product right?" (against the specification); validation asks "Are we building the right product?" (against user needs).
  • Confirmation Testing vs. Regression Testing (Purpose): Confirmation testing checks that a reported defect has been fixed; regression testing checks that the fix introduced no side-effects elsewhere. Regression testing is usually run after confirmation testing.
  • Priority vs. Severity (Perspective): Priority is the order of handling based on business impact; severity is the degree of technical impact on the system. The two can be independent of each other.
  • Test Basis vs. Test Oracle (Role): The test basis tells you what to test (e.g., requirements); the test oracle is the source used to determine expected results (e.g., an existing system).
  • Statement Coverage vs. Decision Coverage (Strength & relationship): Decision coverage is stronger; 100% decision coverage guarantees 100% statement coverage, but not the other way around.
  • Entry Criteria vs. Exit Criteria (Timing & purpose): Entry criteria state when testing may start (e.g., test environment ready); exit criteria state when testing may stop (e.g., 95% coverage achieved).
  • Black-box Testing vs. White-box Testing (Viewpoint): Black-box testing checks the system's external behavior against the specification; white-box testing checks its internal structure based on the code.
  • Functional Testing vs. Non-functional Testing ("What" vs. "how well"): Functional testing checks what the system does; non-functional testing checks how well it does it (e.g., performance, security, usability).
  • Test Plan vs. Test Strategy (Scope): A test plan covers the what, when, who, and how for a specific project; a test strategy describes the organization-wide, higher-level test approach. (At Foundation Level, the test approach appears as part of the test plan.)