📝 Walkthrough

Adds CASL-based authorization for InstructorPost with INSTRUCTOR/ASSISTANT/STUDENT/PARENT contexts, replaces parameter-based filtering with abilityFilter, centralizes attachment upload/presigned URL resolution in FileStorageService, and updates routes, repos, services, tests, and mocks accordingly.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Route as Route Handler
    participant Service as InstructorPostsService
    participant CASL as defineInstructorPostAbility
    participant FileStorage as FileStorageService
    participant Repo as InstructorPostsRepository
    participant DB as Database

    Client->>Route: POST /instructor-posts (payload + files)
    Route->>Service: createPost(payload, userType, profileId)
    Service->>CASL: getAbility(userType, profileId)
    CASL-->>Service: AppAbility
    Service->>Service: ability.can(Create, InstructorPost)?
    alt denied
        Service-->>Client: ForbiddenError
    else allowed
        Service->>FileStorage: uploadAttachments(files)
        FileStorage->>DB: store files (S3)
        FileStorage-->>Service: [{filename, fileUrl}, ...]
        Service->>Repo: create(post with attachments)
        Repo->>DB: INSERT post
        DB-->>Repo: created post
        Service->>FileStorage: resolvePresignedUrls(post.attachments)
        FileStorage-->>Service: attachments with presigned URLs
        Service-->>Client: 201 {post with presigned URLs}
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
src/services/student-posts.service.ts (2)
**381-383**: ⚠️ Potential issue | 🟠 Major

Keep `updatePost` on the same presigned-URL contract as the other read paths. Line 382 returns the raw repository payload, and the normal return path also skips `resolvePresignedUrls(...)`. After an edit, this method can return a different attachment shape than create/list/detail, with raw storage URLs instead of signed ones.

Also applies to: 405-409
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/student-posts.service.ts` around lines 381 - 383, The early-return in updatePost (when isRedundant && (!files || files.length === 0)) returns the raw repository payload and skips resolvePresignedUrls, causing inconsistent attachment shapes; change that branch to run the same presigned-URL normalization as the normal return path by invoking resolvePresignedUrls(post) (or the same helper used elsewhere) and returning its result instead of raw post; make the same change for the other early-return branch around the 405-409 logic so all updatePost return paths produce the same signed-URL attachment shape.
**389-396**: ⚠️ Potential issue | 🔴 Critical

Delete only attachments that are actually being removed, and only after the update succeeds.
This block removes every previous direct attachment as soon as any new file is uploaded. If the request keeps some existing files, those rows will now point at deleted objects; and if the DB write fails afterward, the old blobs are already gone.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/student-posts.service.ts` around lines 389 - 396, The current logic in the uploadedAttachments branch deletes all previous attachments immediately after uploads, which can remove files that the updated post still references and can orphan deletions if the DB update fails; change this by computing which old attachments are actually being removed (compare oldAttachments' fileUrl against the new attachments list that will be persisted), perform the DB update first (the method that applies the new attachments to the post), and only after the update succeeds call fileStorageService.delete for those removed attachments (filter to a.fileUrl truthy and use fileStorageService.delete on that subset). Ensure you reference post.attachments/oldAttachments, uploadedAttachments, the DB update call, and fileStorageService.delete in your changes.

src/services/instructor-posts.service.ts (3)
**189-203**: ⚠️ Potential issue | 🔴 Critical

Verify selected targets belong to the acting instructor before saving them.
Create only checks that the enrollment IDs exist, and update doesn't even do that. A crafted request can still target another instructor's enrollments here, or hit a raw FK error on update instead of a controlled 4xx.
Also applies to: 530-536
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/instructor-posts.service.ts` around lines 189 - 203, When handling PostScope.SELECTED in the create and update flows (the block using data.scope, PostScope.SELECTED, and data.targetEnrollmentIds and the similar logic around lines ~530-536), verify ownership by fetching enrollments filtered by both the provided IDs and the acting instructor's id instead of calling enrollmentsRepository.findByIds alone; e.g., use or add a repository method like findByIdsAndInstructorId(enrollmentIds, instructorId) or pass a where clause to include instructorId, then compare returned IDs to requested targetEnrollmentIds and throw a NotFoundException listing the missing/unauthorized IDs if they differ; apply the same check in the update path to prevent targeting other instructors' enrollments.
**629-635**: ⚠️ Potential issue | 🟠 Major

Delete direct post files when deleting the post. This only removes comment attachments. Any direct `post.attachments` with a `fileUrl` are left behind in S3 after the row is deleted.

Proposed fix

```diff
+    await Promise.all(
+      post.attachments
+        .map((attachment) => attachment.fileUrl)
+        .filter((fileUrl): fileUrl is string => !!fileUrl)
+        .map((fileUrl) => this.fileStorageService.delete(fileUrl)),
+    );
+
     // 댓글 첨부파일 삭제
     await this.commentsService.deleteCommentAttachmentsByPostId(
       postId,
       'instructorPost',
     );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/instructor-posts.service.ts` around lines 629 - 635, The current delete flow only calls commentsService.deleteCommentAttachmentsByPostId and then instructorPostsRepository.delete(postId), leaving any direct post.attachments (with fileUrl) orphaned in S3; update the delete routine in instructor-posts.service.ts to first load the post (including post.attachments), iterate over attachments that have a fileUrl and remove them from storage (via the existing storage/attachments helper or a new call like this.attachmentsService.deleteAttachmentFile or this.storageService.deleteFile), then delete attachment records if needed, call this.commentsService.deleteCommentAttachmentsByPostId(postId, 'instructorPost') and finally remove the DB row via this.instructorPostsRepository.delete(postId); ensure you handle errors from storage deletions (log/propagate) so DB delete runs only after cleanup or is transactionally consistent as appropriate.
**593-603**: ⚠️ Potential issue | 🟠 Major

Don't collapse explicit empty arrays into "no change". Here `[]` means "clear the relation", but `|| undefined` and `allAttachments.length ? ... : undefined` turn it into "leave existing rows alone". Removing the last direct attachment/material, or switching a post away from `SELECTED`, won't actually persist the clear.

Proposed fix

```diff
+    const nextTargetEnrollmentIds =
+      data.targetEnrollmentIds !== undefined
+        ? data.targetEnrollmentIds
+        : data.scope && data.scope !== PostScope.SELECTED
+          ? []
+          : undefined;
+
     const updatedPost = await this.instructorPostsRepository.update(postId, {
       title: data.title,
       content: data.content,
       isImportant: data.isImportant,
       scope: data.scope,
       targetRole: data.targetRole,
       lectureId: data.lectureId,
-      materialIds: data.materialIds || undefined,
-      attachments: allAttachments.length ? allAttachments : undefined,
-      targetEnrollmentIds: data.targetEnrollmentIds || undefined,
+      materialIds:
+        data.materialIds !== undefined ? data.materialIds : undefined,
+      attachments:
+        data.attachments !== undefined || uploadedAttachments.length > 0
+          ? allAttachments
+          : undefined,
+      targetEnrollmentIds: nextTargetEnrollmentIds,
     });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/instructor-posts.service.ts` around lines 593 - 603, The update call is collapsing explicit empty arrays into undefined (so [] meant to clear relations becomes “no change”) — stop using `|| undefined` and the `allAttachments.length ? ... : undefined` pattern in the `instructorPostsRepository.update` payload; instead construct the update payload so keys are only omitted when the source is literally undefined and pass empty arrays as-is to clear relations. Concretely, in the code around `updatedPost = await this.instructorPostsRepository.update(...)` use conditional property spreads (e.g. ...(data.materialIds !== undefined ? { materialIds: data.materialIds } : {}), ...(data.targetEnrollmentIds !== undefined ? { targetEnrollmentIds: data.targetEnrollmentIds } : {}), and always set `attachments: allAttachments` only if `allAttachments !== undefined` vs omitting the key when undefined) or otherwise include the fields verbatim when they are `[]` so that `materialIds`, `attachments` (from `allAttachments`) and `targetEnrollmentIds` clear relations when empty arrays are provided.src/services/filestorage.service.ts (1)
**144-147**: ⚠️ Potential issue | 🟠 Major

CloudFront downloads still build the disposition header incorrectly. The S3 fallback now sends both `filename` and `filename*`, but the CloudFront path double-encodes `fileName` inside `filename=` and drops the UTF-8 fallback entirely. With CloudFront enabled, Korean and non-ASCII filenames will download with literal percent-encoding (e.g., `%ED%95%9C`) or appear garbled.

Proposed fix

```diff
     const bucketName = getBucketName(bucketType);
     const encodedFileName = encodeURIComponent(fileName);
+    const safeFileName = fileName.replace(/["\\\r\n]/g, '_');
+    const contentDisposition =
+      `attachment; filename="${safeFileName}"; filename*=UTF-8''${encodedFileName}`;

     // CloudFront Signed URL 생성 (CloudFront가 설정된 경우)
     const cloudFrontUrl = getCloudFrontUrl(bucketType);
@@
     try {
       return getCloudFrontSignedUrl({
-        url: `https://${cloudFrontUrl}/${key}?response-content-disposition=${encodeURIComponent(`attachment; filename="${encodedFileName}"`)}`,
+        url: `https://${cloudFrontUrl}/${key}?response-content-disposition=${encodeURIComponent(contentDisposition)}`,
         keyPairId,
         privateKey,
         dateLessThan: new Date(Date.now() + expiresIn * 1000),
@@
     const command = new GetObjectCommand({
       Bucket: bucketName,
       Key: key,
-      ResponseContentDisposition: `attachment; filename="${fileName}"; filename*=UTF-8''${encodedFileName}`,
+      ResponseContentDisposition: contentDisposition,
     });
```

Also applies to: line 160
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/filestorage.service.ts` around lines 144 - 147, The CloudFront branch currently double-encodes the filename (using encodeURIComponent around an already percent-encoded encodedFileName) and omits the UTF-8 fallback; change the Content-Disposition construction used in the getCloudFrontSignedUrl call to match the S3 fallback by building a single contentDisposition string that contains both an ASCII-safe filename fallback (e.g., filename="asciiSafeName") and the UTF-8 RFC2231 form (filename*=UTF-8'' + encodedFileName), then pass encodeURIComponent(contentDisposition) into the URL once (so encodedFileName is only percent-encoded for filename*), and apply the same fix to the other CloudFront occurrence referenced around the later call (both spots reference getCloudFrontSignedUrl / encodedFileName).
🧹 Nitpick comments (1)
src/test/mocks/services.mock.ts (1)
**56-83**: Make presigned URLs visibly different in this mock. `getPresignedUrl`, `getDownloadPresignedUrl`, and `resolvePresignedUrls` all echo the stored URL, so tests cannot tell whether a caller actually resolved a presigned URL or just returned the raw storage key.

🔎 Small mock tweak

```diff
-    getPresignedUrl: jest.fn(async (url: string) => url),
-    getDownloadPresignedUrl: jest.fn(async (url: string) => url),
+    getPresignedUrl: jest.fn(async (url: string) => `${url}?signed=1`),
+    getDownloadPresignedUrl: jest.fn(
+      async (url: string) => `${url}?download=1`,
+    ),
@@
-    fileUrl: a.fileUrl || a.material?.fileUrl || null,
+    fileUrl: a.fileUrl
+      ? `${a.fileUrl}?signed=1`
+      : a.material?.fileUrl
+        ? `${a.material.fileUrl}?signed=1`
+        : null,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/test/mocks/services.mock.ts` around lines 56 - 83, The mock currently returns the raw storage key for getPresignedUrl, getDownloadPresignedUrl, and resolvePresignedUrls making it impossible to tell if resolution occurred; update the mocks (getPresignedUrl, getDownloadPresignedUrl, and resolvePresignedUrls in services.mock.ts) to return a clearly different string (e.g., prefix or transform the input like `https://presigned.mock/{original}`) so callers receiving a presigned URL can be distinguished from raw keys, and ensure resolvePresignedUrls maps each attachment and material.fileUrl to that transformed presigned format when present.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/services/comments.service.ts`:
- Around line 452-453: The newly uploaded attachments (uploadedAttachments) are
not cleaned up if a subsequent repository write (e.g., inside createComment or
updateComment flows) fails; wrap the repository write/DB operations that consume
uploadedAttachments in a try/catch, and in the catch call
this.fileStorageService.cleanupUploadedAttachments(uploadedAttachments) (or the
appropriate cleanup method) before rethrowing the original error; apply the same
pattern for the other occurrence that uses uploadedAttachments around lines
594-595 so any orphaned files are removed on failure.
In `@src/services/instructor-posts.service.test.ts`:
- Around line 1415-1417: The test currently uses expect.objectContaining({})
which always matches and doesn't verify CASL-derived target-role filtering;
update the assertions on instructorPostsRepo.findMany (and the similar
assertions at the other two places) to assert the actual abilityFilter argument
or the resulting visibility: capture the actual call argument for
instructorPostsRepo.findMany and assert it contains the expected abilityFilter
shape or properties (e.g., include the CASL-generated conditions) or assert the
returned posts reflect the expected visibility for the target role; specifically
reference instructorPostsRepo.findMany and the abilityFilter variable/parameter
in your assertion so the test fails if the CASL filter is not passed through.
In `@src/services/instructor-posts.service.ts`:
- Around line 153-156: Replace direct throws of CASL's ForbiddenError with your
app's ForbiddenException by wrapping the
ForbiddenError.from(...).throwUnlessCan(...) call in a try-catch: call
ForbiddenError.from(ability).throwUnlessCan(A.Create, subject('InstructorPost',
newPostTemp as unknown as InstructorPost)) inside try, catch the CASL
ForbiddenError and rethrow a ForbiddenException (preserving the original
message), and repeat the same pattern for the three other occurrences that use
ForbiddenError.from(...).throwUnlessCan (the blocks around lines that reference
ability/throwUnlessCan for A.Create and other actions). Ensure you import and
use ForbiddenException from your HTTP exceptions and do not change the
permission logic.
In `@src/services/materials.service.ts`:
- Around line 473-504: The parent-access loop currently calls
materialsRepository.isAccessibleByStudent which doesn't consider
material.targetRole, allowing STUDENT-only materials to be visible to parents;
fix by adding target-role-aware validation: either (preferred) add a new
repository method materialsRepository.isAccessibleByParent (or extend
isAccessibleByStudent with a targetRole param) that filters the query for
targetRole IN (TargetRole.ALL, TargetRole.PARENT), then call that method in the
parent branch (replace isAccessibleByStudent call inside the enrollment loop)
after validateParentLectureAccess; ensure the new method signature and its query
explicitly check targetRole and preserve existing scope/enrollment checks so
hasAccess only becomes true when the material targets PARENT or ALL.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 671fd6f4-a408-4167-ab09-41d3b953ceb5
📒 Files selected for processing (13)
- src/casl/instructor-post.ability.test.ts
- src/casl/instructor-post.ability.ts
- src/casl/subjects.ts
- src/repos/instructor-posts.repo.ts
- src/routes/mgmt/v1/instructor-posts.route.ts
- src/services/comments.service.ts
- src/services/filestorage.service.ts
- src/services/instructor-posts.service.test.ts
- src/services/instructor-posts.service.ts
- src/services/materials.service.ts
- src/services/student-posts.service.test.ts
- src/services/student-posts.service.ts
- src/test/mocks/services.mock.ts
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
src/services/comments.service.ts (1)
**588-614**: ⚠️ Potential issue | 🟠 Major

Delete the replaced direct files only after the update succeeds. This pre-delete runs before the new upload and DB update. If either later step fails, the comment still points at the old attachments but the S3 objects are already gone.

Proposed fix

```diff
-    if (files?.length) {
-      const existingDirectAttachments =
-        await this.commentsRepository.findDirectAttachmentsByCommentId(
-          commentId,
-        );
-      for (const attachment of existingDirectAttachments) {
-        if (attachment.fileUrl) {
-          await this.fileStorageService.delete(attachment.fileUrl);
-        }
-      }
-    }
+    const existingDirectAttachments = files?.length
+      ? await this.commentsRepository.findDirectAttachmentsByCommentId(
+          commentId,
+        )
+      : [];
@@
     const updatedComment = await this.commentsRepository.update(commentId, {
       content: data.content,
       materialIds: data.materialIds,
       attachments: allAttachments.length ? allAttachments : undefined,
     });
+    await this.fileStorageService.cleanup(existingDirectAttachments);

     return {
       ...this.addIsMineField(updatedComment, userType, profileId),
       attachments: await this.fileStorageService.resolvePresignedUrls(
         updatedComment.attachments,
       ),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/comments.service.ts` around lines 588 - 614, The current flow deletes existing direct attachments from S3 before uploading new files and updating the DB, which can orphan DB references if upload/update fails; change the sequence in updateComment logic so that you first call fileStorageService.uploadAttachments(files), perform commentsRepository.update(commentId, {...}) to persist the new attachments (use materialAttachments and uploadedAttachments to build allAttachments), and only after the update succeeds iterate existingDirectAttachments and call fileStorageService.delete(fileUrl) to remove replaced S3 objects; reference the variables/functions existingDirectAttachments, fileStorageService.uploadAttachments, commentsRepository.update, uploadedAttachments, allAttachments, and fileStorageService.delete when making this change.

src/repos/materials.repo.ts (2)
**171-191**: ⚠️ Potential issue | 🔴 Critical

Don't collapse hidden attachments into the library fallback. `attachments.length === 0` is now evaluated after the `targetRole` filter. A material attached only to `STUDENT` posts will therefore look like an unattached library material to `isAccessibleByParent()`, and vice versa. Count attachments without the role filter first, then return `false` when attachments exist but none are visible to the requested role.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/repos/materials.repo.ts` around lines 171 - 191, The current logic in isAccessibleByParent() queries attachments using client.instructorPostAttachment.findMany with the targetRole filter and treats attachments.length === 0 as a library fallback, which collapses hidden attachments into the fallback; change it to first query/count attachments for the given materialId without the instructorPost.targetRole filter to determine if any attachments exist at all, then perform the filtered query (or reuse the existing filtered result called attachments) to see which are visible to the requested role; return true only when no attachments exist overall, return false when attachments exist but none are visible to the role, and otherwise continue as before.
**193-202**: ⚠️ Potential issue | 🔴 Critical

`GLOBAL` and `LECTURE` still ignore the supplied enrollment. In this predicate, `enrollmentId` only affects `SELECTED`. For any role-matching `GLOBAL` or `LECTURE` attachment, the method returns `true` without proving that the enrollment actually belongs to that post's audience. That can over-grant access for library materials; this branch still needs an instructor/lecture membership check before it grants access.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/repos/materials.repo.ts` around lines 193 - 202, The predicate in attachments.some(...) incorrectly grants access for post.scope === 'LECTURE' or 'GLOBAL' without validating the supplied enrollmentId; change the logic so each branch (LECTURE, GLOBAL, SELECTED) verifies that the given enrollmentId is actually included in the post's audience rather than returning true unconditionally. Concretely, in the attachments.some(att => { const post = att.instructorPost; ... }) replace the early true for LECTURE/GLOBAL with a membership check that uses post.targets (e.g., post.targets.includes(enrollmentId) or an existing helper like isEnrollmentInPostTargets(post, enrollmentId)) and keep the SELECTED branch validating post.targets.length > 0 && post.targets.includes(enrollmentId) so only attachments that actually target the enrollment grant access.src/services/filestorage.service.ts (1)
**137-148**: ⚠️ Potential issue | 🟠 Major

Don't double-encode the CloudFront download filename. `encodedFileName` is already percent-escaped, and then the whole `Content-Disposition` value is encoded again for the query string. With Korean filenames, the browser ends up seeing `%ED...` literally instead of the real name. Mirror the S3 branch here: build the header from the raw filename/`filename*`, then encode the final query parameter once.

Proposed fix

```diff
     const bucketName = getBucketName(bucketType);
     const encodedFileName = encodeURIComponent(fileName);
+    const contentDisposition =
+      `attachment; filename="${fileName}"; filename*=UTF-8''${encodedFileName}`;

     // CloudFront Signed URL 생성 (CloudFront가 설정된 경우)
     const cloudFrontUrl = getCloudFrontUrl(bucketType);
@@
     if (cloudFrontUrl && keyPairId && privateKey) {
       try {
         return getCloudFrontSignedUrl({
-          url: `https://${cloudFrontUrl}/${key}?response-content-disposition=${encodeURIComponent(`attachment; filename="${encodedFileName}"`)}`,
+          url: `https://${cloudFrontUrl}/${key}?response-content-disposition=${encodeURIComponent(contentDisposition)}`,
           keyPairId,
           privateKey,
           dateLessThan: new Date(Date.now() + expiresIn * 1000),
         });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/filestorage.service.ts` around lines 137 - 148, The code double-encodes the filename by pre-encoding encodedFileName and then encoding the Content-Disposition again for the CloudFront URL; change the logic in the CloudFront branch (around getCloudFrontSignedUrl, getCloudFrontUrl, loadPrivateKey usage) to build the Content-Disposition header from the raw fileName (include both simple filename and RFC5987 filename* form like the S3 branch), then call encodeURIComponent once on the entire header value when constructing the query string; remove the initial encodeURIComponent(fileName) assignment (encodedFileName) and ensure the final URL uses the single-encoded header so multibyte filenames (e.g. Korean) appear correctly.src/services/instructor-posts.service.ts (1)
**217-240**: ⚠️ Potential issue | 🟠 Major

Failed writes still leak uploaded attachments. Both flows upload direct files before the repository write, but if `create()` or `update()` rejects there is no compensating cleanup for `uploadedAttachments`. That leaves orphaned storage objects on failed writes. Please wrap the persistence step in `try/catch` and delete only the newly uploaded files on failure.
create()orupdate()rejects there is no compensating cleanup foruploadedAttachments. That leaves orphaned storage objects on failed writes. Please wrap the persistence step intry/catchand delete only the newly uploaded files on failure.Also applies to: 604-624
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/instructor-posts.service.ts` around lines 217 - 240, The code calls fileStorageService.uploadAttachments before persisting but doesn't clean up on persistence failure, so wrap the repository calls (e.g., instructorPostsRepository.create and the similar update flow around lines 604-624) in try/catch: after uploading via fileStorageService.uploadAttachments, perform the create/update inside try, and in the catch call fileStorageService.deleteAttachments (or the corresponding delete method) with uploadedAttachments (only the newly uploaded ones) before rethrowing the error; ensure attachments passed to the repository remain unchanged on success and that deletion is only attempted when uploadedAttachments is non-empty.
♻️ Duplicate comments (2)
src/services/instructor-posts.service.test.ts (2)
**185-216**: ⚠️ Potential issue | 🟠 Major

Keep the auth-denial assertions specific. `InstructorPostsService` now throws `ForbiddenException` for these denial paths, so `toThrow(Error)` will also pass on unrelated failures and stops verifying the 403 contract. The same relaxation shows up in the other authorization denials later in this file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/instructor-posts.service.test.ts` around lines 185 - 216, The tests currently assert generic errors (toThrow(Error)) for auth-denial paths (e.g., getPostTargets and createPost) which masks contract verification; update those assertions to expect the specific ForbiddenException thrown by InstructorPostsService (replace rejects.toThrow(Error) with rejects.toThrow(ForbiddenException)) and ensure ForbiddenException is imported from `@nestjs/common`; apply the same change to other authorization-denial tests in this file so each denial path explicitly asserts ForbiddenException.
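A jest-free sketch shows why the concrete error class matters (the `ForbiddenException` below is a local stand-in for illustration, not the Nest import):

```typescript
// Asserting against the base Error class passes for ANY failure; only a
// check against the concrete class verifies the 403 contract.
class ForbiddenException extends Error {}

async function rejectsWith(
  promise: Promise<unknown>,
  errorClass: new (...args: any[]) => Error,
): Promise<boolean> {
  try {
    await promise;
    return false; // resolved: no rejection at all
  } catch (err) {
    return err instanceof errorClass;
  }
}
```

`rejectsWith(p, Error)` is true for every thrown error, which is exactly the over-broad behavior `toThrow(Error)` gives in the tests above.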
1425-1435: ⚠️ Potential issue | 🟡 Minor

These target-role filter assertions are still too permissive.

They only prove that one `OR` branch contains the expected `targetRole`. If another scope branch drops that condition, the test still passes and the role-filter regression comes back. Assert the full `abilityFilter.OR` set for `GLOBAL`/`LECTURE`/`SELECTED`, or verify visibility with mixed-role fixtures instead.

Also applies to: 1467-1477
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/instructor-posts.service.test.ts` around lines 1425 - 1435, The test's assertion on instructorPostsRepo.findMany is too permissive because it only checks that one OR branch includes the expected targetRole; update the test to assert the complete abilityFilter.OR array for each visibility case (GLOBAL, LECTURE, SELECTED) rather than using expect.arrayContaining for a single branch—specifically assert the exact set/order (or canonicalized set) of objects under abilityFilter.OR that include targetRole values for TargetRole.ALL/TargetRole.STUDENT as applicable for GLOBAL/LECTURE/SELECTED, or alternatively replace this assertion with a verification using mixed-role fixtures that exercise visibility end-to-end so the role-filter cannot be accidentally dropped.
🧹 Nitpick comments (1)
src/services/instructor-posts.service.ts (1)

355-357: Pass `A.List` explicitly when building the list filter.

This method constructs a list query, but `accessibleBy(ability)` defaults to the `read` action. The ability rules explicitly define separate `List` actions for most user types (Assistant, Student, Parent), and relying on the default `read` action creates a mismatch that will apply wrong filters if these rules diverge.

♻️ Proposed fix

```diff
- const abilityFilter = accessibleBy(ability).InstructorPost;
+ const abilityFilter = accessibleBy(ability, A.List).InstructorPost;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/instructor-posts.service.ts` around lines 355 - 357, The accessibleBy call is defaulting to the "read" action while your ability rules define a separate List action; update the filter construction by passing the List action explicitly (e.g., change accessibleBy(ability).InstructorPost to accessibleBy(ability, A.List).InstructorPost) and ensure A (the actions enum) is imported where defineInstructorPostAbility and accessibleBy are used so the list query uses the correct A.List rules.
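A dependency-free illustration of the mismatch (the rule contents are invented; the real rules live in `defineInstructorPostAbility`): when filters are keyed by action, a lookup that silently defaults to `read` returns the wrong WHERE clause for a list query.

```typescript
// Prisma-style filters keyed by CASL action (hypothetical rule contents).
type Filter = Record<string, unknown>;

const filtersByAction: Record<string, Filter> = {
  read: { id: 'post-1' },                                         // detail view
  list: { OR: [{ scope: 'GLOBAL' }, { targetRole: 'STUDENT' }] }, // list view
};

// Mirrors accessibleBy's signature shape: the action defaults to 'read'.
function accessibleFilter(action: string = 'read'): Filter {
  const filter = filtersByAction[action];
  if (!filter) throw new Error(`no rule for action "${action}"`);
  return filter;
}
```

If the `read` and `list` rules ever diverge, as the nitpick warns, the defaulted lookup quietly drops the `OR` branches the list query depends on.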
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/services/comments.service.ts`:
- Around line 455-503: The catch currently cleans up uploadedAttachments for any
error including failures from fileStorageService.resolvePresignedUrls, which
deletes files even when DB writes succeeded; fix by narrowing the try to only
cover repository writes (the calls to
this.commentsRepository.createCommentWithStudentPostStatusUpdate and
this.commentsRepository.create) and move resolvePresignedUrls calls
(fileStorageService.resolvePresignedUrls) outside the try so presigned-URL
failures do not trigger cleanup of uploaded files, and keep
cleanup(uploadedAttachments) only in the catch that handles repository write
failures; apply the same change to the equivalent block that uses
resolvePresignedUrls at the other location that references the same methods.
In `@src/services/filestorage.service.ts`:
- Around line 222-230: The uploadAttachments method must guard against partial
successes: wrap the loop that calls uploadAttachment(file) in a try/catch inside
uploadAttachments, and on any error call the existing cleanup(results) (or
implement a cleanup function that deletes already-uploaded fileUrls/filenames)
to remove partial uploads before rethrowing the error; ensure you still return
results on success and rethrow the original error after cleanup on failure,
referencing uploadAttachments, uploadAttachment, and cleanup to locate the
changes.
In `@src/services/instructor-posts.service.test.ts`:
- Around line 103-105: The test mock for
permissionService.getEffectiveInstructorId returns the caller id by default and
thus bypasses the TA-specific authorAssistantId check; update the
mockImplementation in the failing tests (and the other blocks around the same
file) so it returns post.instructorId when appropriate — e.g.,
permissionService.getEffectiveInstructorId.mockImplementation(async (type, id)
=> type === 'instructor' ? post.instructorId : id) — ensuring
post.authorAssistantId branches still exercise the CASL guard for assistant
updates/deletes; apply the same change to the other test ranges that exercise
TA-specific behavior.
In `@src/test/mocks/services.mock.ts`:
- Around line 55-85: The FileStorageService mock is missing the cleanup method
causing failures when production code calls fileStorageService.cleanup(); update
the mock object (the one cast to jest.Mocked<FileStorageService>) to include
cleanup: jest.fn(async () => { /* no-op */ }) so the mock surface matches the
real interface; ensure this is added alongside upload, getPresignedUrl,
uploadAttachment, uploadAttachments, resolvePresignedUrls, etc., so tests
exercising failure paths won't throw "cleanup is not a function".
---
Outside diff comments:
In `@src/repos/materials.repo.ts`:
- Around line 171-191: The current logic in isAccessibleByParent() queries
attachments using client.instructorPostAttachment.findMany with the targetRole
filter and treats attachments.length === 0 as a library fallback, which
collapses hidden attachments into the fallback; change it to first query/count
attachments for the given materialId without the instructorPost.targetRole
filter to determine if any attachments exist at all, then perform the filtered
query (or reuse the existing filtered result called attachments) to see which
are visible to the requested role; return true only when no attachments exist
overall, return false when attachments exist but none are visible to the role,
and otherwise continue as before.
- Around line 193-202: The predicate in attachments.some(...) incorrectly grants
access for post.scope === 'LECTURE' or 'GLOBAL' without validating the supplied
enrollmentId; change the logic so each branch (LECTURE, GLOBAL, SELECTED)
verifies that the given enrollmentId is actually included in the post's audience
rather than returning true unconditionally. Concretely, in the
attachments.some(att => { const post = att.instructorPost; ... }) replace the
early true for LECTURE/GLOBAL with a membership check that uses post.targets
(e.g., post.targets.includes(enrollmentId) or an existing helper like
isEnrollmentInPostTargets(post, enrollmentId)) and keep the SELECTED branch
validating post.targets.length > 0 && post.targets.includes(enrollmentId) so
only attachments that actually target the enrollment grant access.
In `@src/services/comments.service.ts`:
- Around line 588-614: The current flow deletes existing direct attachments from
S3 before uploading new files and updating the DB, which can orphan DB
references if upload/update fails; change the sequence in updateComment logic so
that you first call fileStorageService.uploadAttachments(files), perform
commentsRepository.update(commentId, {...}) to persist the new attachments (use
materialAttachments and uploadedAttachments to build allAttachments), and only
after the update succeeds iterate existingDirectAttachments and call
fileStorageService.delete(fileUrl) to remove replaced S3 objects; reference the
variables/functions existingDirectAttachments,
fileStorageService.uploadAttachments, commentsRepository.update,
uploadedAttachments, allAttachments, and fileStorageService.delete when making
this change.
In `@src/services/filestorage.service.ts`:
- Around line 137-148: The code double-encodes the filename by pre-encoding
encodedFileName and then encoding the Content-Disposition again for the
CloudFront URL; change the logic in the CloudFront branch (around
getCloudFrontSignedUrl, getCloudFrontUrl, loadPrivateKey usage) to build the
Content-Disposition header from the raw fileName (include both simple filename
and RFC5987 filename* form like the S3 branch), then call encodeURIComponent
once on the entire header value when constructing the query string; remove the
initial encodeURIComponent(fileName) assignment (encodedFileName) and ensure the
final URL uses the single-encoded header so multibyte filenames (e.g. Korean)
appear correctly.
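The single-encoding rule this prompt describes can be sketched as follows. This is a simplified, hypothetical helper — not the service's actual code — and the RFC 5987 `filename*` form uses percent-encoded UTF-8, which `encodeURIComponent` approximates closely enough for this illustration:

```typescript
// Build the Content-Disposition value once from the raw name, then encode the
// whole header value exactly once when embedding it in a query string.
function contentDispositionParam(fileName: string): string {
  // ASCII fallback for the plain filename parameter.
  const ascii = fileName.replace(/[^\x20-\x7e]/g, '_').replace(/"/g, "'");
  const header =
    `attachment; filename="${ascii}"; ` +
    `filename*=UTF-8''${encodeURIComponent(fileName)}`;
  // Single encode of the complete header for the URL query string.
  return `response-content-disposition=${encodeURIComponent(header)}`;
}
```

Decoding the query value once yields the header; decoding the `filename*` payload once more recovers the original multibyte name — the round trip that double-encoding breaks.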
In `@src/services/instructor-posts.service.ts`:
- Around line 217-240: The code calls fileStorageService.uploadAttachments
before persisting but doesn't clean up on persistence failure, so wrap the
repository calls (e.g., instructorPostsRepository.create and the similar update
flow around lines 604-624) in try/catch: after uploading via
fileStorageService.uploadAttachments, perform the create/update inside try, and
in the catch call fileStorageService.deleteAttachments (or the corresponding
delete method) with uploadedAttachments (only the newly uploaded ones) before
rethrowing the error; ensure attachments passed to the repository remain
unchanged on success and that deletion is only attempted when
uploadedAttachments is non-empty.
---
Duplicate comments:
In `@src/services/instructor-posts.service.test.ts`:
- Around line 185-216: The tests currently assert generic errors
(toThrow(Error)) for auth-denial paths (e.g., getPostTargets and createPost)
which masks contract verification; update those assertions to expect the
specific ForbiddenException thrown by InstructorPostsService (replace
rejects.toThrow(Error) with rejects.toThrow(ForbiddenException)) and ensure
ForbiddenException is imported from `@nestjs/common`; apply the same change to
other authorization-denial tests in this file so each denial path explicitly
asserts ForbiddenException.
- Around line 1425-1435: The test's assertion on instructorPostsRepo.findMany is
too permissive because it only checks that one OR branch includes the expected
targetRole; update the test to assert the complete abilityFilter.OR array for
each visibility case (GLOBAL, LECTURE, SELECTED) rather than using
expect.arrayContaining for a single branch—specifically assert the exact
set/order (or canonicalized set) of objects under abilityFilter.OR that include
targetRole values for TargetRole.ALL/TargetRole.STUDENT as applicable for
GLOBAL/LECTURE/SELECTED, or alternatively replace this assertion with a
verification using mixed-role fixtures that exercise visibility end-to-end so
the role-filter cannot be accidentally dropped.
---
Nitpick comments:
In `@src/services/instructor-posts.service.ts`:
- Around line 355-357: The accessibleBy call is defaulting to the "read" action
while your ability rules define a separate List action; update the filter
construction by passing the List action explicitly (e.g., change
accessibleBy(ability).InstructorPost to accessibleBy(ability,
A.List).InstructorPost) and ensure A (the actions enum) is imported where
defineInstructorPostAbility and accessibleBy are used so the list query uses the
correct A.List rules.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 90daf671-f43a-4688-bed8-87caef4bbcfc
📒 Files selected for processing (10)
src/repos/materials.repo.ts
src/routes/svc/v1/instructor-posts.route.ts
src/services/comments.service.ts
src/services/filestorage.service.ts
src/services/instructor-posts.service.test.ts
src/services/instructor-posts.service.ts
src/services/materials.service.test.ts
src/services/materials.service.ts
src/test/mocks/repo.mock.ts
src/test/mocks/services.mock.ts
```ts
try {
  // Combine the payload's existing attachments with the newly uploaded ones
  const allAttachments = [
    ...(data.attachments || []),
    ...uploadedAttachments,
  ];

  // When an instructor/assistant comments on a student question, run it in a transaction
  if (
    studentPostForStatus &&
    studentPostForStatus.status === StudentPostStatus.BEFORE
  ) {
    const comment =
      await this.commentsRepository.createCommentWithStudentPostStatusUpdate({
        content: data.content,
        studentPostId: studentPostForStatus.id,
        instructorId: writerInfo.instructorId,
        assistantId: writerInfo.assistantId,
        enrollmentId: writerInfo.enrollmentId,
        authorRole: writerInfo.authorRole,
        materialIds: data.materialIds,
        attachments: allAttachments.length ? allAttachments : undefined,
      });
    return {
      ...this.addIsMineField(comment, userType, profileId),
      attachments: await this.fileStorageService.resolvePresignedUrls(
        comment.attachments,
      ),
    };
  }

  // Create a regular comment
  const result = await this.commentsRepository.create({
    ...data,
    ...writerInfo,
    attachments: allAttachments.length ? allAttachments : undefined,
  });
  return {
    ...this.addIsMineField(result, userType, profileId),
    attachments: await this.fileStorageService.resolvePresignedUrls(
      result.attachments,
    ),
  };
} catch (error) {
  // If the DB operation fails, delete the uploaded files (prevent orphans)
  await this.fileStorageService.cleanup(uploadedAttachments);
  throw error;
```
Don't clean up uploaded files when only URL signing fails.
Both catch blocks also wrap `resolvePresignedUrls()`. If the comment write succeeds and only presigned-URL generation fails, the DB row is already committed, but the cleanup still deletes the newly uploaded objects. Narrow the cleanup path to the repository write, or move `resolvePresignedUrls()` outside the `try`.
Also applies to: 605-624
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/comments.service.ts` around lines 455 - 503, The catch currently
cleans up uploadedAttachments for any error including failures from
fileStorageService.resolvePresignedUrls, which deletes files even when DB writes
succeeded; fix by narrowing the try to only cover repository writes (the calls
to this.commentsRepository.createCommentWithStudentPostStatusUpdate and
this.commentsRepository.create) and move resolvePresignedUrls calls
(fileStorageService.resolvePresignedUrls) outside the try so presigned-URL
failures do not trigger cleanup of uploaded files, and keep
cleanup(uploadedAttachments) only in the catch that handles repository write
failures; apply the same change to the equivalent block that uses
resolvePresignedUrls at the other location that references the same methods.
```ts
async uploadAttachments(
  files: Express.Multer.File[] | undefined,
): Promise<{ filename: string; fileUrl: string }[]> {
  if (!files?.length) return [];
  const results: { filename: string; fileUrl: string }[] = [];
  for (const file of files) {
    results.push(await this.uploadAttachment(file));
  }
  return results;
```
Clean up partial batch uploads here, not only in callers.
If the third file upload fails, the first two objects are already in storage but this method rejects before the caller ever receives their URLs, so the outer catch blocks cannot delete them. `uploadAttachments()` should catch locally, run `cleanup(results)`, and then rethrow.
Proposed fix

```diff
 async uploadAttachments(
   files: Express.Multer.File[] | undefined,
 ): Promise<{ filename: string; fileUrl: string }[]> {
   if (!files?.length) return [];
   const results: { filename: string; fileUrl: string }[] = [];
-  for (const file of files) {
-    results.push(await this.uploadAttachment(file));
-  }
-  return results;
+  try {
+    for (const file of files) {
+      results.push(await this.uploadAttachment(file));
+    }
+    return results;
+  } catch (error) {
+    await this.cleanup(results);
+    throw error;
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/filestorage.service.ts` around lines 222 - 230, The
uploadAttachments method must guard against partial successes: wrap the loop
that calls uploadAttachment(file) in a try/catch inside uploadAttachments, and
on any error call the existing cleanup(results) (or implement a cleanup function
that deletes already-uploaded fileUrls/filenames) to remove partial uploads
before rethrowing the error; ensure you still return results on success and
rethrow the original error after cleanup on failure, referencing
uploadAttachments, uploadAttachment, and cleanup to locate the changes.
```ts
permissionService.getEffectiveInstructorId.mockImplementation(
  async (type, id) => id,
);
```
The new default assistant mock skips the `authorAssistantId` path.
Because `getEffectiveInstructorId` now returns the caller id by default, the assistant update/delete denials fail on `post.instructorId !== effectiveInstructorId` before CASL ever evaluates the `authorAssistantId` restriction. Please override it to `post.instructorId` in those cases so the TA-specific guard stays covered.
Also applies to: 816-830, 962-971
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/instructor-posts.service.test.ts` around lines 103 - 105, The
test mock for permissionService.getEffectiveInstructorId returns the caller id
by default and thus bypasses the TA-specific authorAssistantId check; update the
mockImplementation in the failing tests (and the other blocks around the same
file) so it returns post.instructorId when appropriate — e.g.,
permissionService.getEffectiveInstructorId.mockImplementation(async (type, id)
=> type === 'instructor' ? post.instructorId : id) — ensuring
post.authorAssistantId branches still exercise the CASL guard for assistant
updates/deletes; apply the same change to the other test ranges that exercise
TA-specific behavior.
```ts
({
  upload: jest.fn(),
  getPresignedUrl: jest.fn(async (url: string) => url),
  getDownloadPresignedUrl: jest.fn(async (url: string) => url),
  delete: jest.fn(),
  uploadAttachment: jest.fn(async (file: Express.Multer.File) => ({
    filename: file.originalname,
    fileUrl: `https://mock.s3/${file.originalname}`,
  })),
  uploadAttachments: jest.fn(
    async (files: Express.Multer.File[] | undefined) =>
      (files ?? []).map((f) => ({
        filename: f.originalname,
        fileUrl: `https://mock.s3/${f.originalname}`,
      })),
  ),
  resolvePresignedUrls: jest.fn(
    async <
      T extends {
        fileUrl: string | null;
        material?: { fileUrl: string | null } | null;
      },
    >(
      attachments: T[] | null | undefined,
    ) =>
      (attachments ?? []).map((a) => ({
        ...a,
        fileUrl: a.fileUrl || a.material?.fileUrl || null,
      })),
  ),
}) as unknown as jest.Mocked<FileStorageService>;
```
Add cleanup() to this mock surface.
The production services now call fileStorageService.cleanup() in their failure paths, but this mock omits it and the cast hides the mismatch. Any test that exercises those branches will fail with cleanup is not a function.
Proposed fix

```diff
 ({
   upload: jest.fn(),
   getPresignedUrl: jest.fn(async (url: string) => url),
   getDownloadPresignedUrl: jest.fn(async (url: string) => url),
   delete: jest.fn(),
+  cleanup: jest.fn(async () => undefined),
   uploadAttachment: jest.fn(async (file: Express.Multer.File) => ({
     filename: file.originalname,
     fileUrl: `https://mock.s3/${file.originalname}`,
   })),
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
({
  upload: jest.fn(),
  getPresignedUrl: jest.fn(async (url: string) => url),
  getDownloadPresignedUrl: jest.fn(async (url: string) => url),
  delete: jest.fn(),
  cleanup: jest.fn(async () => undefined),
  uploadAttachment: jest.fn(async (file: Express.Multer.File) => ({
    filename: file.originalname,
    fileUrl: `https://mock.s3/${file.originalname}`,
  })),
  uploadAttachments: jest.fn(
    async (files: Express.Multer.File[] | undefined) =>
      (files ?? []).map((f) => ({
        filename: f.originalname,
        fileUrl: `https://mock.s3/${f.originalname}`,
      })),
  ),
  resolvePresignedUrls: jest.fn(
    async <
      T extends {
        fileUrl: string | null;
        material?: { fileUrl: string | null } | null;
      },
    >(
      attachments: T[] | null | undefined,
    ) =>
      (attachments ?? []).map((a) => ({
        ...a,
        fileUrl: a.fileUrl || a.material?.fileUrl || null,
      })),
  ),
}) as unknown as jest.Mocked<FileStorageService>;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/test/mocks/services.mock.ts` around lines 55 - 85, The FileStorageService
mock is missing the cleanup method causing failures when production code calls
fileStorageService.cleanup(); update the mock object (the one cast to
jest.Mocked<FileStorageService>) to include cleanup: jest.fn(async () => { /*
no-op */ }) so the mock surface matches the real interface; ensure this is added
alongside upload, getPresignedUrl, uploadAttachment, uploadAttachments,
resolvePresignedUrls, etc., so tests exercising failure paths won't throw
"cleanup is not a function".
🔗 Related issue
✨ Changes
Briefly describe what changed in this PR.
CASL > instructor-posts
Also fixes a newly found issue where parents could not download announcement attachments.
Also fixes a newly found issue where announcement materials attached directly could not be edited (assistant).
https://www.notion.so/31e6f28c193e80b8bf9dfaf87267f691
🧪 How to test
Describe how a reviewer should test this.
📸 Screenshots (optional)
Attach them if there are UI changes.
✅ Checklist
Summary by CodeRabbit
New Features
Refactor
Tests