
[Vue2-Niubility-Uploader] A Powerful File Upload Solution for Vue 2

1. Introduction

In modern web applications, file upload is a very common yet surprisingly challenging feature. Developers regularly run into these pain points:

  • Large uploads time out or fail
  • An unstable network interrupts an upload and forces a full restart
  • Poor user-experience feedback (progress, speed, and so on)
  • Difficulty limiting the number of concurrent uploads
  • Different scenarios need different UI presentations

vue2-niubility-uploader is a lightweight yet powerful Vue 2 upload component built to solve exactly these pain points. Beyond complete basic upload functionality, it offers advanced features such as chunked upload, resumable upload, and drag-and-drop, making file upload simple and reliable.

Hopefully this article helps you understand and use the vue2-niubility-uploader component and build a great file upload experience!

  • Official documentation and demo
  • GitHub repository
  • npm package

2. Core Features at a Glance

2.1 Basic Features

Single / multiple file upload

The component supports two upload modes, single-file and multi-file batch upload, switched with the simple multiple attribute:

<!-- Single-file upload -->
<Vue2NiubilityUploader :request-handler="requestHandler" />

<!-- Multi-file upload -->
<Vue2NiubilityUploader :request-handler="requestHandler" multiple />

File type and size limits

The accept, limit, and maxSize props make it easy to control the type, count, and size of uploaded files:

<Vue2NiubilityUploader
  :request-handler="requestHandler"
  accept="image/*,.pdf,.doc"
  :limit="10"
  :max-size="50*1024*1024"
/>

2.2 Advanced Features

Chunked upload for large files

For large files the component supports automatic chunking: the file is split into smaller pieces that are uploaded in parallel, greatly improving reliability and speed:

Chunked upload of a large file in action

<template>
  <Vue2NiubilityUploader
    ref="fileUploader"
    :request-handler="uploadChunk"
    :before-upload="initChunkUpload"
    :chunk-upload-completed="mergeChunks"
    use-chunked-upload
    :chunk-size="10*1024*1024"
    :max-concurrent-uploads="3"
    @file-upload-progress="onProgress"
  />
</template>

<script>
export default {
  methods: {
    async initChunkUpload(fileData) {
      if (!fileData.useChunked) return;

      // Initialize the chunked upload and obtain an uploadId
      const response = await this.$http.post('/api/upload/init', {
        fileName: fileData.file.name,
        fileSize: fileData.file.size,
        totalChunks: fileData.chunks
      });

      // Save the uploadId in the extended data
      fileData.extendData.uploadId = response.data.uploadId;

      // Register already-uploaded chunk indexes; the component skips them during upload
      // fileData.setUploadedChunks(fileData.id, response.data.uploadedChunks || []);
      // If the server supports resuming, return the list of uploaded chunks
      return response.data;
    },

    uploadChunk({ chunk, chunkIndex, fileData: chunkFileData }) {
      const formData = new FormData();
      formData.append('file', chunk);
      formData.append('uploadId', chunkFileData.extendData.uploadId);
      formData.append('chunkIndex', chunkIndex);
      formData.append('totalChunks', chunkFileData.chunks);

      return {
        url: '/api/upload/chunk',
        method: 'POST',
        data: formData
      };
    },

    async mergeChunks(fileData) {
      // All chunks are uploaded; ask the server to merge them
      const response = await this.$http.post('/api/upload/merge', {
        uploadId: fileData.extendData.uploadId,
        fileName: fileData.file.name,
        totalChunks: fileData.chunks
      });

      return response.data;
    },

    onProgress(fileData) {
      console.log(`${fileData.name} upload progress: ${fileData.progress}%`);
      console.log(`Upload speed: ${fileData.speed}`);
      console.log(`Remaining time: ${fileData.remainingTime}`);
    }
  }
}
</script>

2.3 UI Presentation

Multiple display modes

The component ships with two main display modes:

  1. List mode (default): suited to documents, videos, and other general files

List mode

  2. Picture-card mode (picture-card): optimized for image uploads, with thumbnail previews

Picture-card mode

<!-- Picture-card mode -->
<Vue2NiubilityUploader
  :request-handler="requestHandler"
  list-type="picture-card"
  accept="image/*"
/>

Real-time progress feedback

Each uploading file displays:

  • Upload progress percentage
  • Real-time upload speed
  • Estimated remaining time

This information is updated in real time on the FileData object, so users always know where their upload stands.

3. How It Works

3.1 Chunked Upload

Chunked upload is one of the core techniques in vue2-niubility-uploader. The workflow:

  1. File slicing: the file is split into pieces of the configured chunkSize
  2. Concurrent upload: several chunks are uploaded in parallel, bounded by the maxConcurrentUploads setting
  3. Progress tracking: each chunk tracks its own upload progress, which is aggregated into an overall value
  4. Chunk merging: once every chunk is uploaded, a server endpoint is called to merge them
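
The slicing in step 1 can be done with Blob.slice. The sketch below is illustrative only (sliceFile is a hypothetical helper, not the component's actual API), assuming chunkSize matches the chunkSize prop:

```javascript
// Illustrative only: split a File/Blob into chunkSize-byte pieces.
// Blob.slice records byte ranges without copying data, so this is cheap.
function sliceFile(file, chunkSize) {
  const chunks = [];
  for (let start = 0; start < file.size; start += chunkSize) {
    chunks.push(file.slice(start, Math.min(start + chunkSize, file.size)));
  }
  return chunks;
}
```

Each resulting Blob can then be appended to a FormData object as one chunk's body.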

The core data structure, FileData, carries everything chunked upload needs:

interface FileData {
  id: string;
  file: File;
  useChunked: boolean;           // whether chunked upload is used
  chunks: number;                // total number of chunks
  currentChunk: number;          // index of the chunk currently uploading
  uploadedChunks: number;        // number of chunks uploaded so far
  chunkQueue: number[];          // queue of pending chunk indexes
  activeChunks: number;          // number of chunk uploads currently in flight
  uploadedChunkSet: Set<number>; // set of uploaded chunk indexes (for resuming)
  chunkProgressMap: Map<number, number>; // per-chunk upload progress
  // ... other properties
}

3.2 Resumable Upload

The key to resumable upload is recording and restoring upload state:

  1. State recording: uploadedChunkSet records the indexes of successfully uploaded chunks
  2. Progress restoration: when an upload resumes after a pause, already-uploaded chunks are skipped
  3. Chunk verification: the server can optionally verify the integrity of uploaded chunks

The key logic:

// Before uploading, ask the server which chunks already exist
async onBeforeUpload(fileData) {
  if (fileData.useChunked) {
    // Initialize the chunked upload and fetch the list of uploaded chunks
    const response = await fetch('/api/upload/init', {
      method: 'POST',
      body: JSON.stringify({
        fileName: fileData.file.name,
        fileSize: fileData.file.size
      })
    });

    const data = await response.json();
    // Mark the already-uploaded chunks so they are removed from the queue
    fileData.uploadedChunkSet = new Set(data.uploadedChunks || []);
  }
}

3.3 Concurrency Control

To avoid overwhelming the browser or the server with too many simultaneous requests, the component applies concurrency control at several levels:

  • Global limit: maxConcurrentUploads bounds how many files upload at once
  • Per-file chunk limit: the chunks of a single large file are also uploaded with bounded concurrency
  • Queue management: uploads beyond the concurrency limit wait in a queue
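
The queueing behaviour above can be approximated with a small promise pool. runWithLimit is a hypothetical sketch of the idea, not the component's internal implementation:

```javascript
// Run async task factories with at most `limit` of them in flight.
// Tasks beyond the limit implicitly wait until a worker is free.
async function runWithLimit(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++; // claim the next queued task
      results[i] = await tasks[i]();
    }
  }
  // Spawn at most `limit` workers that drain the shared queue
  await Promise.all(Array.from({ length: Math.min(limit, tasks.length) }, worker));
  return results;
}
```

The same pattern works for whole files and for the chunks of a single file.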

3.4 Progress Calculation and Speed Estimation

Progress calculation

The component listens to the XMLHttpRequest progress event to update upload progress in real time:

xhr.upload.addEventListener('progress', (event) => {
  if (event.lengthComputable) {
    const progress = (event.loaded / event.total) * 100;
    // Update the progress property on FileData
  }
});

For chunked uploads, the overall progress is the size-weighted average of the per-chunk progress values.
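
A minimal sketch of that weighted average, assuming chunkProgressMap maps chunk index to a 0-100 percentage (totalProgress is an illustrative helper, not the component's API):

```javascript
// Overall progress as a size-weighted average of per-chunk progress.
// The last chunk may be smaller than chunkSize, so weight by actual bytes.
function totalProgress(fileSize, chunkSize, chunkProgressMap) {
  let loaded = 0;
  for (const [index, percent] of chunkProgressMap) {
    const start = index * chunkSize;
    const size = Math.min(chunkSize, fileSize - start);
    loaded += (percent / 100) * size;
  }
  return (loaded / fileSize) * 100;
}
```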

Speed calculation

Upload speed is derived by sampling:

  1. Periodically record the number of uploaded bytes and a timestamp
  2. Compute the byte delta over each interval
  3. Smooth out fluctuations with a moving average

// Speed calculation sketch
const currentBytes = fileData.loaded;
const currentTime = Date.now();
const deltaBytes = currentBytes - fileData.lastUploadedBytes;
const deltaTime = currentTime - fileData.lastUpdateTime;
const speed = deltaBytes / (deltaTime / 1000); // bytes/s

// Smooth the speed with a sliding window of samples
fileData.speedSamples.push(speed);
if (fileData.speedSamples.length > 5) {
  fileData.speedSamples.shift();
}
const avgSpeed = fileData.speedSamples.reduce((a, b) => a + b) / fileData.speedSamples.length;

Remaining time estimation

The remaining time is predicted from the current speed and the bytes still to upload:

const remainingBytes = fileData.size - fileData.loaded;
const remainingTime = remainingBytes / avgSpeed; // seconds
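
For display, the predicted seconds are usually formatted into a human-readable string. formatRemainingTime below is a hypothetical formatting helper, not part of the component's API:

```javascript
// Format a remaining-time prediction (in seconds) for display.
// Guards against the Infinity/NaN a zero speed would produce.
function formatRemainingTime(seconds) {
  if (!isFinite(seconds) || seconds < 0) return '--';
  const s = Math.round(seconds);
  if (s < 60) return `${s}s`;
  if (s < 3600) return `${Math.floor(s / 60)}m ${s % 60}s`;
  return `${Math.floor(s / 3600)}h ${Math.floor((s % 3600) / 60)}m`;
}
```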

4. Customizing the UI

Slots give you full control over how the file list is rendered:

<template>
  <Vue2NiubilityUploader
    :request-handler="requestHandler"
    multiple
  >
    <!-- Custom file item -->
    <template #file-item="{ fileData }">
      <div class="custom-file-item">
        <div class="file-info">
          <img :src="getFileIcon(fileData.file)" class="file-icon" />
          <div class="file-details">
            <div class="file-name">{{ fileData.name }}</div>
            <div class="file-size">{{ formatSize(fileData.size) }}</div>
          </div>
        </div>

        <div class="file-progress" v-if="fileData.status === 'uploading'">
          <div class="progress-bar">
            <div
              class="progress-fill"
              :style="{ width: fileData.progress + '%' }"
            ></div>
          </div>
          <div class="progress-info">
            <span>{{ fileData.speed }}</span>
            <span>{{ fileData.remainingTime }}</span>
          </div>
        </div>

        <div class="file-actions">
          <button
            v-if="fileData.status === 'uploading'"
            @click="pauseUpload(fileData)"
          >
            Pause
          </button>
          <button
            v-if="fileData.status === 'paused'"
            @click="resumeUpload(fileData)"
          >
            Resume
          </button>
          <button @click="removeFile(fileData)">Remove</button>
        </div>
      </div>
    </template>
  </Vue2NiubilityUploader>
</template>

<script>
export default {
  methods: {
    getFileIcon(file) {
      const ext = file.name.split('.').pop().toLowerCase();
      const iconMap = {
        pdf: '/icons/pdf.png',
        doc: '/icons/word.png',
        docx: '/icons/word.png',
        xls: '/icons/excel.png',
        xlsx: '/icons/excel.png',
      };
      return iconMap[ext] || '/icons/file.png';
    },

    formatSize(bytes) {
      if (bytes < 1024) return bytes + ' B';
      if (bytes < 1024 * 1024) return (bytes / 1024).toFixed(2) + ' KB';
      return (bytes / 1024 / 1024).toFixed(2) + ' MB';
    }
  }
}
</script>

5. Advanced Configuration and Optimization

5.1 Tuning Concurrency

In practice, a sensible concurrency setting can noticeably improve upload throughput:

<Vue2NiubilityUploader
  :request-handler="requestHandler"
  :max-concurrent-uploads="5"
  use-chunked-upload
  :chunk-size="5*1024*1024"
/>

Suggested settings:

  • Small files (< 10MB): concurrency 5-10
  • Large chunked uploads: concurrency 3-5
  • Mobile networks: concurrency 2-3
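
These suggestions can be encoded in a small helper. The thresholds and the idea of keying off the Network Information API's effectiveType are assumptions for illustration, not part of the component:

```javascript
// Pick a max-concurrent-uploads value from the suggestions above.
// effectiveType would come from navigator.connection.effectiveType ('2g'...'4g').
function suggestConcurrency(fileSize, effectiveType) {
  if (effectiveType === '2g' || effectiveType === '3g') return 2; // mobile networks: 2-3
  if (fileSize < 10 * 1024 * 1024) return 8;                      // small files: 5-10
  return 4;                                                        // large chunked uploads: 3-5
}
```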

5.2 Customizing Requests

requestHandler gives you complete control over each request:

requestHandler(fileData) {
  const { file, isUploadChunk, chunkIndex, chunk, fileData: chunkFileData } = fileData;

  // Return a different request configuration depending on the context
  if (isUploadChunk) {
    // Chunked upload
    return {
      url: '/api/upload/chunk',
      method: 'POST',
      data: this.buildChunkFormData(chunk, chunkFileData, chunkIndex),
      headers: {
        'Authorization': `Bearer ${this.token}`,
        'X-Upload-Id': chunkFileData.extendData.uploadId
      },
      // Custom timeout
      timeout: 60000,
      // Custom upload-progress hook
      onUploadProgress: (progressEvent) => {
        // Extra progress handling can go here
      }
      }
    };
  } else {
    // Regular upload
    return {
      url: '/api/upload',
      method: 'POST',
      data: { file, name: file.name }
    };
  }
}

5.3 Error Handling and Retries

The component ships with solid built-in error handling, and you can customize it by listening to events:

<template>
  <Vue2NiubilityUploader
    ref="uploader"
    :request-handler="requestHandler"
    @file-upload-error="onUploadError"
    @file-error="onFileError"
  />
</template>

<script>
export default {
  data() {
    return {
      retryCount: 0,
      maxRetries: 3
    }
  },

  methods: {
    async onUploadError({ fileData, error }) {
      console.error('Upload failed:', error);

      // Automatic retry logic
      if (this.retryCount < this.maxRetries) {
        this.retryCount++;
        this.$message.warning(`Upload failed, retrying (${this.retryCount}/${this.maxRetries})`);

        // Retry after a 2-second delay
        await new Promise(resolve => setTimeout(resolve, 2000));
        this.$refs.uploader.retryUpload(fileData);
      } else {
        this.$message.error('Upload failed. Please check your network and try again.');
        this.retryCount = 0;
      }
    },

    onFileError(errorInfo) {
      // File validation errors
      const errorMessages = {
        'exceed-limit': 'File count exceeds the limit',
        'exceed-size': 'File size exceeds the limit',
        'invalid-type': 'File type is not allowed'
      };

      this.$message.error(errorMessages[errorInfo.type] || errorInfo.message);
    }
  }
}
</script>

6. A Node.js Server Example

const express = require('express');
const multer = require('multer');
const path = require('path');
const fs = require('fs');
const cors = require('cors');
const { formidable } = require('formidable');

// Create Express app
const app = express();
const PORT = process.env.PORT || 3001;

// Enable CORS
app.use(cors());

// Middleware to parse JSON
app.use(express.json({ limit: '50mb' }));
app.use(express.urlencoded({ extended: true, limit: '50mb' }));

// Create upload directory if it doesn't exist
const uploadDir = path.join(__dirname, 'temp');
if (!fs.existsSync(uploadDir)) {
  fs.mkdirSync(uploadDir, { recursive: true });
}

// Temporary directory for chunked uploads
const tempDir = path.join(__dirname, 'chunk-temp');
if (!fs.existsSync(tempDir)) {
  fs.mkdirSync(tempDir, { recursive: true });
}

// Configure multer for regular file uploads
const storage = multer.diskStorage({
  destination: (req, file, cb) => {
    cb(null, uploadDir);
  },
  filename: (req, file, cb) => {
    // Use original filename with timestamp to avoid conflicts
    const mimeType = file.mimetype;
    const fileName = 'img.' + mimeType.split('/').pop().toLowerCase();
    console.log('multer.diskStorage, filename', fileName, file);
    const ext = path.extname(file.originalname || fileName);
    const name = path.basename(file.originalname || fileName, ext);
    const filename = `${name}_${Date.now()}${ext}`;
    cb(null, filename);
  }
});

const upload = multer({
  storage: storage,
  limits: {
    fileSize: 10 * 1024 * 1024 * 1024 // 10GB max file size
  }
});

// In-memory storage for upload sessions (in production, use Redis or database)
const uploadSessions = new Map();


/**
 * GET /health - Health check endpoint
 */
app.get('/health', (req, res) => {
  res.json({ status: 'OK', timestamp: new Date().toISOString() });
});

/**
 * POST /upload - Single file upload endpoint
 */
app.post('/upload', upload.single('file'), (req, res) => {
  try {
    if (!req.file) {
      return res.status(400).json({ error: 'No file uploaded' });
    }

    // The optional display name comes from the request body
    const name = req.body.name;
    // Return success response with file info
    res.json({
      success: true,
      message: 'File uploaded successfully',
      file: {
        filename: req.file.filename || name,
        originalName: req.file.originalname || name,
        size: req.file.size,
        path: req.file.path
      }
    });
  } catch (error) {
    console.error('Upload error:', error);
    res.status(500).json({ error: 'Failed to upload file' });
  }
});

/**
 * POST /upload/init - Initialize a chunked upload session
 */
app.post('/upload/init', async (req, res) => {
  try {
    const { fileName, fileSize, fileType, uploadId } = req.body;
    console.log('/upload/init', req.body);

    if (!fileName || !fileSize) {
      return res.status(400).json({ error: 'Missing required fields: fileName, fileSize' });
    }

    // Create session data
    const session = {
      uploadId,
      fileName,
      fileSize: parseInt(fileSize),
      fileType: fileType || '',
      uploadedSize: 0,
      totalChunks: 0,
      uploadedChunks: new Set(),
      createdAt: new Date().toISOString(),
      expiresAt: new Date(Date.now() + 60 * 60 * 1000).toISOString(), // 1 hour
      tempFilePath: path.join(tempDir, uploadId)
    };

    // Create the temporary directory for this upload
    if (!fs.existsSync(session.tempFilePath)) {
      try {
        fs.mkdirSync(session.tempFilePath, { recursive: true });
      } catch (err) {
        console.error('Failed to create chunk directory', err);
      }
    }

    // Store session
    uploadSessions.set(uploadId, session);

    // Clean up expired sessions periodically
    if (uploadSessions.size > 100) { // Only sweep once the map grows large
      console.log('/upload/init: sweeping expired sessions');
      const now = Date.now();
      for (const [id, s] of uploadSessions) {
        if (new Date(s.expiresAt).getTime() < now) {
          cleanupUploadSession(id);
        }
      }
    }

    // Return session info
    res.json({
      success: true,
      uploadId,
      message: 'Upload session initialized successfully'
    });

  } catch (error) {
    console.error('Init upload error:', error);
    res.status(500).json({ error: 'Failed to initialize upload session' });
  }
});

/**
 * Move a file across partitions (fs.rename fails with EXDEV across devices,
 * so copy via streams, then delete the source).
 * @param sourcePath source file path
 * @param targetPath target file path
 * @returns {Promise<void>}
 */
async function moveFileAcrossPartitions(sourcePath, targetPath) {
  try {
    // Make sure the target directory exists
    const targetDir = path.dirname(targetPath);
    fs.mkdirSync(targetDir, { recursive: true });

    // Stream the data from source to target
    const readStream = fs.createReadStream(sourcePath);
    const writeStream = fs.createWriteStream(targetPath);

    await new Promise((resolve, reject) => {
      readStream.pipe(writeStream)
        .on('finish', resolve)
        .on('error', reject);
    });

    // Remove the source file
    fs.unlinkSync(sourcePath);

    console.log(`File moved across partitions: ${sourcePath} -> ${targetPath}`);
  } catch (err) {
    console.error('Failed to move file:', err);
    throw err; // surface the failure so the chunk is not marked as uploaded
  }
}

app.post('/upload/chunk', async (req, res) => {
  try {

    const form = formidable({
      multiples: false,
      // maxFileSize: 100 * 1024 * 1024 // 100MB
    });

    form.parse(req, async (err, fields, files) => {
      if (err) {
        return res.status(500).json({
          success: false,
          message: 'Failed to parse form: ' + err.message
        });
      }

      try {
        // console.log('fields', fields);
        const { uploadId, chunkIndex, filename, chunk, totalChunks } = fields;
        const chunkFiles = files.file || [];

        const chunkIndexInt = parseInt(chunkIndex[0]);
        const totalChunksInt = parseInt(totalChunks[0]);
        // console.log('chunkFiles', chunkFiles);
        if (chunkFiles.length === 0) {
          return res.status(400).json({
            success: false,
            message: 'No chunk file received'
          });
        }


        if (!uploadId[0] || isNaN(chunkIndexInt) || isNaN(totalChunksInt)) {
          return res.status(400).json({ error: 'Missing required fields: uploadId, chunkIndex, totalChunks' });
        }

        // Check if upload session exists
        const session = uploadSessions.get(uploadId[0]);
        if (!session) {
          return res.status(404).json({ error: 'Upload session not found' });
        }

        // Check if chunk was already uploaded
        if (session.uploadedChunks.has(chunkIndexInt)) {
          return res.json({
            success: true,
            message: 'Chunk already uploaded',
            chunkIndex: chunkIndexInt,
            status: 'duplicate'
          });
        }

        // Move the temporary chunk file into the session directory
        const chunkPath = path.join(session.tempFilePath, `chunk_${chunkIndexInt}.tmp`);
        // fs.renameSync fails across partitions, so stream-copy instead
        await moveFileAcrossPartitions(chunkFiles[0].filepath, chunkPath);


        // Update session with chunk info
        session.uploadedChunks.add(chunkIndexInt);
        session.uploadedSize += chunkFiles[0].size || 0; // formidable exposes the chunk size as `size`
        session.totalChunks = totalChunksInt;

        // Update expiration time
        session.expiresAt = new Date(Date.now() + 60 * 60 * 1000).toISOString();

        // Return success response
        res.json({
          success: true,
          message: 'Chunk uploaded successfully',
          chunkIndex: chunkIndexInt,
          totalChunks: totalChunksInt,
          uploadedSize: session.uploadedSize,
          progress: Math.round((session.uploadedSize / session.fileSize) * 100)
        });

      } catch (error) {
        console.error(error);
        res.status(500).json({
          success: false,
          message: 'Chunk upload failed: ' + error.message
        });
      }
    });

  } catch (error) {
    console.error('Chunk upload error:', error);
    res.status(500).json({ error: 'Failed to upload chunk' });
  }
});

/**
 * POST /upload/finalize - Finalize a chunked upload
 */
app.post('/upload/finalize', async (req, res) => {
  try {
    const { uploadId, fileName, fileSize } = req.body;

    if (!uploadId) {
      return res.status(400).json({ error: 'Missing required field: uploadId' });
    }

    // Check if upload session exists
    const session = uploadSessions.get(uploadId);
    // console.log('/upload/finalize, session', uploadId, session, uploadSessions);
    if (!session) {
      return res.status(404).json({ error: 'Upload session not found' });
    }

    // Verify all chunks were uploaded
    if (session.uploadedChunks.size !== session.totalChunks) {
      const missingChunks = [];
      for (let i = 0; i < session.totalChunks; i++) {
        if (!session.uploadedChunks.has(i)) {
          missingChunks.push(i);
        }
      }

      return res.status(400).json({
        error: 'Not all chunks have been uploaded',
        missingChunks,
        uploadedChunks: Array.from(session.uploadedChunks),
        totalChunks: session.totalChunks
      });
    }

    // Verify file size matches
    if (fileSize && parseInt(fileSize) !== session.fileSize) {
      return res.status(400).json({
        error: 'File size mismatch',
        expected: session.fileSize,
        actual: fileSize
      });
    }

    // Reassemble the file from its chunks into the upload directory
    const finalFilePath = path.join(uploadDir, session.fileName);
    const writeStream = fs.createWriteStream(finalFilePath);

    // Sort chunks by index and pipe them in order
    const chunkFiles = fs.readdirSync(session.tempFilePath);
    const sortedChunks = chunkFiles
      .filter(f => f.startsWith('chunk_'))
      .sort((a, b) => {
        const indexA = parseInt(a.split('_')[1]);
        const indexB = parseInt(b.split('_')[1]);
        return indexA - indexB;
      });

    let chunksProcessed = 0;

    // console.log('/upload/finalize, sortedChunks', sortedChunks);
    // Process each chunk in sequence
    for (const chunkFile of sortedChunks) {
      const chunkPath = path.join(session.tempFilePath, chunkFile);
      const chunkData = fs.readFileSync(chunkPath);

      if (!writeStream.write(chunkData)) {
        // If the stream wants us to wait, wait until it's ready
        await new Promise(resolve => writeStream.once('drain', resolve));
      }

      chunksProcessed++;
    }

    // Close the write stream
    writeStream.end();

    // Wait for the stream to finish writing
    await new Promise((resolve, reject) => {
      writeStream.on('finish', resolve);
      writeStream.on('error', reject);
    });

    // Verify the final file size
    const finalStats = fs.statSync(finalFilePath);
    if (finalStats.size !== session.fileSize) {
      // Clean up and return error
      fs.unlinkSync(finalFilePath);
      cleanupUploadSession(uploadId);
      return res.status(500).json({
        error: 'Final file size does not match expected size',
        expected: session.fileSize,
        actual: finalStats.size,
        finalFilePath
      });
    }

    // Clean up temporary files
    cleanupUploadSession(uploadId);

    // Return success response
    res.json({
      success: true,
      message: 'File uploaded successfully',
      file: {
        filename: session.fileName,
        size: finalStats.size,
        path: finalFilePath
      }
    });

  } catch (error) {
    console.error('Finalize upload error:', error);
    res.status(500).json({ error: 'Failed to finalize upload' });
  }
});

/**
 * Clean up upload session and temporary files
 * @param {string} uploadId - The upload session ID
 */
function cleanupUploadSession(uploadId) {
  const session = uploadSessions.get(uploadId);
  // console.log('cleanupUploadSession', uploadId, session);
  if (session) {
    // Remove temporary directory
    if (fs.existsSync(session.tempFilePath)) {
      console.log('cleanupUploadSession: removing temp directory', session.tempFilePath);
      try {
        fs.rmSync(session.tempFilePath, { recursive: true });
      } catch (error) {
        console.error(`Failed to remove temp directory for ${uploadId}:`, error);
      }
    }

    // Remove session from map
    uploadSessions.delete(uploadId);
  }
}

// Periodic cleanup of expired sessions (every hour)
setInterval(() => {
  const now = Date.now();
  for (const [id, session] of uploadSessions) {
    if (new Date(session.expiresAt).getTime() < now) {
      console.log(`Cleaning up expired upload session: ${id}`);
      cleanupUploadSession(id);
    }
  }
}, 60 * 60 * 1000); // Every hour

// Start server
app.listen(PORT, () => {
  console.log(`File upload server listening on port ${PORT}`);
  console.log(`Server address: http://localhost:${PORT}`);
  console.log(`Upload directory: ${uploadDir}`);
  console.log(`Temp directory: ${tempDir}`);
});

module.exports = app;