
Using VGG16 for Face and Gender Recognition

Published on 2024-11-12

How to build a face and gender recognition Python project using deep learning and VGG16.

What is deep learning?

Deep learning is a subfield of machine learning based on neural networks with three or more layers. These networks try to simulate the behavior of the human brain by learning from large amounts of data. While a neural network with a single layer can still make approximate predictions, additional hidden layers help optimize and refine the model for accuracy.

Deep learning improves automation by performing tasks without human intervention. Deep learning can be found in digital assistants, voice-enabled TV remotes, credit card fraud detection, and self-driving cars.

Building the Python project

Check out the full code on GitHub: https://github.com/alexiacismaru/face-recognision

Download the VGG Face Dataset and the Haar Cascade XML file used for face detection, both of which are needed for preprocessing in the face recognition task.

import os
import tarfile
from urllib import request

import cv2
import numpy as np

# download the VGG Face dataset
vgg_face_dataset_url = "http://www.robots.ox.ac.uk/~vgg/data/vgg_face/vgg_face_dataset.tar.gz"

with request.urlopen(vgg_face_dataset_url) as r, open(os.path.join(base_path, "vgg_face_dataset.tar.gz"), 'wb') as f:
  f.write(r.read())

# extract the VGG dataset
with tarfile.open(os.path.join(base_path, "vgg_face_dataset.tar.gz")) as f:
  f.extractall(os.path.join(base_path))

# download the trained Haar Cascade for face detection
trained_haarcascade_url = "https://raw.githubusercontent.com/opencv/opencv/master/data/haarcascades/haarcascade_frontalface_default.xml"
with request.urlopen(trained_haarcascade_url) as r, open(os.path.join(base_path, "haarcascade_frontalface_default.xml"), 'wb') as f:
    f.write(r.read())

faceCascade = cv2.CascadeClassifier(os.path.join(base_path, "haarcascade_frontalface_default.xml"))  # the Haar cascade detects faces in images

Selectively load and process a specific number of images for a set of predefined subjects from the VGG Face Dataset.

# populate the list with the files of the celebrities that will be used for face recognition
all_subjects = [
    subject
    for subject in sorted(os.listdir(os.path.join(base_path, "vgg_face_dataset", "files")))
    if (subject.startswith("Jesse_Eisenberg")
        or subject.startswith("Sarah_Hyland")
        or subject.startswith("Michael_Cera")
        or subject.startswith("Mila_Kunis"))
    and subject.endswith(".txt")
]

# define number of subjects and how many pictures to extract
nb_subjects = 4
nb_images_per_subject = 40

Iterate through each subject’s file by opening a text file associated with the subject and reading the contents. Each line in these files contains a URL to an image. For each URL (which points to an image), the code tries to load the image using urllib and convert it into a NumPy array.

from google.colab.patches import cv2_imshow  # Colab replacement for cv2.imshow

images = []

for subject in all_subjects[:nb_subjects]:
  with open(os.path.join(base_path, "vgg_face_dataset", "files", subject), 'r') as f:
    lines = f.readlines()

  images_ = []
  for line in lines:
    # each line contains a URL; slice out everything from "http://" through ".jpg"
    url = line[line.find("http://"): line.find(".jpg") + 4]

    try:
      res = request.urlopen(url)
      img = np.asarray(bytearray(res.read()), dtype="uint8")
      # convert the image data into a format suitable for OpenCV
      # images are colored
      img = cv2.imdecode(img, cv2.IMREAD_COLOR)
      h, w = img.shape[:2]
      images_.append(img)
      cv2_imshow(cv2.resize(img, (w // 5, h // 5)))

    except Exception:
      # skip URLs that are dead or return invalid image data
      pass

    # check if the required number of images has been reached
    if len(images_) == nb_images_per_subject:
      # add the list of images to the main images list and move to the next subject
      images.append(images_)
      break

Face detection setup


  1. Locate one or more faces in the image and mark each with a bounding box.
  2. Normalize the face so it is consistent with the database, e.g., in geometry and photometrics.
  3. Extract features from the face that can be used for the recognition task.
  4. Match the face to one or more known faces in a prepared database.
# create arrays for all 4 celebrities
jesse_images = []
michael_images = []
mila_images = []
sarah_images = []

faceCascade = cv2.CascadeClassifier(os.path.join(base_path, "haarcascade_frontalface_default.xml"))

# iterate over the subjects
for subject, images_ in zip(all_subjects, images):

  # create a grayscale copy to simplify the image and reduce computation
  for img in images_:
    img_ = img.copy()
    img_gray = cv2.cvtColor(img_, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        img_gray,
        scaleFactor=1.2,
        minNeighbors=5,
        minSize=(30, 30),
        flags=cv2.CASCADE_SCALE_IMAGE
    )
    print("Found {} face(s)!".format(len(faces)))

    for (x, y, w, h) in faces:
        cv2.rectangle(img_, (x, y), (x + w, y + h), (0, 255, 0), 10)

    h, w = img_.shape[:2]
    resized_img = cv2.resize(img_, (224, 224))
    cv2_imshow(resized_img)

    if "Jesse_Eisenberg" in subject:
        jesse_images.append(resized_img)
    elif "Michael_Cera" in subject:
        michael_images.append(resized_img)
    elif "Mila_Kunis" in subject:
        mila_images.append(resized_img)
    elif "Sarah_Hyland" in subject:
        sarah_images.append(resized_img)

The detectMultiScale method detects faces in the image and returns the coordinates of rectangles where it believes faces are located. For each detected face, a rectangle is drawn in the image to indicate its location, and each image is then resized to 224x224 pixels.

Split the dataset into a training and validation set:

  • The training set is used to train the machine learning model. It’s used to learn the patterns, features, and relationships within the data. The model adjusts its parameters to minimize errors in predictions or classifications made on the training data.
  • The validation set evaluates the model’s performance on a new set of data. This helps in checking how well the model generalizes to unseen data. The validation set should be an independent set that is not used during the training of the model(s). Mixing/using information from the validation set during training can lead to skewed results.
# create directories for saving faces
for person in ['train/male', 'train/female', 'valid/male', 'valid/female']:
  os.makedirs(os.path.join(base_path, "faces", person), exist_ok=True)
# 'exist_ok=True' parameter allows the function to run without error even if some directories already exist

def split_images(images, train_size):
    training_images = images[:train_size]
    validation_images = images[train_size:train_size + 10]
    return training_images, validation_images

michael_training, michael_testing = split_images(michael_images, 20)
mila_training, mila_testing = split_images(mila_images, 20)

jesse_testing = jesse_images[:10]
sarah_testing = sarah_images[:10]

# Save the pictures to an individual filename
def save_faces(images, directory, firstname, lastname):
    for i, img in enumerate(images):
        filename = os.path.join(base_path, "faces", directory, f"{firstname}_{lastname}_{i}.jpg")
        cv2.imwrite(filename, img)

# Save the split images
save_faces(michael_training, 'train/male', 'Michael', 'Cera')
save_faces(michael_testing, 'valid/male', 'Michael', 'Cera')
save_faces(mila_training, 'train/female', 'Mila', 'Kunis')
save_faces(mila_testing, 'valid/female', 'Mila', 'Kunis')

# Since Jesse and Sarah are only for testing, save them directly to the test directory
save_faces(jesse_testing, 'valid/male', 'Jesse', 'Eisenberg')
save_faces(sarah_testing, 'valid/female', 'Sarah', 'Hyland')

Data Augmentation

The accuracy of deep learning models depends on the quality, quantity, and contextual meaning of the training data, and collecting enough of it is one of the most common challenges in building such models: it can be costly and time-consuming. Data augmentation reduces the dependency on new training examples, making it possible to build high-precision models more quickly.

Data augmentation means artificially increasing the amount of data by generating new data points from existing data. This includes adding minor alterations to data or using machine learning models to generate new data points in the latent space of original data to amplify the dataset.

Synthetic data is artificially generated without using real-world images, typically by generative adversarial networks (GANs).

Augmented data derives from original images through minor geometric transformations (such as flipping, translation, rotation, or the addition of noise) that increase the diversity of the training set.

import Augmentor

pipeline_male = Augmentor.Pipeline(
    source_directory='/content/sample_data/deep_learning_assignment/faces/train/male',
    output_directory='/content/sample_data/deep_learning_assignment/faces/train_augmented/male'
)
pipeline_male.flip_left_right(probability=0.7)
pipeline_male.rotate(probability=0.7, max_left_rotation=10, max_right_rotation=10)
pipeline_male.greyscale(probability=0.1)
pipeline_male.sample(50)

pipeline_female = Augmentor.Pipeline(
    source_directory='/content/sample_data/deep_learning_assignment/faces/train/female',
    output_directory='/content/sample_data/deep_learning_assignment/faces/train_augmented/female'
)
pipeline_female.flip_left_right(probability=0.7)
pipeline_female.rotate(probability=0.7, max_left_rotation=10, max_right_rotation=10)
pipeline_female.greyscale(probability=0.1)
pipeline_female.sample(50)

Data augmentation improves the performance of ML models through more diverse datasets and reduces operation costs related to data collection:

  • Flip Left-Right: Images are randomly flipped horizontally with a probability of 0.7. This simulates the variation due to different orientations of subjects in images.
  • Rotation: The images are rotated slightly (up to 10 degrees in both directions) with a probability of 0.7. This adds variability to the dataset by simulating different head poses.
  • Greyscale Conversion: With a probability of 0.1, the images are converted to greyscale. This ensures the model can process and learn from images irrespective of their color information.
  • Sampling: The sample(50) method generates 50 augmented images from the original set. This expands the dataset, providing more data for the model to learn from.

Implementing the VGG16 model

VGG16 is a convolutional neural network widely used for image recognition and one of the best-known computer vision architectures. It consists of 16 layers of artificial neurons that process the image incrementally to improve accuracy. In VGG16, “VGG” refers to the Visual Geometry Group at the University of Oxford, while “16” refers to the network’s 16 weighted layers.

VGG16 is used for image recognition and classification of new images. The pre-trained version of the VGG16 network is trained on over one million images from the ImageNet visual database. VGG16 can be applied to determine whether an image contains certain items, animals, plants, and more.

VGG16 architecture


There are 13 convolutional layers, five max-pooling layers, and three dense layers, which makes 21 layers in total, 16 of which have learnable weights. VGG16 takes an input tensor of size 224x224 with 3 RGB channels. The model is distinctive for its convolution layers with 3x3 filters and stride 1, always using same padding, and its max-pooling layers with 2x2 filters and stride 2.

Conv-1 has 64 filters, Conv-2 has 128, Conv-3 has 256, and Conv-4 and Conv-5 have 512 each. These are followed by three fully connected layers: the first two have 4096 channels each, while the third performs 1000-way ILSVRC classification and so contains 1000 channels, one per class. The final layer is a softmax layer.
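If you want to verify these counts yourself, you can instantiate the full network and print its summary. A minimal sketch, assuming the TensorFlow/Keras distribution of VGG16 used later in this project:

from tensorflow.keras.applications import VGG16

# load the complete VGG16 network, including the three fully connected layers
full_vgg = VGG16(weights='imagenet', include_top=True)
full_vgg.summary()  # lists the 13 convolutional, 5 max-pooling, and 3 dense layers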

Start preparing the base model.

from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# set the layers of the base model to be non-trainable so only the new layers learn
for layer in base_model.layers:
    layer.trainable = False

To make sure that the model will classify the images correctly, we need to extend the model with additional layers.

x = base_model.output
x = GlobalAveragePooling2D()(x)

# dense layers
x = Dense(1024, activation='relu')(x)
x = Dense(512, activation='relu')(x)
x = Dense(256, activation='relu')(x)
# add a logistic layer for binary classification
x = Dense(1, activation='sigmoid')(x)

model = Model(inputs=base_model.input, outputs=x)

The GlobalAveragePooling2D layer condenses each feature map obtained from VGG16 into a single value, producing one compact 1D vector. It simplifies the output and reduces the total number of parameters, aiding in the prevention of overfitting.

The Dense layers are a sequence of fully connected (Dense) layers that are added. Each layer contains a specified number of units (1024, 512, and 256), chosen based on common practices and experimentation. These layers further process the features extracted by VGG16.

The final Dense layer (the Output layer) uses sigmoid activation suitable for binary classification (our two classes being ‘female’ and ‘male’).
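To see the parameter savings from Global Average Pooling concretely: for a 224x224 input, VGG16’s final convolutional block outputs 7x7x512 feature maps. A rough back-of-the-envelope comparison, assuming the Dense(1024) head used above:

# GAP reduces the 7x7x512 feature maps to a 512-dim vector before Dense(1024)
gap_params = 512 * 1024 + 1024              # ~0.5M weights and biases
# Flatten would instead produce 7*7*512 = 25088 values
flatten_params = 7 * 7 * 512 * 1024 + 1024  # ~25.7M weights and biases
print(gap_params, flatten_params)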

Adam Optimization

The Adam optimization algorithm is an extension of stochastic gradient descent that updates network weights iteratively based on the training data. The method is efficient when working with large problems involving a lot of data or parameters, and it requires relatively little memory.

This algorithm combines two gradient descent methodologies: momentum and Root Mean Square Propagation (RMSP).

Momentum accelerates gradient descent by using an exponentially weighted average of the gradients:

$w_{t+1} = w_t - \alpha m_t$, where $m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t$ and $g_t$ is the gradient of the loss with respect to $w_t$.

Root Mean Square Propagation (RMSP) is an adaptive learning-rate algorithm that improves on AdaGrad by taking an exponential moving average of the squared gradients:

$w_{t+1} = w_t - \dfrac{\alpha}{\sqrt{v_t} + \epsilon}\, g_t$, where $v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2$.

Since $m_t$ and $v_t$ are both initialized to 0 (see the updates above), they tend to be biased towards 0, because both $\beta_1$ and $\beta_2$ are close to 1. Adam fixes this problem by computing bias-corrected estimates $\hat{m}_t$ and $\hat{v}_t$. This also keeps the weights controlled while approaching the global minimum, preventing high oscillations near it. The formulas used are:

$\hat{m}_t = \dfrac{m_t}{1 - \beta_1^t}, \qquad \hat{v}_t = \dfrac{v_t}{1 - \beta_2^t}$

Intuitively, we adapt the gradient descent step after every iteration so that it remains controlled and unbiased throughout the process, hence the name Adam (adaptive moment estimation).

Now, instead of the raw moment estimates $m_t$ and $v_t$, we plug the bias-corrected $\hat{m}_t$ and $\hat{v}_t$ into the general update equation, giving:

$w_{t+1} = w_t - \hat{m}_t \left( \dfrac{\alpha}{\sqrt{\hat{v}_t} + \epsilon} \right)$

Source: Geeksforgeeks, https://www.geeksforgeeks.org/adam-optimizer/
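To make the update rule concrete, here is a minimal NumPy sketch of a single Adam step (illustrative only; the model below uses Keras’ built-in optimizer, and the hyperparameter values shown are just the common defaults):

import numpy as np

def adam_step(w, grad, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # update the biased first and second moment estimates
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    # bias-correct both estimates (t is the 1-based step number)
    m_hat = m / (1 - beta1**t)
    v_hat = v / (1 - beta2**t)
    # apply the parameter update
    w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v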

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()

Set up image data preprocessing, augmentation, and model training in a deep learning context.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator()

training_set = train_datagen.flow_from_directory(
    '/content/sample_data/deep_learning_assignment/faces/train_augmented',
    target_size=(224, 224),  # match the model's 224x224 input
    batch_size=30,
    class_mode='binary'
)

validation_datagen = ImageDataGenerator()  # generates batches of tensor image data; real-time augmentation could be configured here, but none is applied

validation_set = validation_datagen.flow_from_directory(
    '/content/sample_data/deep_learning_assignment/faces/valid',
    target_size=(224, 224),
    batch_size=30,
    class_mode='binary',
    shuffle=False  # keep a fixed order so predictions line up with filenames later
)

model.fit(training_set, epochs=10, validation_data=validation_set)
  • epochs: the number of epochs specifies how many times the entire training dataset is passed forward and backward through the neural network. Here the model goes through the training data 10 times. An epoch is one complete presentation of the dataset to the learning machine.
  • batch_size: this parameter defines the number of samples propagated through the network at one time. With a batch size of 30 (set on the generators above), the model takes 30 images at a time, processes them, updates the weights, and then proceeds to the next batch of 30 images.
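Together these two settings determine how many weight updates happen per epoch. A small sketch that reads the counts from the training generator itself (attribute names from Keras’ DirectoryIterator):

import math

# number of batches (weight updates) per epoch
steps_per_epoch = math.ceil(training_set.samples / training_set.batch_size)
print(steps_per_epoch)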

The model’s performance is evaluated by making predictions on the validation set. This gives an idea of how well the model performs on unseen data. A threshold is applied to these predictions to classify each image into one of two classes (“male” or “female”).

# Evaluate the model on the validation set
validation_loss, validation_accuracy = model.evaluate(validation_set)

print(f"Validation Accuracy: {validation_accuracy * 100:.2f}%")
print(f"Validation Loss: {validation_loss}")

# Make predictions on the validation set
validation_predictions = model.predict(validation_set)

# Apply threshold to determine class
threshold = 0.5
predicted_classes = (validation_predictions > threshold).astype(int)

# Display the predicted classes along with image names
for i in range(len(validation_set.filenames)):
    filename = validation_set.filenames[i]
    prediction = predicted_classes[i][0]  # Binary predictions, extract single value

    class_name = 'female' if prediction == 0 else 'male'  # flow_from_directory assigns class indices alphabetically: female=0, male=1
    print(f"Image: {filename}, Predicted Class: {class_name}\n")

Create a confusion matrix to visualize the accuracy.

from sklearn.metrics import confusion_matrix, roc_curve, auc
import seaborn as sns
import matplotlib.pyplot as plt

actual_labels = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
predictions =   [1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1]

cm = confusion_matrix(actual_labels, predictions)

sns.heatmap(cm, annot=True, fmt='d')
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.show()
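Hardcoding the label vectors works, but is fragile. Since the validation generator was created with shuffle=False, the same vectors can be derived from the generator itself. A sketch, reusing the model and the 0.5 threshold from above:

# ground-truth class indices in generator order (0 = female, 1 = male, assigned alphabetically)
actual_labels = validation_set.classes
# thresholded sigmoid outputs, flattened to a 1D vector
predictions = (model.predict(validation_set) > 0.5).astype(int).ravel()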

For binary classification, the Receiver Operating Characteristic (ROC) curve and Area Under Curve (AUC) are useful to understand the trade-offs between true positive rate and false positive rate.

fpr, tpr, thresholds = roc_curve(actual_labels, predictions)
roc_auc = auc(fpr, tpr)

plt.figure()
plt.plot(fpr, tpr, color='darkorange', lw=2, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic')
plt.legend(loc="lower right")
plt.show()

Conclusion

By combining deep learning and image processing algorithms, you can build a Python project that recognizes human faces and categorizes them as either male or female.

Originally published at: https://dev.to/alexia_cismaru_f49ed201c2/using-vgg16-for-face-and-gender-recognition-38a2