How can PHP developers optimize their code to prevent duplicate data entries when extracting information from a .csv file?

To prevent duplicate entries when extracting data from a .csv file, PHP developers can store the rows in an associative array keyed by each row's unique identifier. Because array keys are unique, an isset() check detects a duplicate in constant time, so repeated rows can be skipped before they are ever inserted.

<?php

$csvFile = 'data.csv';
$data = [];

if (($handle = fopen($csvFile, 'r')) !== false) {
    while (($row = fgetcsv($handle)) !== false) {
        // Assuming the first column holds the unique identifier
        $uniqueKey = $row[0];

        // isset() is an O(1) key lookup, so the duplicate check stays cheap
        if (!isset($data[$uniqueKey])) {
            $data[$uniqueKey] = $row;
        }
    }
    fclose($handle);
}

// $data now contains one row per unique identifier from the .csv file
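
Once the array is built, the deduplicated rows can be written back out with fputcsv(). Below is a minimal sketch that continues the same script (it relies on $data from the loop above); the output filename data_unique.csv is an assumption for illustration:

// Continuing the script above: write the unique rows to a new file.
// The output path 'data_unique.csv' is a hypothetical example.
if (($out = fopen('data_unique.csv', 'w')) !== false) {
    foreach ($data as $row) {
        fputcsv($out, $row); // One CSV-formatted line per unique entry
    }
    fclose($out);
}

If no single column uniquely identifies a row, the same approach works with a composite key, for example $uniqueKey = $row[0] . '|' . $row[1]; (assuming columns 0 and 1 together identify a row).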