How can PHP be used to automate the process of extracting and saving data from multiple webpages?

To automate extracting and saving data from multiple webpages in PHP, combine web scraping with file handling: fetch each page's HTML with PHP's cURL extension, parse it with DOMDocument (optionally querying nodes via DOMXPath), and write the extracted data out with file handling functions like fopen and fwrite. Note that SimpleXMLElement only accepts well-formed XML, so DOMDocument is the safer choice for real-world HTML. To handle multiple pages, wrap the fetch-parse-save steps in a loop over a list of URLs, as shown in the second example below.

<?php
// URL of the webpage to scrape
$url = 'https://example.com';

// Initialize cURL session
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

// Execute cURL session and bail out on transport errors
$response = curl_exec($ch);
if ($response === false) {
    die('cURL error: ' . curl_error($ch));
}

// Close cURL session
curl_close($ch);

// Parse the HTML content (suppress warnings from malformed markup)
libxml_use_internal_errors(true);
$dom = new DOMDocument();
$dom->loadHTML($response);
libxml_clear_errors();

// Extract data from the webpage: the first <h1>, if one exists
$h1 = $dom->getElementsByTagName('h1')->item(0);
$data = $h1 !== null ? trim($h1->nodeValue) : '';

// Save the extracted data to a file
$file = fopen('extracted_data.txt', 'w');
if ($file === false) {
    die('Could not open extracted_data.txt for writing');
}
fwrite($file, $data);
fclose($file);

echo 'Data extracted and saved successfully!';
?>
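
To cover multiple webpages, the same fetch-parse-save steps can be wrapped in a loop over a list of URLs, reusing a single cURL handle and appending one line per page to the output file. The sketch below is a minimal illustration under that assumption; the example URLs, the extracted_data.txt filename, and the choice of the first <h1> as the target element are placeholders to adapt to the actual pages being scraped.

<?php
// Hypothetical list of pages to scrape; replace with your own URLs
$urls = [
    'https://example.com/page1',
    'https://example.com/page2',
    'https://example.com/page3',
];

// Suppress warnings from malformed real-world HTML
libxml_use_internal_errors(true);

// Open the output file once and reuse one cURL handle for all requests
$file = fopen('extracted_data.txt', 'w');
$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);

foreach ($urls as $url) {
    // Point the existing handle at the next URL and fetch it
    curl_setopt($ch, CURLOPT_URL, $url);
    $response = curl_exec($ch);
    if ($response === false) {
        fwrite($file, "$url\tERROR: " . curl_error($ch) . "\n");
        continue;
    }

    // Parse the page and pull out the first <h1>, if any
    $dom = new DOMDocument();
    $dom->loadHTML($response);
    $h1 = $dom->getElementsByTagName('h1')->item(0);
    $data = $h1 !== null ? trim($h1->nodeValue) : '(no <h1> found)';

    // Write one tab-separated line per page: URL, then extracted text
    fwrite($file, "$url\t$data\n");

    // Be polite to the target server: pause between requests
    sleep(1);
}

curl_close($ch);
fclose($file);

echo 'Data extracted and saved successfully!';
?>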