
OpenAI: Text to Speech

An example of how to listen to an article from Heise.

MP3 from Text with OpenAI

Create an .env file with an API key for OpenAI:

OPENAI_API_KEY=mySecretKey

According to the OpenAI documentation, this is sample code to generate speech from text:

from pathlib import Path
from openai import OpenAI
# added to the docs sample: load OPENAI_API_KEY from the .env file
from dotenv import load_dotenv
load_dotenv()

client = OpenAI()

speech_file_path = Path(__file__).parent / "speech.mp3"
response = client.audio.speech.create(
  model="tts-1",
  voice="alloy",
  input="Today is a wonderful day to build something people love!"
)

response.stream_to_file(speech_file_path)

To execute this sample I have to install openai first, plus python-dotenv for reading the .env file:

pip install openai python-dotenv

To play the MP3 file I have to install ffmpeg first:

sudo apt install ffmpeg

Create mp3 and play it:

# run sample code
python sample.py
# play soundfile
ffplay speech.mp3
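
Note: newer releases of the openai package deprecate response.stream_to_file on the plain response object in favor of a streaming response. A minimal sketch of that variant, assuming a current openai release:

from pathlib import Path
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI()
speech_file_path = Path(__file__).parent / "speech.mp3"

# stream the synthesized audio to disk while it is being generated
with client.audio.speech.with_streaming_response.create(
    model="tts-1",
    voice="alloy",
    input="Today is a wonderful day to build something people love!",
) as response:
    response.stream_to_file(speech_file_path)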

Play MP3 with Python

Install pygame:

pip install pygame

playmp3.py:

from pathlib import Path
import pygame

def play_mp3(file_path):
    pygame.mixer.init()
    pygame.mixer.music.load(file_path)
    pygame.mixer.music.play()

    # Keep the program running while the music plays
    while pygame.mixer.music.get_busy():
        pygame.time.Clock().tick(10)

# Usage
speech_file_path = Path(__file__).parent / "speech.mp3"
play_mp3(speech_file_path)
Run it:

python playmp3.py
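
A possible refinement of the busy-wait loop inside play_mp3, assuming playback should be abortable with Ctrl+C:

try:
    while pygame.mixer.music.get_busy():
        pygame.time.Clock().tick(10)
except KeyboardInterrupt:
    # stop playback and release the audio device on Ctrl+C
    pygame.mixer.music.stop()
    pygame.mixer.quit()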

Read Heise Article

from dotenv import load_dotenv
from pathlib import Path
from openai import OpenAI
import selenium.webdriver as webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup
import pygame

def scrape_website(website):
    print("Launching chrome browser...")
    service = Service()
    options = Options()
    options.add_argument("--headless=new")  # enable headless mode so the browser stays invisible; options.headless no longer works in current Selenium versions

    driver = webdriver.Chrome(service=service, options=options)

    try:
        driver.get(website)
        print("Page loaded...")
        html = driver.page_source
        return html
    finally:
        driver.quit()


def split_dom_content(dom_content, max_length=6000):
    # split a long string into chunks of at most max_length characters
    return [
        dom_content[i : i + max_length] for i in range(0, len(dom_content), max_length)
    ]


def scrape_heise_website(website):
    html  = scrape_website(website)

    # use BeautifulSoup to parse the HTML
    soup = BeautifulSoup(html, 'html.parser')

    # extract the article header and content
    # the header can usually be found in an <h1> tag
    header_title = soup.find('h1', {'class': 'a-article-header__title'}).get_text().strip()
    header_lead  = soup.find('p',  {'class': 'a-article-header__lead'}).get_text().strip()

    # the actual article content usually sits in a <div> tag with the class 'article-content'
    article_div = soup.find('div', {'class': 'article-content'})
    paragraphs = article_div.find_all('p') if article_div else []
    # remove the editor's initials ('redakteurskuerzel')
    for para in paragraphs:
        spans_to_remove = para.find_all('span', {'class': 'redakteurskuerzel'})
        for span in spans_to_remove:
            span.decompose()  # removes the tag completely from the tree

    article_content = "\n".join([para.get_text().strip() for para in paragraphs])

    return article_content
    # alternatively, return header and article content together:
    #result = "Header Title:" + header_title + "\nHeader Lead:" + header_lead + "\nContent:" + article_content
    #return result

def article_to_mp3(article_content):
    client = OpenAI()

    speech_file_path = Path(__file__).parent / "speech.mp3"
    response = client.audio.speech.create(
        model="tts-1",
        voice="alloy",
        input=article_content
    )

    response.stream_to_file(speech_file_path)

def play_mp3():
    speech_file_path = Path(__file__).parent / "speech.mp3"
    pygame.mixer.init()
    pygame.mixer.music.load(speech_file_path)
    pygame.mixer.music.play()

    # Keep the program running while the music plays
    while pygame.mixer.music.get_busy():
        pygame.time.Clock().tick(10)


# load the .env file
load_dotenv()
article_content = scrape_heise_website("https://www.heise.de/news/Streit-ueber-Kosten-Meta-kappt-Leitungen-zur-Telekom-9953162.html")
article_to_mp3(article_content)
play_mp3()
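
By the way, split_dom_content is defined but never used above, and the OpenAI TTS endpoint accepts at most 4096 characters of input, so very long articles will fail. A minimal sketch of how the helper could produce one MP3 per chunk (the function name article_to_mp3_chunked and the chunk size of 4000 are my own choices):

def article_to_mp3_chunked(article_content):
    client = OpenAI()

    # one MP3 per chunk, staying below the 4096-character input limit
    for i, chunk in enumerate(split_dom_content(article_content, max_length=4000)):
        speech_file_path = Path(__file__).parent / f"speech_{i}.mp3"
        response = client.audio.speech.create(
            model="tts-1",
            voice="alloy",
            input=chunk
        )
        response.stream_to_file(speech_file_path)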

Python environment variables

For my Python applications I want to read variables from an .env file, so configuration can be kept in a local file and changed flexibly.

Example .env file:

OPENAI_API_KEY=mySecretKey

The dotenv module comes from the python-dotenv package:

pip install python-dotenv

envexample.py:

import os
from dotenv import load_dotenv

# load the .env file
load_dotenv()
print("OPENAI_API_KEY: " + os.getenv("OPENAI_API_KEY"))

Run it:

python envexample.py
OPENAI_API_KEY: mySecretKey
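
os.getenv returns None when a variable is not set; a second argument supplies a default. A small sketch (OPENAI_MODEL is just a made-up example variable):

import os
from dotenv import load_dotenv

load_dotenv()
# fall back to "tts-1" when OPENAI_MODEL is not set in .env or the environment
model = os.getenv("OPENAI_MODEL", "tts-1")
print("OPENAI_MODEL: " + model)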

Web Scraping

Web scraping with Python in my WSL environment.

Python virtual Environment

Open a WSL terminal, switch into the folder of my web scraping project, and create a virtual environment first.

# Python virtual Environment
## Install
### On Debian/Ubuntu systems, you need to install the python3-venv package
sudo apt install python3.10-venv -y
python3 -m venv ai
## Activate
source ai/bin/activate

Visual Code

Open the Visual Studio Code IDE with code .

Change the Python interpreter to the one of the virtual environment.

When working in a terminal window inside VS Code, the virtual environment also has to be activated in that terminal window: source ai/bin/activate

Requirements

I put all external libraries and the specific versions my project relies on in a separate file: requirements.txt.
In Python projects this is considered best practice.

requirements.txt:

selenium

Installation:

pip install -r requirements.txt
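
Once everything works, the exact versions of all installed packages can be pinned back into the file with pip freeze:

pip freeze > requirements.txt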

Selenium

Selenium is a powerful automation tool for web browsers. It allows you to control web browsers programmatically, simulating user interactions like clicking buttons, filling out forms, and navigating between pages. This makes it ideal for tasks such as web testing, web scraping, and browser automation.

As of Selenium 4.6, Selenium downloads the correct driver for you. You shouldn’t need to do anything. If you are using the latest version of Selenium and you are getting an error, please turn on logging and file a bug report with that information. (Source)

So a manual installation of Google Chrome and the Google Chrome WebDriver is no longer required.
But I had to install some additional libraries on WSL:

sudo apt install libnss3 libgbm1 libasound2

Side note: Google Chrome

To find the missing libraries, I downloaded Google Chrome and repeatedly tried to start it until all missing libraries were installed.

Page to find download link:

https://googlechromelabs.github.io/chrome-for-testing/#stable

## Google Chrome
wget https://storage.googleapis.com/chrome-for-testing-public/129.0.6668.70/linux64/chrome-linux64.zip
unzip chrome-linux64.zip
mv chrome-linux64 chrome

## Google Chrome Webdriver
wget https://storage.googleapis.com/chrome-for-testing-public/129.0.6668.70/linux64/chromedriver-linux64.zip
unzip chromedriver-linux64.zip
mv chromedriver-linux64 chromedriver
cp chromedriver/chromedriver chrome/chromedriver

cd chrome
./chromedriver
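
Instead of starting the binary again and again, the missing shared libraries can also be listed in one go with ldd (a standard Linux tool):

## list shared libraries that cannot be resolved
ldd ./chrome | grep "not found"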

Scrape a single page

import selenium.webdriver as webdriver

def scrape_website(website):
    print("Launching chrome browser...")
    # Selenium Manager downloads a matching chromedriver automatically
    driver = webdriver.Chrome()

    try:
        driver.get(website)
        print("Page loaded...")
        html = driver.page_source
        return html
    finally:
        driver.quit()

print(scrape_website("https://www.selenium.dev/"))
Run it:

python scrape.py
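
page_source is read as soon as the initial document has loaded. For pages that render their content with JavaScript, it may be necessary to wait explicitly before grabbing the HTML; a sketch using Selenium's WebDriverWait (the 10-second timeout and the <h1> condition are arbitrary choices):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# inside scrape_website, after driver.get(website):
# wait up to 10 seconds until an <h1> element is present
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.TAG_NAME, "h1"))
)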

Scrape a Heise News Article

Extract the article header, the article lead, and the article body itself. Besides selenium, this also needs the beautifulsoup4 package, which provides the bs4 import:

import selenium.webdriver as webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup

def scrape_website(website):
    print("Launching chrome browser...")
    service = Service()
    options = Options()
    options.add_argument("--headless=new")  # enable headless mode so the browser stays invisible; options.headless no longer works in current Selenium versions
    driver = webdriver.Chrome(service=service, options=options)

    try:
        driver.get(website)
        print("Page loaded...")
        html = driver.page_source
        return html
    finally:
        driver.quit()


def split_dom_content(dom_content, max_length=6000):
    # split a long string into chunks of at most max_length characters
    return [
        dom_content[i : i + max_length] for i in range(0, len(dom_content), max_length)
    ]


def scrape_heise_website(website):
    html  = scrape_website(website)

    # use BeautifulSoup to parse the HTML
    soup = BeautifulSoup(html, 'html.parser')

    # extract the article header and content
    # the header can usually be found in an <h1> tag
    header_title = soup.find('h1', {'class': 'a-article-header__title'}).get_text().strip()
    header_lead  = soup.find('p',  {'class': 'a-article-header__lead'}).get_text().strip()

    # the actual article content usually sits in a <div> tag with the class 'article-content'
    article_div = soup.find('div', {'class': 'article-content'})
    paragraphs = article_div.find_all('p') if article_div else []
    # remove the editor's initials ('redakteurskuerzel')
    for para in paragraphs:
        spans_to_remove = para.find_all('span', {'class': 'redakteurskuerzel'})
        for span in spans_to_remove:
            span.decompose()  # removes the tag completely from the tree

    article_content = "\n".join([para.get_text().strip() for para in paragraphs])
    
    # return header and article content together
    result = "Header Title:" + header_title + "\nHeader Lead:" + header_lead + "\nContent:" + article_content
    return result

result = scrape_heise_website("https://www.heise.de/news/Streit-ueber-Kosten-Meta-kappt-Leitungen-zur-Telekom-9953162.html")
print(result)

OpenPDF

As an alternative to iText and PDFBox, I had a look at OpenPDF.

To familiarize myself with the library, I downloaded a few images from Pixabay, created a project on GitHub, and then generated a PDF with images step by step.

As you can see, the images do not fit into the table cell but stick out over its top edge.

I suspect a bug and have opened an issue.