
Optimize HTML cache pipeline for minimum latency and maximum throughput (PHP 8.3+)#203

Draft

Copilot wants to merge 2 commits into main from copilot/optimize-html-cache-performance

Conversation


Copilot AI commented Mar 10, 2026

The HTML cache drop-in lacked pre-compressed serving and critical HTTP caching headers, and had several correctness issues that left performance and browser/CDN cacheability on the table. The cache lifetime was also hardcoded at 3600 s in the drop-in, ignoring the admin setting.

advanced-cache-template.php — hot path (every cached request)

  • Pre-compressed gzip serving: serves index.html.gz directly when Accept-Encoding: gzip is present and the file exists, eliminating per-request compression CPU entirely. Disables zlib.output_compression to prevent double-encoding.
  • Full HTTP caching headers: adds Last-Modified, Expires, Content-Length, Vary: Accept-Encoding.
  • If-Modified-Since support: 304 on date-based conditional GETs, not just ETag. Fixed comparison to < (not <=).
  • Apache-style ETag: "<hex-mtime>-<hex-size>" — includes file size so same-second rewrites invalidate correctly.
  • Runtime config: reads WPSC_CACHE_DIR/config.php (written by plugin on activation/save) so cache_lifetime tracks the admin setting.
  • PHP 8.3 typed class constants; str_starts_with / str_contains / str_ends_with; is_file() over file_exists(); parse_url false-safety (?: not ??).
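
The hot-path changes above can be sketched roughly as follows. This is an illustrative sketch, not the actual `advanced-cache-template.php` code: the helper names (`wpsc_etag`, `wpsc_serve_cached`) are hypothetical, and the `Expires` header (driven by `cache_lifetime` from `config.php`) is omitted for brevity.

```php
<?php
// Apache-style ETag: "<hex-mtime>-<hex-size>", so a same-second
// rewrite with a different size still produces a new validator.
function wpsc_etag(int $mtime, int $size): string
{
    return sprintf('"%x-%x"', $mtime, $size);
}

function wpsc_serve_cached(string $htmlFile): bool
{
    if (!is_file($htmlFile)) {
        return false;
    }
    $mtime = (int) filemtime($htmlFile);

    header('ETag: ' . wpsc_etag($mtime, (int) filesize($htmlFile)));
    header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $mtime) . ' GMT');
    header('Vary: Accept-Encoding');

    // Date-based conditional GET: respond 304 unless the cached file
    // is strictly newer than If-Modified-Since (the "<" comparison).
    $ims = strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE'] ?? '');
    if ($ims !== false && !($ims < $mtime)) {
        http_response_code(304);
        return true;
    }

    // Prefer the pre-compressed sibling when the client accepts gzip.
    $gz = $htmlFile . '.gz';
    if (is_file($gz) && str_contains($_SERVER['HTTP_ACCEPT_ENCODING'] ?? '', 'gzip')) {
        ini_set('zlib.output_compression', '0'); // avoid double-encoding
        header('Content-Encoding: gzip');
        header('Content-Length: ' . filesize($gz));
        readfile($gz);
    } else {
        header('Content-Length: ' . filesize($htmlFile));
        readfile($htmlFile);
    }
    return true;
}
```

Serving the `.gz` sibling via `readfile()` keeps the hot path free of any per-request compression work; the only CPU spent is header formatting and a couple of `stat` calls.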

HTMLCache.php — write path (once per uncached request)

  • Writes .html.gz alongside .html: gzencode($content, 6) via atomicWrite, so the drop-in always has a pre-compressed sibling to serve.
  • HTML minification: strips non-conditional HTML comments and collapses inter-tag whitespace runs. Applied after all other processors. preg_replace null-return guarded to avoid content corruption on invalid UTF-8.
  • text/html content-type guard: skips caching JSON/XML/REST responses.
  • Fix libxml_use_internal_errors leak: saves and restores previous state; was permanently silencing libxml errors for the remainder of the request.
  • PHP 8.3 typed class constants on BYPASS_PARAMS, MAX_QUERY_LEN, IGNORED_EXTENSIONS.
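
A minimal sketch of the write-path behaviour described above, assuming illustrative helper names (`wpsc_minify_html`, `wpsc_write_cache`) rather than the plugin's real API:

```php
<?php
function wpsc_minify_html(string $html): string
{
    // Strip HTML comments, but keep conditional <!--[if ...]> blocks.
    $out = preg_replace('/<!--(?!\[if).*?-->/s', '', $html);
    // Collapse inter-tag whitespace runs to a single space.
    if ($out !== null) {
        $out = preg_replace('/>\s+</', '> <', $out);
    }
    // preg_replace() returns null on failure (e.g. backtrack limit or
    // invalid UTF-8 with /u): fall back to the original content rather
    // than write corrupted HTML into the cache.
    return $out ?? $html;
}

function wpsc_write_cache(string $path, string $html): void
{
    $html = wpsc_minify_html($html);
    file_put_contents($path, $html, LOCK_EX);
    // Pre-compressed sibling for the drop-in's gzip fast path.
    file_put_contents($path . '.gz', gzencode($html, 6), LOCK_EX);
}
```

Compression level 6 is the usual latency/ratio sweet spot for gzip; since the `.gz` file is written once per uncached request and served many times, the cost is amortised away from the hot path.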

AbstractCacheDriver.php

  • LOCK_EX added to file_put_contents in atomicWrite for safer concurrent writes.
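
A minimal sketch of an atomic write with `LOCK_EX`, assuming the temp-file-then-rename shape a method like `atomicWrite` typically has (the function name and temp-file naming here are illustrative):

```php
<?php
function atomic_write(string $path, string $data): bool
{
    $tmp = $path . '.' . uniqid('tmp', true);
    // LOCK_EX guards the temp file against a concurrent writer that
    // happens to pick the same name; rename() within one filesystem
    // is atomic on POSIX, so readers never see a partial file.
    if (file_put_contents($tmp, $data, LOCK_EX) === false) {
        return false;
    }
    return rename($tmp, $path);
}
```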

Plugin.php

  • installAdvancedCache() always overwrites: was a no-op after first install, silently leaving a stale drop-in after plugin updates.
  • writeAdvancedCacheConfig(): serialises ['cache_lifetime' => N] to WPSC_CACHE_DIR/config.php using var_export + LOCK_EX. Called on activation and, via refreshServerConfig(), on wpscac_settings_updated. Logs failure under WP_DEBUG.
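
The config write can be sketched as below; the helper name `wpsc_write_config` is hypothetical, while the filename and array shape follow the description above:

```php
<?php
function wpsc_write_config(string $cacheDir, int $cacheLifetime): bool
{
    // var_export() emits valid PHP, so the drop-in can `include` the
    // file and get the array back with no parsing of its own. OPcache
    // will also cache the compiled file after the first request.
    $php = '<?php return ' . var_export(['cache_lifetime' => $cacheLifetime], true) . ';';
    return file_put_contents($cacheDir . '/config.php', $php, LOCK_EX) !== false;
}

// Drop-in side (sketch): pick up the admin setting, with a safe default.
// $config = is_file($file) ? (include $file) : ['cache_lifetime' => 3600];
```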
Original prompt

Please find out if the current workflow gives us the best possible HTML/documentation cache speed, for the best latency and maximum throughput for cached websites.
Try to optimize the code as well and find issues in the files/area around this topic. Use PHP 8.3+ optimizations and so on to give the most performant code possible here.

The user has attached the following file paths as relevant context:

  • WPS-Cache\includes\advanced-cache-template.php
  • WPS-Cache\src\Plugin.php
  • WPS-Cache\wps-cache.php
  • WPS-Cache\src\Cache\Drivers\HTMLCache.php

Created from VS Code.



Co-authored-by: Jumaron <41344875+Jumaron@users.noreply.github.com>
Copilot AI changed the title [WIP] Optimize HTML cache for better latency and throughput Optimize HTML cache pipeline for minimum latency and maximum throughput (PHP 8.3+) Mar 10, 2026