Appendix B: Varnish Programs¶
Varnish Shared memory Log (VSL) tools:
varnishlog
varnishncsa
varnishhist
varnishtop
Administration:
varnishadm
Global counters:
varnishstat
Design and debug:
varnishtest
varnishlog, varnishadm and varnishstat are explained in the Examining Varnish Server’s Output chapter.
The next sections explain varnishtop, varnishncsa, and varnishhist.
varnishtop¶
$varnishtop -i BereqURL,RespStatus
list length 5 trusty-amd64
7.20 RespStatus 200
5.26 RespStatus 404
0.86 BereqURL /test.html
0.68 BereqURL /
0.39 BereqURL /index.html
- Group tags and tag-content by frequency
varnishtop groups tags and their content together to generate a sorted list of the most frequently appearing tag/tag-content pairs.
This tool is sometimes overlooked, because its usefulness only becomes visible once you start filtering.
The above example lists status codes that Varnish returns.
Two of the most useful variants of varnishtop are:
varnishtop -i BereqURL: creates a list of URLs requested at the backend. Use this to find out which URL is fetched most often.
varnishtop -i RespStatus: lists the status codes Varnish returns to clients.
varnishtop uses the varnishlog API; therefore, you may also combine tag lists as in the above example.
Moreover, you may apply Query Language -q options.
For example, varnishtop -q 'respstatus > 400' shows you counters for responses where the client seems to have erred.
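Combining both features, a hypothetical invocation (the tag combination and threshold are assumptions, not from the original text) could rank which URLs trigger backend errors:

```
$varnishtop -i BereqURL -q 'BerespStatus >= 500'
```

This sketch counts only backend request URLs belonging to transactions whose backend response status indicates a server error.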
Some other possibly useful examples are:
varnishtop -i ReqUrl: displays which URLs are most frequently requested by clients.
varnishtop -i ReqHeader -C 'User-Agent:.*Linux.*': lists User-Agent headers that contain the Linux string, matching the regular expression case-insensitively. This example is useful to filter requests from Linux users, since most web browsers on Linux report themselves as Linux.
varnishtop -i BerespStatus: lists status codes received from backends.
varnishtop -i VCL_call: shows which VCL functions are used.
varnishtop -i ReqHeader -I Referrer: shows the most common referrer addresses.
varnishncsa¶
10.10.0.1 - - [24/Aug/2008:03:46:48 +0100] "GET \
http://www.example.com/images/foo.png HTTP/1.1" 200 5330 \
"http://www.example.com/" "Mozilla/5.0"
If you already have tools in place to analyze the NCSA Common log format, varnishncsa can be used to print the VSL in this format.
varnishncsa dumps everything pointing to a certain domain and its subdomains.
Filtering works similarly to varnishlog.
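For instance, a -q query (a sketch; the host name here is an assumption for illustration) restricts the NCSA-formatted output to transactions for a given site:

```
$varnishncsa -q 'ReqHeader:Host ~ "example.com"'
```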
varnishhist¶
1:1, n = 71 localhost
#
#
#
#
##
###
###
###
###
###
| ###
| ###
| | ###
|||| ### #
|||| #### #
|##|##### # # # # #
+-------+-------+-------+-------+-------+-------+-------+-------+-------
|1e-6 |1e-5 |1e-4 |1e-3 |1e-2 |1e-1 |1e0 |1e1 |1e2
The varnishhist utility reads the VSL and presents a continuously updated histogram showing the distribution of the last n requests.
varnishhist is particularly useful to get an idea about the performance of your Varnish Cache server and your backend.
The horizontal axis shows a time range from 1e-6 (1 microsecond) to 1e2 (100 seconds). This time range shows the internal processing time of your Varnish Cache server and the time it takes to receive a response from the backend. Thus, this axis does not show the time perceived at the client side, because other factors such as network delay may affect the overall response time.
Hits are marked with a pipe character (“|”), and misses are marked with a hash character (“#”).
These markers are distributed according to the time taken to process each request.
Therefore, distributions with more markers on the left side represent faster performance.
When the histogram grows vertically beyond what the terminal can display, the 1:m ratio in the top left corner changes, where m represents the number of requests that each marker stands for. In the top right corner, you can see the name of the host.
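Since varnishhist uses the VSL API like the other tools, it also accepts query filters; a sketch (the URL pattern is an assumption for illustration) that plots the histogram for a subset of traffic only:

```
$varnishhist -q 'ReqURL ~ "^/api"'
```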
Exercise: Try varnishstat, varnishlog and varnishhist¶
- Send a few requests to Varnish using http -p hH http://localhost/.
- Verify that you have some cached objects using varnishstat.
- Look at the communication with the clients using varnishlog. Try sending various headers and see them appear in varnishlog.
- Install siege.
- Run siege against localhost while looking at varnishhist.
varnishtest¶
- Script-driven program used to test the configuration of Varnish, run regression tests, and develop VMODs
- Useful for system administrators, web developers, and VMOD developers
Varnish is distributed with many utility programs.
varnishtest is a script-driven program that allows you to create client mock-ups, simulate transactions, fetch content from mock-up or real backends, interact with your actual Varnish configuration, and assert expected behaviors.
You can use varnishtest when configuring your Varnish installation, i.e., writing VCL code, or when developing VMODs.
varnishtest has its own language: the Varnish Test Case (VTC) language.
This language has a fairly simple syntax.
In fact, when designing your caching algorithm or any other functionality in Varnish, we recommend you first write Varnish Test Cases (VTCs) as part of your design.
VTCs are also useful to reproduce bugs when filing a bug report.
There are many .vtc files included in Varnish Cache under bin/varnishtest/tests/.
Think of those files as a learning source.
Further documentation of varnishtest is found in its man page, in bin/varnishtest/tests/README and at https://www.varnish-cache.org/docs/trunk/reference/varnishtest.html.
The Varnish Test Case (VTC) Language¶
- Test against simulated or real backends
- Starts a real instance of varnishd
- Simulates clients
- Asserts using expect
- expect supports regular expressions
b00001.vtc
varnishtest "Varnish as Proxy"
server s1 {
rxreq
txresp
} -start
varnish v1 -arg "-b ${s1_addr}:${s1_port}" -start
client c1 {
txreq
rxresp
expect resp.http.via ~ "varnish"
} -run
varnishtest does not follow the unit testing framework (setup/test/assert/tear down) nor behavior-driven development (given/when/then).
Depending on your use case, there might be test preparations, executions and assertions all over the place.
VTC is not compiled, but simply interpreted on the fly.
There is a naming convention for VTC files.
Files starting with b, as in the example above, contain basic functionality tests.
The naming scheme is described in Varnish-Cache/bin/varnishtest/tests/README or at https://raw.githubusercontent.com/varnish/Varnish-Cache/master/bin/varnishtest/tests/README.
All VTC programs start by naming the test:
varnishtest "Varnish as Proxy"
In this example we declare a simulated origin server:
server s1 {
rxreq
txresp
} -start
All server declarations must start with s.
In the code above, s1 receives a request (rxreq) and transmits a response (txresp).
-start boots s1 and makes the macros ${s1_addr} and ${s1_port} available with the IP address and port of your simulated backend.
You may also start a declaration at a later point in your code, for example with server s1 -start.
To declare an instance of your real Varnish server:
varnish v1 -arg "-b ${s1_addr}:${s1_port}" -start
varnish v1 declares an instance of your real Varnish server, i.e., varnishd.
The names of Varnish servers must start with v.
This instance is controlled by the manager process, and -start forks a child, which is the actual cacher process.
You will learn about the manager and cacher in The Parent Process: The Manager and The Child Process: The Cacher sections.
There are many ways to configure varnishd.
One way is by passing arguments with -arg, as in -arg "-b ${s1_addr}:${s1_port}".
-b is a varnishd option to define the backend.
In this case, we use the IP address and port of the simulated backend s1, but you can also use a real backend.
Therefore, varnishtest can be used as an integration tool when testing your real backend.
There are other ways to define backends. The most common one is perhaps by defining them in your VCL code, as we shall see in the next section.
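As a sketch of that alternative (hypothetical here; VCL backend syntax is covered in the next section), the same simulated backend could be declared inside the VCL passed to v1, since varnishtest expands the ${s1_addr} and ${s1_port} macros inside VCL bodies too:

```
varnish v1 -vcl {
    backend default {
        .host = "${s1_addr}";
        .port = "${s1_port}";
    }
} -start
```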
To simulate a client:
client c1 {
txreq
rxresp
expect resp.http.via ~ "varnish"
} -run
Simulated clients in varnishtest start with c.
In this example, c1 transmits one request and receives one response.
Since Varnish is a proxy, we expect to receive the response from the backend via Varnish.
Therefore, c1 expects varnish in the Via HTTP header field.
We use the tilde ~ as the regular-expression match operator because the exact text in resp.http.via depends on the Varnish version you have installed.
Finally, you start client c1 with the -run command.
Synchronization in Varnish Tests¶
- Varnish is a multi-threaded program
- Use -wait as a synchronization mechanism
- -run: -start -wait
You might have noticed that we used -start for v1, but -run for c1.
The difference between these commands is that -run executes -start -wait.
Varnish is a multi-threaded program.
Therefore, each instance in varnishtest, i.e., s1, v1 and c1, is executed by a different thread.
Sometimes you will need some sort of synchronization mechanism to avoid race conditions or other non-intuitive behaviors.
For those cases, you can use the -wait command.
-wait tells the executor of varnishtest to wait for a given instance to complete before proceeding to the next instruction in your VTC program.
To illustrate this, see the difference between:
varnishtest "Synchronized"
server s1 {
rxreq
txresp
}
server s1 -start
client c1 -connect ${s1_sock} {
txreq
rxresp
}
# -run: -start -wait
client c1 -run
server s1 -wait
and:
varnishtest "Unsynchronized"
server s1 {
rxreq
txresp
}
server s1 -start -wait
client c1 -connect ${s1_sock} {
txreq
rxresp
}
client c1 -run
The second test fails, in contrast to the first one, because varnishtest times out while waiting for s1 to receive a request and transmit a response.
Therefore, you typically start Varnish servers with the -start command, but start clients with the -run command.
Note
You will learn more about the Threading Model of Varnish in its own section.
Note
Note that we do not instantiate a Varnish server in these examples, but connect the client directly to the server.
For that purpose we use the ${s1_sock} macro, which translates to the IP address and port of s1.
Running Your Varnish Test Cases¶
$varnishtest b00001.vtc
# top TEST b00001.vtc passed (1.458)
To run your test, you simply issue the command above.
By default, varnishtest outputs a summary for passed tests and verbose output for failed tests only.
If you always want verbose output, run varnishtest with the -v option.
A passed test means that you have the most basic Varnish configuration correct in the varnishtest testbed.
In the next section we explain how to configure Varnish the way you normally would, once your tests have passed or when the varnishtest testbed is not enough for your needs.
There is much more to explain about varnishtest, but before that, you must learn more about the fundamentals of Varnish.
We will introduce new concepts and make more advanced use of varnishtest as we progress through the book.
Exercise: Test Apache as Backend with varnishtest¶
- Use VTC to test the Server and Via HTTP header fields.
- Assert using expect.
If you need help, take a look at Solution: Test Apache as Backend with varnishtest.
Setting Parameters in varnishtest¶
vtc/b00003.vtc
varnishtest "Setting Parameters"
varnish v1 -arg "-b ${bad_ip}:9080"
# Good enough when you are sure that the value is well formatted
# and in a valid range:
varnish v1 -cli "param.set default_ttl 50"
# Setting parameters and asserting correct return status:
varnish v1 -cliok "param.set default_ttl 100"
varnish v1 -clierr "200" "param.set default_ttl 150"
varnish v1 -clierr "106" "param.set default_ttl -1"
# Bad test:
varnish v1 -cli "param.set default_ttl -1"
# Expecting wrong return status. Tests should fail:
# varnish v1 -cliok "param.set default_ttl -1"
# varnish v1 -clierr "106" "param.set default_ttl 20"
# varnish v1 -clierr "200" "param.set default_ttl -1"
Parameters can also be set in varnishtest.
To execute commands via the CLI, you have three options: -cli "command", -cliok "command" and -clierr "status" "command".
-cli executes a command without checking the return status.
-cliok executes a command and expects it to return the OK 200 status.
-clierr executes a command and checks whether the return status matches the expected one.
Note
You have to instruct varnishtest to assert the expected behavior as much as you can.
For example, varnish v1 -cli "param.set default_ttl -1" does not fail, because -cli does not assert the return status.
Note
The macro ${bad_ip} translates to 192.0.2.255.
This IP address is reserved for test use, and it is used here because we do not need a backend to set parameters in Varnish.
However, we must always declare at least one backend when varnishd is to be started.
Note
Note that we do not start v1, because in this example we do not need to start the cacher process.
Fetch Data with varnishtest¶
vtc/b00004.vtc
varnishtest "Age greater than 0"
server s1 {
rxreq
txresp
} -start
varnish v1 -arg "-b ${s1_addr}:${s1_port}" -start
client c1 {
txreq
rxresp
delay 1
txreq
rxresp
expect resp.http.Age == 1
} -run
You can use the delay command in varnishtest.
Its unit is seconds, and it also accepts floating-point numbers.
For more information about the Age response header field, refer to the Age subsection.
The Age value depends on the time to live (TTL) of the cached object.
We will learn more about it in The Initial Value of beresp.ttl section.
Understanding Expires in varnishtest¶
b00008.vtc
varnishtest "Expired objects"
server s1 {
rxreq
txresp -hdr "Date: Thu, 01 Jan 2015 00:00:00 GMT" \
-hdr "Expires: Thu, 01 Jan 2015 00:00:01 GMT"
} -start
varnish v1 -vcl+backend { } -start
varnish v1 -cliok "param.set default_grace 1"
varnish v1 -cliok "param.set default_keep 0" #default
client c1 {
txreq
rxresp
} -run
varnish v1 -expect n_expired == 0
delay 3
varnish v1 -expect n_expired == 1
In Varnish, an expired object is an object that has exceeded its TTL + grace + keep time.
In the example above, the Expires header field sets the TTL to 1 second, and we change default_grace from 10 to 1.
default_keep is already 0, but we set it explicitly anyway.
Tip
Take a look at s00000.vtc and s00001.vtc in Varnish-Cache/bin/varnishtest/tests/.
Tip
To get more information about n_expired, issue man varnish-counters.
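You can also read a single counter directly; a sketch using varnishstat's one-shot -1 option together with a -f field filter:

```
$varnishstat -1 -f MAIN.n_expired
```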
Example of Transactions in varnishtest¶
$varnishtest -v b00001.vtc
(...)
**** v1 0.3 vsl| 0 CLI - Rd vcl.load "boot" (...)
(...)
**** v1 0.4 vsl| 1000 Begin c sess 0 HTTP/1
(...)
**** v1 0.4 vsl| 1002 Begin b bereq 1001 fetch
(...)
**** v1 0.4 vsl| 1002 End b
**** v1 0.4 vsl| 1001 Begin c req 1000 rxreq
(...)
**** v1 0.4 vsl| 1001 End c
(...)
**** v1 0.4 vsl| 1000 End c
(...)
**** v1 0.5 vsl| 0 CLI - EOF on CLI connection (...)
Above is a snippet of how Varnish logs are displayed in varnishtest.
varnishtest does not group logs by default as varnishlog does.
Still, varnishtest allows you to group transactions for assertions with the logexpect command.
varnishtest starts client transactions at VXID 1000.
Note the VXID 0 for Varnish-specific records.
logexpect¶
- Allows you to assert log records
- Uses the varnishlog API
- Supports regular expressions
varnishtest "Assert in VSL"
server s1 {
rxreq
txresp
} -start
varnish v1 -arg "-b ${s1_addr}:${s1_port}" -start
logexpect l1 -v v1
logexpect l1 {
expect * * ReqURL /favicon.ico
} -start
client c1 {
txreq -url "/favicon.ico"
rxresp
} -run
logexpect l1 -wait
logexpect is a program that uses the varnishlog API.
Therefore, it is able to group and query the VSL just as varnishlog does.
In addition, logexpect allows you to assert what you expect to appear in the VSL.
Note logexpect l1 -wait at the end of the VTC script above.
Without it, the test would finish successfully without concluding the assertion in l1, because varnishtest would not wait for it.
-wait instructs the executor of varnishtest to wait until l1 is done.
Below is the synopsis of arguments and options of logexpect:
-v <varnish-instance>
-d <0|1> (head/tail mode)
-g <grouping-mode>
-q <query>
vsl arguments (vsl_arg.c)
-b Only display backend records
-c Only display client records
-C Caseless regular-expressions
-i <taglist> Include tags
-I <[taglist:]regex> Include by regex
-L <limit> Incomplete transaction limit
-T <seconds> Transaction end timeout
logexpect lN -v <id> [-g <grouping>] [-d 0|1] [-q query] [vsl arguments] {
expect <skip> <vxid> <tag> <regex>
}
skip: [uint|*] Max number of records to skip
vxid: [uint|*|=] vxid to match
tag: [tagname|*|=] Tag to match against
regex: regular-expression to match against (optional)
*: Match anything
=: Match value of last successfully matched record
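For example, a sketch combining these options (the grouping mode and URL are assumptions for illustration) that uses = to anchor a second record to the same transaction as the last match:

```
logexpect l1 -v v1 -g vxid {
    expect * * ReqMethod "GET"
    expect * = ReqURL "/index.html"
} -start
```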
Exercise: Assert Counters in varnishtest¶
- Write a Varnish test to check the counters for cache misses, cache hits, and number of cached objects.
- Use the cache_miss, cache_hit, and n_object counters, respectively.
Understanding Vary in varnishtest¶
vtc/c00002.vtc
varnishtest "Test Vary functionality"
server s1 {
# Backend VXID=1002
rxreq
expect req.url == "/same-url"
expect req.http.foobar == "1"
txresp -hdr "Vary: Foobar" -hdr "Snafu: 1" -body "1111\n"
# Backend VXID=1004
rxreq
expect req.url == "/same-url"
expect req.http.foobar == "2"
txresp -hdr "Vary: Foobar" -hdr "Snafu: 2" -body "2222\n"
} -start
varnish v1 -vcl+backend {} -start
client c1 {
txreq -url "/same-url" -hdr "Foobar: 1"
rxresp
expect resp.status == 200
# First client request with VXID=1001
# Request misses. Creates backend request with VXID=1002
expect resp.http.X-Varnish == "1001"
expect resp.http.snafu == "1"
expect resp.body == "1111\n"
txreq -url "/same-url" -hdr "Foobar: 2"
rxresp
expect resp.status == 200
# Client request with VXID=1003
# Request misses. Creates backend request with VXID=1004
expect resp.http.X-Varnish == "1003"
expect resp.http.snafu == "2"
expect resp.body == "2222\n"
txreq -url "/same-url" -hdr "Foobar: 1"
rxresp
expect resp.status == 200
# Client request with VXID=1005
# Request hits cached object created in VXID=1002
expect resp.http.X-Varnish == "1005 1002"
expect resp.http.snafu == "1"
expect resp.body == "1111\n"
} -run
In c00002.vtc, c1 requests /same-url three times.
Since the backend s1 returns Vary: Foobar, Varnish maps the cached object to both req.url and http.foobar.
Therefore, the second request misses the cached object and fetches from s1 a new variation mapped to Foobar: 2.
The third request from c1 matches both the req.url and http.foobar values of the first request.
Thus, this request does not trigger a backend request.
Recall that X-Varnish contains the transaction ID of the client request and, if applicable, the ID of the backend transaction that stored the delivered object.
You can see this behavior in the third request.
That also means that the VXID count does not increase to 1006 in the third client request.
If your backend returns Vary:, it must also handle situations where clients do not send the request header that identifies a variation.
For example, when c1 does not send the Foobar header:
txreq -url "/same-url"
rxresp
your backend should handle the lack of that header field specifically. You can test it as the following assertion shows:
rxreq
expect req.http.foobar == <undef>
txresp -hdr "Vary: Foobar" -hdr "Snafu: 3" -body "3333\n"
Be aware that the lack of a header field sent by a client is not the same as sending the field with an empty value. Therefore, requests like:
txreq -hdr "Foobar: "
should be handled in your backend specifically. You can test it as:
rxreq
expect req.http.foobar == ""
txresp -hdr "Vary: Foobar" -hdr "Snafu: 4" -body "4444\n"
c00002.vtc is a modified version, for teaching purposes, of Varnish-Cache/bin/varnishtest/tests/c00004.vtc.
We advise you to look at the many tests included in Varnish-Cache.
Next we cover four important header fields used in conditional requests: two validator fields, ETag and Last-Modified; and two precondition header fields, If-None-Match and If-Modified-Since.
Understanding Last-Modified and If-Modified-Since in varnishtest¶
vtc/b00007.vtc
varnishtest "Test Backend IMS"
server s1 {
# Request 1
rxreq
txresp -hdr "Last-Modified: Wed, 11 Sep 2013 13:36:55 GMT" \
-body "Geoff Rules"
# Request 2
rxreq
expect req.http.if-modified-since == "Wed, 11 Sep 2013 13:36:55 GMT"
txresp -status 304
# There will be no need to handle a third request in this example.
} -start
varnish v1 -vcl+backend {
sub vcl_backend_response {
set beresp.ttl = 2s;
set beresp.grace = 5s;
# beresp.was_304 is ``true`` if the response from the backend was
# a positive result of a conditional fetch (``304 Not Modified``).
set beresp.http.was-304 = beresp.was_304;
}
} -start
client c1 {
txreq
rxresp
expect resp.status == 200
expect resp.body == "Geoff Rules"
# this was not a conditional fetch
expect resp.http.was-304 == "false"
} -run
delay 3
# Age of object = 3 seconds. TTL has expired, but grace object
# is delivered. Backend fetch in background is triggered
client c1 {
txreq
rxresp
expect resp.status == 200
expect resp.body == "Geoff Rules"
expect resp.http.was-304 == "false"
} -run
delay 1
# Age of object = 1 second. TTL is *not* expired. Fresh cached object
# is delivered from previous backend fetch in background.
client c1 {
txreq
rxresp
expect resp.status == 200
expect resp.body == "Geoff Rules"
expect resp.http.was-304 == "true"
} -run
The example above is a modified version of Varnish-Cache/bin/varnishtest/tests/b00039.vtc, and it shows the usage of the Last-Modified and If-Modified-Since header fields.
The example introduces how to insert VCL code in varnishtest:
sub vcl_backend_response {
set beresp.ttl = 2s;
set beresp.grace = 5s;
# beresp.was_304 is ``true`` if the response from the backend was
# a positive result of a conditional fetch (``304 Not Modified``).
set beresp.http.was-304 = beresp.was_304;
}
You will learn all the details about VCL in the following sections, but for now it is enough to understand that this code sets the time to live (TTL) and grace time of cached objects to 2 and 5 seconds respectively. Recall the object lifetime from Fig. 2 to understand the expected behavior.
The code also adds an HTTP response header field was-304 with the boolean value of beresp.was_304.
This variable is set to true if the response from the backend was the positive result of a conditional fetch (304 Not Modified).
We hope that this exercise motivates you to use varnishtest when designing your cache policies.
As you can see, varnishtest is very precise when testing cached objects against different time settings.
Note
beresp.was_304 is a variable available since Varnish 4.1.
Understanding Cache-Control in varnishtest¶
b00009.vtc
varnishtest "Understanding Cache-Control"
server s1 {
rxreq
txresp -hdr "Cache-control: max-age=3" -body "FOO"
rxreq
txresp -body "FOOBAR"
} -start
varnish v1 -vcl+backend { } -start
varnish v1 -cliok "param.set default_grace 0"
varnish v1 -cliok "param.set default_ttl 1"
client c1 {
txreq
rxresp
expect resp.bodylen == 3
delay 2
txreq
rxresp
expect resp.bodylen == 3
} -run
The example above shows how Cache-control: max-age=3 overrides the TTL of cached objects.
The default TTL is 120 seconds, but we set it here to 1 just to show explicitly that the cached object has not expired after a delay of 2 seconds, because max-age=3.
Therefore, the body length in the second assertion:
expect resp.bodylen == 3
is 3 and not 6 (the size of FOOBAR).
If you are curious, you can remove:
-hdr "Cache-control: max-age=3"
from the first txresp, and you will see that the second response has a body length of 6.
Tip
Take a look at b00941.vtc, b00956.vtc and r01578.vtc in Varnish-Cache/bin/varnishtest/tests/ to learn more.
VCL in varnishtest¶
vtc/b00000.vtc:
varnish v1 -vcl {
sub vcl_recv {
if (req.url ~ "^/admin/"){
return (pass);
}
}
} -start
varnishtest allows you to insert VCL code with the -vcl directive when declaring a Varnish server.
This VCL code is inserted above the built-in subroutines in {varnish-source-code}/bin/varnishd/builtin.vcl.
Since builtin.vcl already includes vcl 4.0;, you do not need to add it in varnishtest.
varnishtest also allows you to insert VCL code from an external file using the include "foo.vcl"; directive, or to load VMODs using the import foo; directive.
For examples of how to use include and import, refer to the VTC files available in your Varnish distribution under the directory varnish-cache-plus/bin/varnishtest/tests/.
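A minimal sketch of the import directive (assuming the std VMOD, which ships with Varnish; the header name is made up for illustration):

```
varnish v1 -vcl+backend {
    import std;

    sub vcl_deliver {
        # Demonstrate calling a VMOD function from test VCL
        set resp.http.X-Upper = std.toupper("hello");
    }
} -start
```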
PURGE in varnishtest¶
vtc/b00012.vtc
varnishtest "PURGE with acl"
server s1 {
rxreq
txresp -hdr "foo: 1"
rxreq
txresp -hdr "foo: 2"
} -start
varnish v1 -vcl+backend {
acl purgers {
"127.0.0.1";
"192.168.0.0"/24;
}
sub vcl_recv {
if (req.method == "PURGE") {
if (!client.ip ~ purgers) {
return (synth(405));
}
return (purge);
}
}
} -start
client c1 {
txreq
rxresp
expect resp.http.foo == 1
txreq
rxresp
expect resp.http.foo == 1
txreq -req PURGE
rxresp
expect resp.msg == "Purged"
} -run
client c1 {
txreq
rxresp
expect resp.http.foo == 2
} -run
The example above is a modification of Varnish-Cache/bin/varnishtest/tests/b00036.vtc.
In this VTC you can see how the second request of c1 is served from the cached object.
After purging the implicit resource /, the third request from c1 fetches a new object from s1.
A quick way for you to double-check the acl authorization is by changing the IP address in it.
You should then see that a PURGE request coming from localhost does not pass the condition:
if (!client.ip ~ purgers)
Cache Invalidation in varnishtest¶
client c1 {
txreq -req BAN
rxresp
} -run
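A BAN request like the one above only invalidates anything if your VCL handles the method; a minimal sketch (hypothetical, mirroring the PURGE example earlier) could be:

```
sub vcl_recv {
    if (req.method == "BAN") {
        # Ban every cached object whose URL matches the request URL
        ban("req.url ~ " + req.url);
        return (synth(200, "Banned"));
    }
}
```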
You can send PURGE, BAN and REFRESH requests in varnishtest, so that your VCL program acts accordingly.
Remember that you still need to specify the requested URL in txreq if the URL is other than the root /.
We advise you to search for purge and ban in Varnish-Cache/bin/varnishtest/tests/ to learn more on how to invalidate caches.
Understanding Grace using varnishtest¶
Varnish-Cache/bin/varnishtest/tests/b00043.vtc:
varnishtest "Test stale-while-revalidate"
server s1 {
rxreq
txresp -hdr "Cache-Control: max-age=30, stale-while-revalidate=30"
rxreq
txresp -hdr "Cache-Control: max-age=0, stale-while-revalidate=30"
rxreq
txresp -hdr "Cache-Control: max-age=30, stale-while-revalidate=30" \
-hdr "Age: 40"
rxreq
txresp -status 500 \
-hdr "Cache-Control: max-age=30, stale-while-revalidate=30"
} -start
varnish v1 -vcl+backend {
sub vcl_backend_response {
set beresp.http.grace = beresp.grace;
set beresp.http.ttl = beresp.ttl;
}
} -start
client c1 {
txreq -url /1
rxresp
expect resp.http.grace == 30.000
expect resp.http.ttl == 30.000
txreq -url /2
rxresp
expect resp.http.grace == 30.000
expect resp.http.ttl == 0.000
txreq -url /3
rxresp
expect resp.http.grace == 30.000
expect resp.http.ttl == -10.000
txreq -url /4
rxresp
expect resp.http.grace == 10.000
expect resp.http.ttl == 0.000
} -run
Cache-Control sets max-age as the ttl and stale-while-revalidate as the grace.
ttl and grace are attributes of cached objects.
The VCL code in v1 includes these attributes in the HTTP response header fields http.ttl and http.grace that are sent to the client.
c1 asserts the values of these fields.
Exercise: Handle Cookies with Vary and hash_data() in varnishtest¶
In this exercise you have to use two caching techniques: first Vary and then hash_data().
The exercise uses the Cookie header field, but the same rules apply to any other field.
Vary: Part 1:
- Write a VTC program that forces Varnish to cache client requests with cookies.
- Send two client requests for the same URL; one for user Alice and one for user Bob.
- Does Varnish use different backend responses to build and deliver the response to the client?
- Make your simulated server send the Vary: Cookie response header field, then analyze the response to the client.
- Remove beresp.http.Vary in vcl_backend_response and see if Varnish still honors the Vary header.
Vary: Part 2:
- Purge the cached object for the resource /cookies.php.
- Check if it affects all, none or just one of the objects in the cache (e.g., change the value of the cookie and see if the PURGE method has purged all of them).
hash_data(): Part 1:
- Write another VTC program, or add conditions and asserts to differentiate requests handled by Vary and hash_data().
- Add hash_data(req.http.Cookie); in vcl_hash.
- Check how multiple values of Cookie give individual cached objects.
hash_data(): Part 2:
- Purge the cache again and check the result after using hash_data() instead of Vary: Cookie.
This exercise covers both the Vary and hash mechanisms.
After this exercise, you should have a very good idea of how Vary and hash_data() work.
If you need help, see Solution: Handle Cookies with Vary in varnishtest.
Understanding ESI in varnishtest¶
Varnish-Cache/bin/varnishtest/tests/e00004.vtc:
varnishtest "ESI commented include"
server s1 {
rxreq
txresp -body {
<html>
Before include
<!--esi <esi:include src="/body"/> -->
After include
}
rxreq
expect req.url == "/body"
txresp -body {
Included file
}
} -start
varnish v1 -vcl+backend {
sub vcl_backend_response {
set beresp.do_esi = true;
}
} -start
client c1 {
txreq
rxresp
expect resp.status == 200
expect resp.bodylen == 67
}
client c1 -run
varnish v1 -expect esi_errors == 0
e00004.vtc shows how ESI substitution works.
When Varnish reads <!--esi <esi:include src="/body"/> -->, it triggers a request for the URL /body.
The result of this request replaces the <!--esi --> tag.
Since there are two different requests, one for the root resource / and another for the /body resource, you can handle their cached objects separately.
For example, you can set different TTLs for them or apply different invalidation policies.
We have counted the expected body length after the substitution and assert it in the VTC, but if you do not trust us, you can easily see the replacement by executing:
varnishtest -v e00004.vtc | grep "chunk|"
In the result:
**** c1 0.4 chunk| \n
**** c1 0.4 chunk| \t\t<html>\n
**** c1 0.4 chunk| \t\tBefore include\n
**** c1 0.4 chunk| \t\t
**** c1 0.4 chunk| \n
**** c1 0.4 chunk| \t\tIncluded file\n
**** c1 0.4 chunk| \t \n
**** c1 0.4 chunk| \t\tAfter include\n
**** c1 0.4 chunk| \t
you can see the HTML document after ESI has been processed.