Discussion:
[NFS] nfs server
Jeremy MAURO
2011-08-03 14:12:55 UTC
Hi everyone,

I am wondering if anyone has managed to set up an NFS server in an LXC
container (Linux distro: Debian Squeeze)?

Regards,
JM
Jeremy MAURO
2011-08-09 08:03:19 UTC
Hi everyone,

Any updates on this request? I am unable to set up an NFS server in the
container without an NFS server on the main server. Even once
nfs-kernel-server is set up on the main server, I am still unable to
mount the NFS exports:
On the main server:
[root@server]:/var/lib/lxc/nfsroot/rootfs/var/lib # showmount -e localhost
Export list for localhost:
[root@server]:/var/lib/lxc/nfsroot/rootfs/var/lib #

On the container:
[root@nfsroot]:~ # showmount -e localhost
Export list for localhost:
/var/nfsroot 10.10.0.0/16
[root@nfsroot]:~ # showmount -e 10.10.200.31
Export list for 10.10.200.31:
/var/nfsroot 10.10.0.0/16
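For context, an export list like the one above would normally come from
an /etc/exports entry in the container along these lines (the options
shown are illustrative assumptions, not taken from this thread):

```
/var/nfsroot 10.10.0.0/16(rw,sync,no_subtree_check)
```

After editing, exportfs -ra reloads the export table.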

And when trying to mount the export, the client just hangs:
[root@client-server]:/ # mount 10.10.200.31:/var/nfsroot /mnt/

Any idea?


Regards,
JM
Post by Jeremy MAURO
Hi everyone,
I am wondering if anyone has managed to set up an NFS server in an LXC
container (Linux distro: Debian Squeeze)?
Regards,
JM
_______________________________________________
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users
Daniel Lezcano
2011-08-09 13:10:44 UTC
Post by Jeremy MAURO
[ ... ]
Any idea?
I think the nfs server is not yet supported by the kernel.
AFAIR Kirill had a patchset to be merged mainline, but I don't know
what the status is.

Thanks
-- Daniel
Kirill A. Shutemov
2011-08-09 13:27:30 UTC
Post by Daniel Lezcano
Post by Jeremy MAURO
[ ... ]
Any idea?
I think the nfs server is not yet supported by the kernel.
AFAIR Kirill had a patchset to be merged mainline but I don't know what
is the status.
AFAIK, upstream has no objections to this patchset, but they don't
merge it. I don't know why.

Anyway, it's only a small part of the work that needs to be done to get
NFS working properly in containers.

Rob, do you have any status update? I don't follow the NFS mailing list
currently.
--
Kirill A. Shutemov
Rob Landley
2011-08-09 15:44:23 UTC
Post by Kirill A. Shutemov
[ ... ]
AFAIK, upstream has no objections to this patchset, but they don't
merge it. I don't know why.
Anyway, it's only a small part of the work that needs to be done to get
NFS working properly in containers.
Rob, do you have any status update? I don't follow the NFS mailing list
currently.
P.S. I believe the fixes to make cifs and p9 work already went in, and
FUSE already did, but it's been a while and I'll have to retest. (I
know I got 'em all to work, I _think_ all changes necessary to do so
went upstream.) If you're not tied to NFS, you have several options.
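For illustration, mounting those alternatives could look like the fstab
sketch below; the server address, share name, and options are made up,
not from this thread:

```
# /etc/fstab (sketch; address and share name are hypothetical)
//10.10.200.31/share  /mnt/cifs  cifs  guest                0 0
10.10.200.31          /mnt/p9    9p    trans=tcp,port=564   0 0
```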

(Getting NFSv4 to work was a crawling horror, due to its horribly
overcomplicated design wanting to merge different mount points into the
same superblock without even using the --bind mount mechanism, and to
make callbacks to kernel threads and userspace with no obvious ownership
rules... NFSv3 was a piece of cake in comparison, though I don't think I
ever got lockd to work properly there either. Of course, other network
filesystems never needed it...)

Rob
zorg
2011-08-12 06:57:37 UTC
Post by Rob Landley
[ ... ]
(Getting NFSv4 to work was a crawling horror due to its horrible
overcomplicated design wanting to merge different mount points into the
same superblock without even using the --bind mount mechanism, make
callbacks to kernel threads and userspace with no obvious ownership
rules... NFSv3 was a piece of cake in comparison, and I don't think I
ever got lockd to work properly there either. Of course, other network
filesystems never needed it...)
Rob
Hello,

does this mean that there is no chance of ever making an NFSv4 server
work in an LXC container?

zorg
Daniel Lezcano
2011-08-12 19:35:40 UTC
On 08/12/2011 08:57 AM, zorg wrote:

[ ... ]
Post by zorg
Post by Rob Landley
[ ... ]
Hello,
does this mean that there is no chance of ever making an NFSv4 server
work in an LXC container?
That is not how I read it. I think it is just saying that it is more
difficult to implement.
Gary Ballantyne
2011-08-15 05:56:32 UTC
Hi

Going back through the list, I couldn't find whether this has been resolved.

I had a similar problem today with a little over 40 containers:

# lxc-start -n gary
lxc-start: Too many open files - failed to inotify_init
lxc-start: failed to add utmp handler to mainloop
lxc-start: mainloop exited with an error
lxc-start: Device or resource busy - failed to remove cgroup '/cgroup/gary'

(On Ubuntu 10.10 (EC2), LXC 0.7.2. Installed with this recipe:
http://www.phenona.com/blog/using-lxc-linux-containers-in-amazon-ec2/ )

Appreciate your thoughts.

Cheers,

Gary
I could paste my configuration files if you think it'd help you
reproduce the issue.
Yes, please :)
Ok. The test host has a br0 interface which is not attached to any
physical interface:
auto br0
iface br0 inet static
address 192.168.0.1
netmask 255.255.0.0
broadcast 192.168.255.255
bridge_stp off
bridge_maxwait 5
pre-up /usr/sbin/brctl addbr br0
post-up /usr/sbin/brctl setfd br0 0
post-down /usr/sbin/brctl delbr br0
I use NAT for container access, translating to the host's eth0 address.
There is also a MARK rule that I use for bandwidth limiting. These are:
iptables -t mangle -A PREROUTING -i br0 -j MARK --set-mark 2
iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source $ETH0_IP
iptables -P FORWARD DROP
iptables -A FORWARD -i br0 -o eth0 -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
tc qdisc add dev eth0 root handle 1: htb
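As an aside, the MARK rule and the htb root qdisc above are usually tied
together by a per-container class plus an fw filter. A minimal sketch,
assuming mark 2 and a made-up rate (the real values live in Andre's
script, which this does not reproduce); it prints the commands rather
than running them, so it works as a dry run:

```shell
#!/bin/sh
# Emit the tc commands that attach a rate limit to traffic carrying a
# given firewall mark (as set by iptables -j MARK --set-mark $mark).
container_tc_cmds() {
    mark=$1   # firewall mark for this container (hypothetical: 2)
    rate=$2   # rate limit, e.g. 1mbit (hypothetical)
    echo "tc class add dev eth0 parent 1: classid 1:$mark htb rate $rate"
    echo "tc filter add dev eth0 parent 1: protocol ip handle $mark fw flowid 1:$mark"
}

container_tc_cmds 2 1mbit
```

Piping the output to sh (as root) would apply it for real.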
I'm using a custom container creation script based on the Ubuntu one:
http://andre.people.digirati.com.br/lxc-create.sh
It sets up the bandwidth limit for each container and populates the
container's rootfs (there is a usage message :). It creates a
configuration like this:
lxc.utsname = c2
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv4 = 192.168.0.2/16 192.168.255.255
lxc.network.name = eth0
lxc.network.veth.pair = veth0.2
lxc.tty = 4
lxc.pts = 1024
lxc.rootfs = /var/lib/lxc/c2/rootfs
lxc.mount = /var/lib/lxc/c2/fstab
lxc.cgroup.devices.deny = a
# /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
# consoles
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
#lxc.cgroup.devices.allow = c 4:0 rwm
#lxc.cgroup.devices.allow = c 4:1 rwm
# /dev/{,u}random
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
# rtc
lxc.cgroup.devices.allow = c 254:0 rwm
# capabilities
lxc.cap.drop = audit_control audit_write fsetid kill ipc_lock
ipc_owner lease linux_immutable mac_admin mac_override net_bind_service
mknod setfcap setpcap sys_admin sys_boot sys_module sys_nice sys_pacct
sys_ptrace sys_rawio sys_resource sys_time sys_tty_config
/bin /var/lib/lxc/c2/rootfs/bin ext4 bind,ro 0 0
/lib /var/lib/lxc/c2/rootfs/lib ext4 bind,ro 0 0
/lib64 /var/lib/lxc/c2/rootfs/lib64 ext4 bind,ro 0 0
/sbin /var/lib/lxc/c2/rootfs/sbin ext4 bind,ro 0 0
/usr /var/lib/lxc/c2/rootfs/usr ext4 bind,ro 0 0
/etc/environment /var/lib/lxc/c2/rootfs/etc/environment none bind,ro 0 0
/etc/resolv.conf /var/lib/lxc/c2/rootfs/etc/resolv.conf none bind,ro 0 0
/etc/localtime /var/lib/lxc/c2/rootfs/etc/localtime none bind,ro 0 0
/etc/network/if-down.d /var/lib/lxc/c2/rootfs/etc/network/if-down.d none bind,ro 0 0
/etc/network/if-post-down.d /var/lib/lxc/c2/rootfs/etc/network/if-post-down.d none bind,ro 0 0
/etc/network/if-pre-up.d /var/lib/lxc/c2/rootfs/etc/network/if-pre-up.d none bind,ro 0 0
/etc/network/if-up.d /var/lib/lxc/c2/rootfs/etc/network/if-up.d none bind,ro 0 0
/etc/login.defs /var/lib/lxc/c2/rootfs/etc/login.defs none bind,ro 0 0
/etc/securetty /var/lib/lxc/c2/rootfs/etc/securetty none bind,ro 0 0
/etc/pam.conf /var/lib/lxc/c2/rootfs/etc/pam.conf none bind,ro 0 0
/etc/pam.d /var/lib/lxc/c2/rootfs/etc/pam.d none bind,ro 0 0
/etc/security /var/lib/lxc/c2/rootfs/etc/security none bind,ro 0 0
/etc/alternatives /var/lib/lxc/c2/rootfs/etc/alternatives none bind,ro 0 0
proc /var/lib/lxc/c2/rootfs/proc proc ro,nodev,noexec,nosuid 0 0
devpts /var/lib/lxc/c2/rootfs/dev/pts devpts defaults 0 0
sysfs /var/lib/lxc/c2/rootfs/sys sysfs defaults 0 0
I think that's all. If you need any more info feel free to ask :)
Thanks Andre !
Jäkel, Guido
2011-08-15 07:52:46 UTC
Post by Gary Ballantyne
Hi
Going back through the list, I couldn't find whether this has been resolved.
# lxc-start -n gary
lxc-start: Too many open files - failed to inotify_init
lxc-start: failed to add utmp handler to mainloop
lxc-start: mainloop exited with an error
lxc-start: Device or resource busy - failed to remove cgroup '/cgroup/gary'
Dear Gary,

did you (re-)configure /etc/security/limits.conf on the LXC host to have an adequate value for file handles in such an environment? E.g.:

[...]
* hard nofile 65536
* soft nofile 65000
[...]

Greetings

Guido
Gary Ballantyne
2011-08-15 18:38:32 UTC
Post by Jäkel, Guido
Post by Gary Ballantyne
[ ... ]
Dear Gary,
[...]
* hard nofile 65536
* soft nofile 65000
[...]
Greetings
Guido
Thanks Guido.

I didn't know about that. I made the change you suggested (there was a
slight snag on Ubuntu:
http://serverfault.com/questions/235356/open-file-descriptor-limits-conf-setting-isnt-read-by-ulimit-even-when-pam-limit
). The ulimit output is pasted below.

Unfortunately, I am still getting the same errors with a little over 40
containers.

Any further ideas?

Cheers

Gary

# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 20
file size (blocks, -f) unlimited
pending signals (-i) 16382
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 65000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Andre Nathan
2011-08-15 18:52:15 UTC
Hi Gary
Post by Gary Ballantyne
Unfortunately, I am still getting the same errors with a little over 40
containers.
I also had this problem. It was solved after Daniel suggested that I
increase the following sysctl setting:

fs.inotify.max_user_instances

HTH,
Andre
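As a rough sanity check before starting the containers, the limit has
to comfortably exceed the container count, since each lxc-start needs
its own inotify instance. A sketch; the 128-instance headroom for other
inotify users is an arbitrary assumption, not a documented figure:

```shell
#!/bin/sh
# Compare fs.inotify.max_user_instances against the number of
# containers we intend to run, with some slack for other daemons.
instances_needed() {
    containers=$1
    echo $(( containers + 128 ))  # arbitrary headroom
}

# 128 is the kernel's historical default if the file is unreadable here.
current=$(cat /proc/sys/fs/inotify/max_user_instances 2>/dev/null || echo 128)
needed=$(instances_needed 40)
if [ "$current" -lt "$needed" ]; then
    echo "raise fs.inotify.max_user_instances to at least $needed"
fi
```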
Gary Ballantyne
2011-08-15 20:05:05 UTC
Post by Andre Nathan
Hi Gary
Post by Gary Ballantyne
Unfortunately, I am still getting the same errors with a little over 40
containers.
I also had this problem. It was solved after Daniel suggested that I increase
fs.inotify.max_user_instances
HTH,
Andre
Hi Andre

That did it, thanks very much.

With:

echo 1024 > /proc/sys/fs/inotify/max_user_instances

I can fire up (at least) 100 containers.

Cheers

Gary
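To make the echo above survive a reboot, the usual approach is the
equivalent sysctl.conf entry with the same value:

```
# /etc/sysctl.conf
fs.inotify.max_user_instances = 1024
```

Running sysctl -p then applies it, matching what the echo wrote to /proc.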
Daniel Lezcano
2011-08-15 22:39:08 UTC
Post by Gary Ballantyne
[ ... ]
Hi Andre
That did it, thanks very much.
echo 1024 > /proc/sys/fs/inotify/max_user_instances
I can fire up (at least) 100 containers.
FYI, the maximum number of containers I reached was 1024 (the hard
limit on the number of bridge ports). I did not try to run more.
Jäkel, Guido
2011-08-16 06:47:24 UTC
Dear Daniel,

What about adding little hints to such error messages? Something like:

"Too many open files - failed to inotify_init. You may have to increase the value of fs.inotify.max_user_instances"


And maybe these possible traps should be pointed out in the man page.


Another trap is a deficit of entropy, which may result in slow startup or freezing of containers (more precisely, of their applications, e.g. the nearly ever-present sshd):

watch -n 1 cat /proc/sys/kernel/random/entropy_avail

In my experience, on a typical server running a bunch of SSL-enabled daemons this happens quite often; I use rngd to avoid it.
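The watch one-liner above can be wrapped into a small check; note that
the 200-bit threshold below is a guess for illustration, not a
documented cutoff:

```shell
#!/bin/sh
# Warn when the kernel entropy pool looks too shallow to feed many
# ssl-enabled daemons starting inside containers at once.
entropy_low() {
    [ "$1" -lt 200 ]   # threshold is an assumption
}

avail=$(cat /proc/sys/kernel/random/entropy_avail 2>/dev/null || echo 0)
if entropy_low "$avail"; then
    echo "entropy pool low ($avail bits); consider running rngd"
fi
```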

Guido
Serge Hallyn
2011-08-16 13:16:43 UTC
Post by Jäkel, Guido
Dear Daniel,
What about adding little hints to such error messages? Something like:
"Too many open files - failed to inotify_init. You may have to increase the value of fs.inotify.max_user_instances"
And maybe these possible traps should be pointed out in the man page.
Another trap is a deficit of entropy, which may result in slow startup or freezing of containers (more precisely, of their applications, e.g. the nearly ever-present sshd):
watch -n 1 cat /proc/sys/kernel/random/entropy_avail
In my experience, on a typical server running a bunch of SSL-enabled daemons this happens quite often; I use rngd to avoid it.
That sounds very helpful. Would you mind sending patches for those?

thanks,
-serge

Rob Landley
2011-08-09 15:40:50 UTC
Post by Kirill A. Shutemov
[ ... ]
Rob, do you have any status update? I don't follow nfs maillist currently.
Not that I know of. I submitted the NFSv3 fixes to the NFS guys ~3
times and never got a response. (As far as I could tell, I didn't
address NFSv4, therefore they didn't care.)

I'm a touch swamped at the moment, but I can dig up and resubmit this
weekend if you'd like.

Rob
Daniel Lezcano
2011-08-09 15:58:41 UTC
Post by Rob Landley
[ ... ]
Not that I know of, I submitted the NFSv3 fixes to the NFS guys ~3 times
and never got a response. (As far as I could tell I didn't address
NFSv4, therefore they didn't care.)
I'm a touch swamped at the moment, but I can dig up and resubmit this
weekend if you'd like.
That would be great! Can you Cc me?