wget only downloads index - terminal

I'm trying to download the contents of a website. Ideally I would be able to browse the website offline, but at the least I hope to download the PDFs hosted there without fetching each one individually. In the past wget --mirror -p --html-extension --convert-links --content-disposition https://travelcasper.wixsite.com/samuelfu has been sufficient for other sites, but on this site I only get a single .html file which opens completely blank.
I have since beefed up the command to
wget --mirror -p --span-hosts --html-extension --convert-links --content-disposition -e robots=off -U 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv: Gecko/20070802 SeaMonkey/1.1.4' -r --no-check-certificate --no-cookies --no-parent --random-wait https://travelcasper.wixsite.com/samuelfu
which downloads only a few extra .js files, and the .html file still opens to a blank page.
Thanks for the help!


Blank page on installing Grav

I have downloaded the Grav zip file and extracted it to my web server (localhost). I am using Fedora with PHP version 5.6.23. Navigating to localhost/grav in the browser shows a blank page. Can anyone help me solve this?
Give read and write permissions to the grav folder (your Grav site name).
From a terminal, use the following commands on a Fedora machine if you are running on localhost.
sh-4.4$ cd /var/www/html/
sh-4.4$ chmod -R 777 grav
Check /var/log/httpd/error_log; if you see errors about file permissions, SELinux may have prevented the web server from writing its configuration files. This often happens on a fresh installation of Fedora/Apache.
You can try this:
From a terminal, run su -c "dnf install policycoreutils-python-utils" to install the policycoreutils-python-utils package, which provides semanage
Run su -c "semanage permissive -a httpd_t" to put Apache's SELinux domain in permissive mode, allowing Apache to write to its public folder.
Now refresh the page to see if Grav is now running.
If this doesn't work, or you later want to reverse the command semanage permissive -a httpd_t, run semanage permissive -d httpd_t
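Putting the steps above together, a possible recovery sequence looks like this (a sketch assuming the default Apache layout on Fedora and that Grav was extracted to /var/www/html/grav; run as root or via su/sudo):

```shell
# Permissions fix from the first answer
cd /var/www/html
chmod -R 777 grav

# Install the package that provides semanage
dnf install policycoreutils-python-utils

# Put Apache's SELinux domain in permissive mode, then restart Apache
semanage permissive -a httpd_t
systemctl restart httpd

# To undo the permissive setting later:
# semanage permissive -d httpd_t
```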

Docker for Mac nginx example doesn't run

Mac 10.11.5 here. I am specifically trying to install Docker for Mac (not Docker Toolbox or any other offering). I followed all the instructions on their Installation page, and everything went fine until the step where they ask you to try running an nginx server (Step 3. Explore the application and run examples).
Running docker run hello-world worked beautifully with no problems at all. I was able to see the correct console output that was expected for that image.
However, they then ask you to try running an nginx instance:
docker run -d -p 80:80 --name webserver nginx
I ran this and got no errors; the console output was as expected:
Unable to find image 'nginx:latest' locally latest: Pulling from library/nginx
51f5c6a04d83: Pull complete
a3ed95caeb02: Pull complete
51d229e136d0: Pull complete
bcd41daec8cc: Pull complete
Digest: sha256:0fe6413f3e30fcc5920bc8fa769280975b10b1c26721de956e1428b9e2f29d04
Status: Downloaded newer image for nginx:latest
So then I ran docker ps:
ae8ee4595f47 nginx "nginx -g 'daemon off" 12 seconds ago Up 10 seconds 0.0.0.0:80->80/tcp, 443/tcp webserver
So far so good. But then I open up my browser and point it to http://localhost and I get (in Chrome):
Any ideas where I'm going awry? I waited 5 mins just to give nginx/docker ample time to start up, but that doesn't change anything.
For tracking purposes, the related GitHub issue:
The image exposes 80 as the httpd port https://github.com/nginxinc/docker-nginx/blob/11fc019b2be3ad51ba5d097b1857a099c4056213/mainline/jessie/Dockerfile#L25
So using -p 80:80 should work and does work for me:
docker run -p 80:80 nginx - - [22/Aug/2016:17:26:32 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36" "-"
So most probably either you are already running an httpd container on the host, so the port cannot be bound (this should be visible during startup), or you have an issue with localhost - does work? You might have an IPv6 issue then.
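To check whether something on the Mac host already holds port 80, and what Docker actually mapped, something like this should help (a sketch; lsof ships with macOS, and "webserver" is the container name used above):

```shell
# Is anything on the host already listening on TCP port 80?
lsof -nP -iTCP:80 -sTCP:LISTEN

# What ports did Docker actually publish for the container?
docker port webserver

# Does the server answer when addressed explicitly over IPv4?
curl -I http://127.0.0.1:80/
```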
Or better, use a docker-compose.yml file:
version: '2'
services:
  webserver:   # service name is illustrative
    image: nginx
    ports:
      - '80:80'
and then start it with docker-compose up - you can then easily add other services, like a Tomcat, a Puma server, or an FPM upstream, whatever app you might have.
curl -XGET `docker-machine ip`:80
I had the same problem in Chrome, but it worked in Safari.
After pulling my hair for a while, I found out that my machine was connected to a proxy, and that was the problem.
I had to stop the server, remove the image, turn off the proxy, and pull it again.

Docker run results in “host not found in upstream” error

I have a frontend-only web application hosted in Docker. The backend already exists but it has "custom IP" address, so I had to update my local /etc/hosts file to access it. So, from my local machine I am able to access the backend API without problem.
But the problem is that Docker somehow cannot resolve this "custom IP", even when the host is written in the container's (image's?) /etc/hosts file.
When the Docker container starts up I see this error
$ docker run media-saturn:dev
2016/05/11 07:26:46 [emerg] 1#1: host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
nginx: [emerg] host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
I update the /etc/hosts file via a command in the Dockerfile, like this
# install wget
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
# The trick is to add the hostname on the same line as you use it, otherwise the hosts file will get reset, since every RUN command starts a new intermediate container
# it has to be https, otherwise authentication is required
RUN echo " my-server-address.com" >> /etc/hosts && wget https://my-server-address.com
When I ssh into the machine to check the current content of /etc/hosts, the line " my-server-address.com" is indeed there.
Can anyone help me out with this? I am a Docker newbie.
I have solved this. There are two things at play.
One is how it works locally and the other is how it works in Docker Cloud.
Local workflow
cd into root of project, where Dockerfile is located
build image: docker build -t media-saturn:dev .
run the built image: docker run -it --add-host="my-server-address.com:" -p 80:80 media-saturn:dev
Docker cloud workflow
Add an extra_hosts directive to your Stackfile, like this
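A minimal sketch of such a Stackfile entry (the service name matches the image used above; the IP is a placeholder, not the original setup's address):

```
media-saturn:
  image: 'media-saturn:dev'
  extra_hosts:
    - 'my-server-address.com:1.2.3.4'   # 1.2.3.4 is a placeholder IP
```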
and then click Redeploy in Docker Cloud, so that the changes take effect
Optimization tip
ignore as many folders as possible to speed up sending the build context to the Docker daemon
add a .dockerignore file
typically you want to ignore folders like node_modules, bower_components and tmp
in my case tmp contained about 1.3 GB of small files, so ignoring it sped up the build significantly
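For illustration, a .dockerignore along those lines might look like this (the folder names are typical examples, not taken from the original project):

```
node_modules
bower_components
tmp
.git
```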

How do you enable Perl on Ubuntu with Apache2 [closed]

I have installed Ubuntu with Apache on my PC and everything works great, except that I don't know how to enable Perl; everything I have tried either gave me a server error or a 403 for the Perl scripts.
Please tell me how to enable Perl. Thanks!
Place your files in /usr/lib/cgi-bin, make them executable and change the owner and group to www-data:
sudo cp myscript.pl /usr/lib/cgi-bin/
sudo chown www-data:www-data /usr/lib/cgi-bin/myscript.pl
sudo chmod 0755 /usr/lib/cgi-bin/myscript.pl
I prefer to enable the "AddHandler cgi-script .cgi" line in /etc/apache2/mods-available/mime.conf by removing the "#" in front of it and setting "Options +ExecCGI" for the directories below /var/www where scripts should be executed. But beware: Everything executable ending with ".cgi" will be executed as a cgi script this way.
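As a sketch, the pieces described above could live in an Apache config block like this (the directory path is illustrative; on Ubuntu you also need the CGI module enabled, e.g. sudo a2enmod cgi, before restarting Apache):

```
# Illustrative snippet; adjust the path to where your scripts live.
<Directory /var/www/scripts>
    Options +ExecCGI
    AddHandler cgi-script .cgi .pl
</Directory>
```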
The issue is likely not a problem with Perl. Rather, your Apache2 installation may not be configured to parse .pl or .cgi files. You should review the Apache Web Server documentation as well as this SO article.

Curl command not loading contents in windows 7

I have installed Cygwin and curl (through the Cygwin installer) on my Windows 7 32-bit machine. I then opened a Cygwin terminal and typed curl --help. Everything works fine, with curl showing its list of command arguments.
But curl http://www.google.com (or any other URL) takes a long time and results in "curl: (52) Empty reply from server". What is the problem?
Update: I am behind a proxy server. Could that be the problem?
If your proxy server is the only way to get out to the web, then yes, that's the problem. curl couldn't care less what OS you're running under, so it won't look for OS-specific proxy settings (i.e. Internet Explorer options); it will try to make a direct connection. Try adding the --proxy option (curl --help will give the format).
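For example (the proxy host and port below are placeholders for your actual proxy):

```shell
# Route the request through an HTTP proxy explicitly:
curl --proxy http://proxy.example.com:8080 http://www.google.com

# Alternatively, curl honours the http_proxy environment variable:
export http_proxy=http://proxy.example.com:8080
curl http://www.google.com
```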