
I am trying to restore a Postgres data dump for a Django app of
mine. The dump was extracted from Heroku, and pg_restore is being
run on an Azure VM running Linux. There are around 40 tables and
the total size doesn't exceed 2 GB. I've tried two approaches; both
have failed. Can an expert point out what the problem might be?
Note that the Postgres data dump is called
latest.dump and resides at /home/myuser/ on
my Linux VM.


APPROACH 1:

I switch user to postgres via sudo su
and then go into psql. There I run
CREATE DATABASE mydatabase;. Next I quit
psql and run the following command as user postgres:
pg_restore latest.dump -d mydatabase -U postgres. The
process runs, but in the end I get:

WARNING: errors ignored on restore: 75

Virtually all the errors I got were of the 'role does not exist' variety. For example:

pg_restore: [archiver (db)] Error from TOC entry 241; 1259
44416 TABLE links_grouptraffic uauvuro0s8b9v4
pg_restore: [archiver (db)] could not execute query: ERROR: role
"uauvuro0s8b9v4" does not exist
Command was: ALTER TABLE public.links_grouptraffic OWNER TO

Note that 'uauvuro0s8b9v4' is the user on Heroku; I haven't created
such a user on Azure.
When I run my Django app at example.cloudapp.net, I see a
permission denied error. The full body is something like:

Exception Type: DatabaseError Exception Value:

permission denied for relation links_link

Exception Location:

in execute, line 54
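One way to avoid the ownership errors above (a sketch, not the poster's exact commands; the database and dump names are taken from the question) is to strip ownership during the restore and hand every object to the restoring role:

```shell
# Sketch: restore while ignoring the Heroku-specific role.
# --no-owner skips the "ALTER ... OWNER TO" statements that fail,
# --role makes restored objects owned by that role instead.
pg_restore --no-owner --role=postgres -U postgres -d mydatabase latest.dump
```

With ownership normalized to a role that exists locally, the permission-denied error in the Django app should not come from missing Heroku roles.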


APPROACH 2:

This time, I again create a fresh database via CREATE
DATABASE mydatabase; in psql. Then I come out of
psql and run python manage.py syncdb (notice I didn't do
that in APPROACH 1). A bunch of tables are created as a result. I
select yes to 'would you like to create a superuser?',
give the necessary details, and it's created for me. Next, I
run python manage.py migrate djcelery and python
manage.py migrate user_sessions to migrate two external
packages. Thus my table structure is complete.

I then proceed to run pg_restore latest.dump -d mydatabase -U postgres
again. This time the command ends with:

WARNING: errors ignored on restore: 333.

If I go to example.cloudapp.net to test my app, I get no error, but
no data was restored either. The following
is a sampling of the errors seen while pg_restore
is running:

1) Relation already exists:

pg_restore: [archiver (db)] Error from TOC entry 242; 1259
44432 SEQUENCE links_groupinvite_id_seq uauvuro0s8b9v4
pg_restore: [archiver (db)] could not execute query: ERROR: relation
"links_groupinvite_id_seq" already exists
Command was: CREATE SEQUENCE links_groupinvite_id_seq

2) Foreign key constraint violated:

pg_restore: [archiver (db)] Error from TOC entry 2572; 0
44416 TABLE DATA links_grouptraffic uauvuro0s8b9v4
pg_restore: [archiver (db)] COPY failed for table
"links_grouptraffic": ERROR: insert or update on table
"links_grouptraffic" violates foreign key constraint

3) Relation already exists:

pg_restore: [archiver (db)] Error from TOC entry 2273; 1259
16773 INDEX links_link_submitter_id uauvuro0s8b9v4
pg_restore: [archiver (db)] could not execute query: ERROR: relation
"links_link_submitter_id" already exists
Command was: CREATE INDEX links_link_submitter_id ON links_link
USING btree (submitter_id);

4) Constraint for relation already exists:

pg_restore: [archiver (db)] Error from TOC entry 2372; 2606
16881 FK CONSTRAINT links_userprofile_user_id_fkey uauvuro0s8b9v4
pg_restore: [archiver (db)] could not execute query: ERROR:
constraint "links_userprofile_user_id_fkey" for relation
"links_userprofile" already exists
Command was: ALTER TABLE ONLY links_userprofile
ADD CONSTRAINT links_userprofile_user_id_fkey FOREIGN KEY
(user_id) REFERENCES auth_u...

Can an expert point out what I'm doing wrong, and what's the right
thing to do here?
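Since syncdb and the migrations have already created the schema in this second approach, the "already exists" errors are expected; a data-only restore is one way to sidestep them (a sketch, assuming the same database and dump names as above):

```shell
# Sketch: load only table contents into the schema Django created.
# --data-only skips the CREATE TABLE/SEQUENCE/INDEX statements,
# --disable-triggers defers FK checks so row insertion order
# doesn't matter (requires connecting as a superuser).
pg_restore --data-only --disable-triggers --no-owner \
    -U postgres -d mydatabase latest.dump
```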

Note: please ask for more information in case you need it.
My question is: how can I use SetEnv with an already defined environment variable?

For example:


export SSL_ROOT_DIR=/etc/letsencrypt/live
export DEFAULT_HOME_DIR=/var/www/html


SetEnv SERVERNAME domain.tld
DocumentRoot ${HOME_DIR}
<Directory "${HOME_DIR}">
SSLCertificateFile ${SSL_DIR}/cert.pem
SSLCertificateKeyFile ${SSL_DIR}/privkey.pem
SSLCertificateChainFile ${SSL_DIR}/chain.pem

Without the env variables, this config works!

Error output

[core:warn] [pid 13844] AH00111: Config variable ${SERVERNAME} is not defined
[core:warn] [pid 13844] AH00111: Config variable ${SERVERNAME} is not defined
[core:warn] [pid 13844] AH00111: Config variable ${SERVERNAME} is not defined
[core:warn] [pid 13844] AH00111: Config variable ${SERVERNAME} is not defined
[core:warn] [pid 13844] AH00111: Config variable ${HOME_DIR} is not defined
[core:warn] [pid 13844] AH00111: Config variable ${HOME_DIR} is not defined
[core:warn] [pid 13844] AH00111: Config variable ${LOG_DIR} is not defined
[core:warn] [pid 13844] AH00111: Config variable ${LOG_DIR} is not defined
[core:warn] [pid 13844] AH00111: Config variable ${SSL_DIR} is not defined
[core:warn] [pid 13844] AH00111: Config variable ${SSL_DIR} is not defined
[core:warn] [pid 13844] AH00111: Config variable ${SSL_DIR} is not defined
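For what it's worth, ${VAR} in httpd config files is expanded from variables created with the Define directive or from the shell environment present when httpd starts, not from SetEnv (which only sets per-request environment variables for CGI and handlers). A minimal sketch, reusing the variable names the config above references:

```
# Either define the variables in the config itself ...
Define SSL_DIR  /etc/letsencrypt/live/domain.tld
Define HOME_DIR /var/www/html

DocumentRoot ${HOME_DIR}
SSLCertificateFile ${SSL_DIR}/cert.pem

# ... or export them (e.g. in /etc/apache2/envvars on Debian)
# before starting Apache, using the same names the config expects.
```

Note also that the exports shown above use SSL_ROOT_DIR and DEFAULT_HOME_DIR while the config references ${SSL_DIR} and ${HOME_DIR}; the names must match.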

I have a vagrant box running nginx + php-fpm. But it
seems like they don't work. When I try to GET / on my server
I receive a 404 HTTP status with a "File not found." body. Error log:
2015/12/25 15:34:00 [error] 9594#0: *9 FastCGI sent in stderr:
"Primary script unknown" while reading response header from
client:, server: local.bodystore.com, request: "GET /
HTTP/1.1", upstream: "fastcgi://", host:

So as I understand it, this is a problem with the nginx config file. I
have no experience with nginx, so could anyone help me
with it?
Here is the config:


user vagrant;
worker_processes 1;

error_log /var/log/nginx/error.log;
pid

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type

    access_log /var/log/nginx/access.log;

    sendfile off;
    tcp_nopush on;
    tcp_nodelay on;

    keepalive_timeout 65;

    gzip on;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_vary off;
    gzip_types text/plain text/css application/x-javascript text/xml
        application/xml application/rss+xml application/atom+xml
        text/javascript application/javascript application/json text/mathml;
    gzip_min_length 1000;
    gzip_disable "MSIE [1-6].";

    server_names_hash_bucket_size 64;
    types_hash_max_size 2048;
    types_hash_bucket_size 64;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

my site config:

server {
    listen 80 default;
    server_name local.mysite.com;
    root /vagrant/src/magento;
    access_log /etc/nginx/magento.log;
    error_log /etc/nginx/magento_error.log;

    location / {
        index index.php index.html;   # Allow a static html file to be shown first
        try_files $uri $uri/ @handler; # If missing, pass the URI to Magento's front handler
        expires 30d;                   # Assume all files are cachable
    }

    # These locations would be hidden by .htaccess normally
    location ^~ /app/                { deny all; }
    location ^~ /includes/           { deny all; }
    location ^~ /lib/                { deny all; }
    location ^~ /media/downloadable/ { deny all; }
    location ^~ /pkginfo/            { deny all; }
    location ^~ /report/config.xml   { deny all; }
    location ^~ /shell/              { deny all; }
    location ^~ /var/                { deny all; }

    location /. {   # Disable .htaccess and other hidden files
        return 404;
    }

    location @handler {   # Magento uses a common front handler
        rewrite / /index.php;
    }

    location ~ \.php/ {   # Forward paths like /js/index.php/x.js to relevant handler
        rewrite ^(.*\.php)/ $1 last;
    }

    location ~ \.php$ {   # Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } # Catch 404s that try_files misses
        try_files $uri =404;
        #fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME
        include fastcgi_params;
        expires off;

        fastcgi_param SCRIPT_FILENAME
        fastcgi_param MAGE_IS_DEVELOPER_MODE 1;
        include fastcgi_params;
    }
}

As you can see, I tried to use unix sockets instead of TCP, but I think
that part is already fine. I need to find the issue in the rest of the config.
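"Primary script unknown" generally means php-fpm received a SCRIPT_FILENAME that doesn't resolve to an existing file, so the truncated fastcgi_param lines above are the first place to look. For comparison, a minimal PHP location block (a sketch, assuming the socket path from the commented-out line in the config):

```
location ~ \.php$ {
    try_files $uri =404;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    # $document_root$fastcgi_script_name expands to the script's
    # absolute path, which is what php-fpm needs to find the file.
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
```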


I just bought a domain name for my web site from GoDaddy.

The content of the site is available at a public IP address.

How do I link my domain name to the IP address?

How do I make it so that when someone types in my domain name, the
content of the web server is displayed?
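For reference, this is done with an A record in the domain's DNS zone (editable under GoDaddy's DNS management). A sketch using a placeholder documentation IP:

```
; Hypothetical zone entries -- replace 203.0.113.10 with the server's real IP.
@    IN  A  203.0.113.10   ; yourdomain.tld        -> web server
www  IN  A  203.0.113.10   ; www.yourdomain.tld    -> same server
```

DNS changes can take some time to propagate before the name resolves everywhere.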


How do you know if a site-to-site VPN tunnel is established?
Apart from pinging the other side, is there a command or something
that shows the status of the tunnel?

I am using CentOS on both ends.
Very new to this!
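Assuming an IPsec tunnel managed by Openswan/Libreswan (common on CentOS; the question doesn't say which VPN software is in use), a couple of status commands worth knowing:

```shell
# Show loaded connections and whether IPsec SAs are established
ipsec status          # look for "IPsec SA established" in the output

# Kernel-level view: active security associations and policies;
# an established tunnel shows SAs for the peer addresses here.
ip xfrm state
ip xfrm policy
```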

I am trying to install .NET 3.5 on Windows Server 2012 and it
constantly keeps failing. I am using "Add or Remove Features" and my
Internet connection is already there. I've read that if an alternate source can't
be found, the installer tries to download it online and installs it from
there. However, it's not working. This is the screenshot I keep getting:

[screenshot of the failed installation]

Please suggest what I am missing.


I already tried using dism.exe /online /enable-feature
/featurename:NetFX3 /Source:D:\sources\sxs /all
but I do not have
the source disk with me. I want to download it online.
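For reference (not tested against this particular failure), DISM will try to pull the .NET 3.5 payload from Windows Update when /Source is omitted and /LimitAccess is not passed; a sketch:

```
rem Let DISM download the .NET 3.5 payload from Windows Update.
rem (Requires that Group Policy isn't forcing an internal WSUS source
rem  for optional component installation.)
dism.exe /online /enable-feature /featurename:NetFX3 /all
```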

Specifically for an Azure VM: what is the recommended way to upgrade
from a Server 2012 VM to Server 2012 R2 on the same VM?

I prefer not to create a new 2012 R2 VM and would really rather
upgrade my existing Server 2012, as I have installed various software
and configurations on it.

This question was also asked last year on the MSDN forums
(http://social.msdn.microsoft.com/Forums/windowsazure/en-US/15e8a17d-0004-4337-a74d-1aa47df4e92d/server-2012-r2-upgrade?forum=WAVirtualMachinesforWindows)
and remains unanswered.

I've been trying to set up a Debian mail server running Postfix,
but when I try to send mail via mail example@outlook.com,
the recipient sees the sender as hostname.domain.com. The hostname it
sends does not have an A or MX record pointing to it; it is just the
/etc/hostname. I can manually set the sender to admin@mydomain.com if
I send by logging in via telnet localhost 25.

my main.cf

myorigin = mydomian.com
myhostname = mail.mydomain.com
mydestination = mail.mydomain.com, mydomain.com, localhost,
relayhost =
mynetworks = [::ffff:]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases

smtpd_tls_session_cache_database =
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtpd_tls_protocols = !SSLv2, !SSLv3
local_recipient_maps = proxy:unix:passwd.byname $alias_maps

and mail.log when I send out an email

Aug  3 06:28:51 hostname postfix/pickup[7047]: 4D5432023A:
uid=1000 from=<user@hostname>
Aug 3 06:28:51 hostname postfix/cleanup[7065]: 4D5432023A:
Aug 3 06:28:51 hostname postfix/qmgr[7048]: 4D5432023A:
from=<user@hostname.mydomain.com>, size=339, nrcpt=1 (queue
Aug 3 06:28:52 hostname postfix/smtp[7067]: 4D5432023A:
relay=mail.destinationserver.com[IP]:25, delay=1.4,
delays=0.11/0.01/0.49/0.78, dsn=2.0.0, status=sent (250 Queued (0.110
Aug 3 06:28:52 hostname postfix/qmgr[7048]: 4D5432023A: removed

I've tried setting masquerade_domains = mydomain.com,
but mail gets flagged as spam in Thunderbird and Gmail when I do.

Headers from a message sent by my server:

    Return-Path: user@hostname.mydomain.com
Received: from mail.mydomain.com (DESTINATION [])
by mail.destination.com
; Sun, 3 Aug 2014 08:10:06 +0200
Received: by mail.mydomain.com (Postfix, from userid 1000)
id 6D7A68033A; Sun, 3 Aug 2014 08:10:27 +0200 (CEST)
To: <destination@outlook.com>
Subject: test
X-Mailer: mail (GNU Mailutils 2.99.97)
Message-Id: <20140803061027.6D7A68033A@mail.mydomain.com>
Date: Sun, 3 Aug 2014 08:10:27 +0200 (CEST)
From: user@hostname.mydomain.com
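One way to rewrite the hostname-based addresses on outgoing mail is Postfix's generic address mapping (a sketch; whether it's needed depends on why myorigin isn't being applied — note the main.cf above spells it "mydomian.com", which looks like a typo):

```
# main.cf -- rewrite internal addresses on outbound SMTP
smtp_generic_maps = hash:/etc/postfix/generic

# /etc/postfix/generic -- hypothetical mapping
@hostname.mydomain.com    admin@mydomain.com
```

After editing the map, run postmap /etc/postfix/generic and reload Postfix.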


I have a large DFS structure set up between multiple remote sites
and multiple hub servers in a full mesh topology. Each remote site has
its own namespace or namespaces and replication group. I have an
automated PowerShell script that collects the Environment Health
Report of all DFS servers on the network and alerts me to backlogs.
I noticed this morning that one particular site had a broken
topology connection between itself and its hub server. The error was
the typical "There is a disconnected topology between servers in this
replication group", or whatever the standard verbiage is in the DFS
Management Console. This had been broken for some time. Our backup
team only backs up from the hub, so the last modify date
on the backup files was around mid-2014. I had no way of knowing this
until I stumbled across it.

Is there something I can run that can trigger an alert on the
topology connections of a DFS setup? Whether it be a report or
something that I can hook SolarWinds into to generate an alert on?
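Since scheduled PowerShell is already in place here, one hedged starting point (assuming the DFSR PowerShell module from Server 2012 R2 / RSAT; cmdlet availability varies by OS version) is to enumerate replication group connections and flag any that are disabled:

```powershell
# Sketch: fail with a nonzero exit code when any DFSR connection
# is disabled, so a SolarWinds script monitor can alert on it.
$bad = Get-DfsrConnection | Where-Object { -not $_.Enabled }
if ($bad) {
    $bad | Format-Table GroupName, SourceComputerName, DestinationComputerName
    exit 1
}
exit 0
```

This only catches disabled connections, not members missing a connection entirely; comparing membership against connections would cover that case too.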

I have a situation where I am able to launch CentOS 6.6 images on a
subnet such that the VM instances get their IP addresses from the
virtual gateway of the subnet. Now this gateway has gone wonky, and I
don't have access to fix it, so I have set up my own DHCP server
on this subnet.

So now there are two DHCP servers on this subnet, and my VMs are
getting random IP addresses, sometimes from one DHCP server and
sometimes from the other. My question is: how can I configure the
DHCP client on my VMs so that they make DHCP requests only to my DHCP
server rather than the faulty one? man dhcp.conf has not
been very helpful.
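Assuming the VMs use ISC dhclient (the CentOS 6 default), the client configuration supports a reject statement that discards offers from a given server identifier; a sketch with a placeholder address:

```
# /etc/dhcp/dhclient.conf -- hypothetical address
# Discard DHCPOFFERs whose server identifier is the faulty gateway,
# so only the new DHCP server's offers are accepted.
reject 10.0.0.1;
```

The relevant man page is dhclient.conf(5) rather than dhcp.conf.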
